
Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports

Patterns and Anti-Patterns: Acceptance Testing with FitNesse
Patterns:


  • Organizing Tests – Allow customers to add new tests without breaking the build. FitNesse's symbolic links make this possible. Ideally, I would have a story tree [stories are not linear structures; they have relationships and hierarchy] under FitNesse. Each leaf of the tree is the smallest story that the developers work on, and for each leaf we define one or more acceptance tests (ATs). That is one view of the information. Another view is to look at the stories from the perspective of each iteration/sprint: you might want to know, for example, which stories are being played in iteration/sprint 2. So we create an Iteration/Sprint page at the same level as the root of the story tree. Under it, we create a separate page for each iteration/sprint, and once an acceptance test is ready and working [after the developers have written just enough code to make it pass], we symbolically link the test to that iteration/sprint page. Each iteration/sprint page is defined as a Test Suite that runs as part of the Continuous Integration build. This way, the customers can work on the ATs for their stories and check them in without breaking the build.
  • Version Control – Checking the acceptance tests into the same version control system as the code makes it very easy to reproduce the build at any point in time; tests always go hand-in-hand with the code. Unfortunately, FitNesse does not have built-in support for version control. I'm working on a framework that adds version control support, which will be part of the FitNesse project soon.
  • Cross-Functional – FitNesse was envisioned as a collaboration tool rather than just a testing tool. Using FitNesse-based acceptance tests for collaboration between cross-functional team members is a great way to help communication within the team. It also encourages everyone on the team to talk about the domain entities, and hence the domain language, at the very beginning of the story.
  • ATDD (Acceptance Test Driven Development) – In my opinion, the real value of ATs comes when we use them to express business intent and let that drive development. The way we have done this on our teams: the customer/product owner/BA writes the acceptance criteria (AC), usually before the beginning of the iteration/sprint, and the developers use them to estimate. Once the story is added to the current iteration/sprint backlog, a developer pair picks it up. They pair with the customer/product owner/BA/QA to write the acceptance tests in FitNesse, which helps the developers and QAs really understand the story. Then they part ways: the developers write just enough code [including the fixture code] to make the AT pass, while the QAs write other scenarios around the story. Once both are done, they have a touch point to make sure all the ATs are working. At that point the story is Dev complete and ready for BA and QA sign-offs.
  • Cleanup, Setup, Test, Teardown – We started with Setup, Test, and TearDown + Cleanup: we would clean up any external state of the system at the end of the test. External state includes data in the database, files on the file system, messages on queues, etc. But soon we realized that if a test failed, we had no way to know what the state was at the time of the failure, because the AT would clean up the state after its execution. Hence we changed to a pattern where we do not clean up at the end; instead, we clean up at the beginning and then set up whatever state the test needs.
  • Independent Tests – It's very easy to end up with tests that depend on each other; usually people do this to avoid duplication. But then a single failing test has a ripple effect across all the tests, and it becomes very difficult to find the root cause. We try really hard to create tests that don't depend on each other.
  • Dynamic Stubbing – It's very easy to fall into the trap of testing the system fully integrated with all external 3rd-party systems. While it would be great to have tests that are integrated and as close to reality as possible, there is a cost you need to pay for it: the time to execute, and the added complications of set-up, test execution, and teardown. A lot of the time it is better to avoid this extra cost by dynamically simulating the expected responses from the external systems. At the beginning of the AT, you set expectations on the external system, then run the AT to make sure your system behaves in the expected way [similar to the mock object approach].
  • Non-Production Code for Setup/Teardown – A lot of the time it's tempting to use production code to set up and tear down data. For example, if you are using an OR mapping tool, it seems easy to use the OR mapping production code to set up and tear down data. This is quite problematic if that code changes; for all you know, that code has nothing to do with what you are testing. So I prefer to use non-production, lightweight code for setup and teardown, which helps me test only what I want to test. For example, for DB data setup I would use simple JDBC or ODBC code. You can also use the DBFit project for the same.
  • Suite Levels – Creating different levels of suites, depending on the depth of feedback desired, is very important for getting quick feedback. Again, use the symbolic link concept to create different suite pages that include more tests or other suites. Ex: Smoke, Current Iteration/Sprint, and Regression suite pages are really useful.
  • DRY (Don't Repeat Yourself) – Use !include to avoid duplication and encourage once-and-only-once. If you have duplicate tables in your AT pages, refactor the tests by pulling the common tables out into a separate page. Then use !include to include the common page in each test page.
  • Make it Real – In an ideal world, you would run your ATs against a production box, but that's too costly to afford. So we try to strike a balance between cost + complexity and real + concrete feedback. The idea is to write ATs as close as possible to the real environment without incurring too much cost. Ex: execute ATs inside the app container if that's how your app is going to run.
  • Fixture Evolution – Allow fixture implementations to evolve over time and treat them as first-class citizens. I have seen teams write a wrapper framework with all possible fixtures before they actually write their first AT. Unfortunately, this does not fly; JIT (just-in-time) fixture creation works very well. Having said that, it's also important to treat your fixture code as a first-class citizen. Follow the general good OO principles: don't overuse statics, don't create hierarchies of fixtures just to share instance variables, use meaningful names, etc.
  • At Least One Test/Story – Every story should have at least one acceptance test. Avoid long/multipurpose tests.
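As a sketch, the page layout described under "Organizing Tests" and "Suite Levels" might look like the tree below. All page names here are made up for illustration; the symbolic links themselves are set up through FitNesse's page properties.

```text
FrontPage
 |-- AcceptanceTests                    (root of the story tree)
 |    `-- CustomerManagement            (epic)
 |         `-- RegisterCustomer         (leaf story)
 |              |-- HappyPathTest       (AT pages for the story)
 |              `-- DuplicateEmailTest
 |-- Iterations                         (second view: by iteration/sprint)
 |    `-- SprintTwo                     (Suite page, run by the CI build)
 |         `-- HappyPathTest            (symbolic link to the story's AT)
 |-- SmokeSuite                         (Suite of symlinks to critical ATs)
 `-- RegressionSuite                    (Suite covering all finished stories)
```

Because SprintTwo only links to ATs that already pass, customers can keep adding and editing tests under AcceptanceTests without affecting the CI build.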
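To make the "just enough fixture code" idea from the ATDD bullet concrete, here is a minimal sketch of a Fit-style column fixture for a hypothetical discount story. In a real project the class would extend fit.ColumnFixture, with table input columns mapped to the public fields and an output column discount() mapped to the method; it is written as a plain class here so the sketch is self-contained, and every name (CalculateDiscount, DiscountService, the discount rule) is invented.

```java
// Sketch of a Fit-style column fixture for a hypothetical "order discount" story.
class CalculateDiscount /* would extend fit.ColumnFixture */ {

    // Input columns: Fit populates these public fields from each table row.
    public double orderTotal;
    public String customerType;

    // Output column ("discount()" in the table): the fixture holds no business
    // logic of its own, it just delegates to the production service.
    public double discount() {
        return new DiscountService().discountFor(orderTotal, customerType);
    }

    // Hypothetical production service, stubbed here to keep the sketch runnable.
    static class DiscountService {
        double discountFor(double total, String type) {
            if ("gold".equals(type) && total >= 1000) return total * 0.10;
            return 0.0;
        }
    }

    public static void main(String[] args) {
        CalculateDiscount row = new CalculateDiscount();
        row.orderTotal = 1000;
        row.customerType = "gold";
        System.out.println(row.discount()); // prints 100.0
    }
}
```

Keeping the fixture this thin is what lets the same AT tables survive later changes to the implementation.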
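The "Cleanup, Setup, Test, Teardown" bullet can be sketched as follows, with a list standing in for external state (database rows, files, queue messages). This is an illustration of the pattern only; the names are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "clean up at the beginning" pattern: each test clears and seeds
// external state BEFORE it runs, and deliberately leaves state behind afterwards
// so a failure can still be diagnosed.
class CleanFirstTest {

    // Stand-in for external state the AT touches.
    static final List<String> store = new ArrayList<>();

    static void cleanup() { store.clear(); }             // runs first, not last
    static void setup()   { store.add("seed-customer"); }

    static boolean runTest() {
        cleanup();                  // wipe whatever the previous test left over
        setup();                    // establish exactly the state this test needs
        store.add("new-order");     // exercise the system under test
        return store.size() == 2;   // verify; no teardown afterwards, so on a
                                    // failure the store can still be inspected
    }

    public static void main(String[] args) {
        System.out.println(runTest());  // prints true
        System.out.println(store);      // state survives for post-mortem inspection
    }
}
```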
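The "Dynamic Stubbing" idea can be sketched like this: the AT primes a programmable stub of the external system with an expected response, then exercises the system under test against it. All names (CreditBureau, LoanApprover, the score threshold) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of dynamically simulating an external 3rd-party system.
class DynamicStubSketch {

    interface CreditBureau {                        // the external system
        int scoreFor(String customerId);
    }

    // Programmable stub: the AT sets expectations on it before running.
    static class StubCreditBureau implements CreditBureau {
        private final Map<String, Integer> canned = new HashMap<>();
        void expectScore(String customerId, int score) { canned.put(customerId, score); }
        public int scoreFor(String customerId) { return canned.getOrDefault(customerId, 0); }
    }

    // System under test, wired to the bureau through the interface.
    static class LoanApprover {
        private final CreditBureau bureau;
        LoanApprover(CreditBureau bureau) { this.bureau = bureau; }
        boolean approve(String customerId) { return bureau.scoreFor(customerId) >= 700; }
    }

    public static void main(String[] args) {
        StubCreditBureau bureau = new StubCreditBureau();
        bureau.expectScore("alice", 720);               // expectation set up front
        LoanApprover approver = new LoanApprover(bureau);
        System.out.println(approver.approve("alice"));  // prints true, no real call
    }
}
```

The test runs fast and needs no 3rd-party environment, at the cost of not exercising the real integration.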
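The DRY refactoring might look like this on a test page: the duplicated tables move to one shared page, and each test pulls them in with !include (the page path here is made up):

```text
!include .AcceptanceTests.CommonTables.CustomerSetup

|the test's own tables follow here|
```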

Anti-Patterns:

  • Developer AT Democracy – On a lot of teams, developers write acceptance tests for themselves and by themselves. That defeats the purpose of ATs: they are meant for communication and collaboration. They also happen to be good for testing, but if you only think of that aspect, you might miss the forest for the trees.
  • Unit Testing – Don't write ATs at the unit testing level. Unit tests are implementation-specific, while ATs are NOT implementation-specific. ATs express business intent, while unit tests express technical implementation intent.
  • QA Testing Tool – FitNesse is not a general-purpose QA testing tool. QAs can use it to write acceptance tests, but not other types of tests; keep the tool simple and let it do one job well. With its lack of support for test maintenance, it becomes a real issue for QAs to use this tool effectively. It is also very difficult to write UI-level ATs in FitNesse; you need a different tool. It's not a silver bullet: don't try to use FitNesse for all types of acceptance tests, like UI, XML testing, etc.
  • Test After – Writing tests after the code is already written does not give you enough value compared to writing them before and using them to drive development.
  • Hiding Test Data in Fixture Code – Hiding test data that affects behavior inside the fixtures is a bad idea. A lot of teams write FitNesse tests such that the test data is completely obscured from the user. Again, it's about communication: make sure your tests communicate intent [test data is a part of it].
  • Implementation-Dependent ATs – Making test pages (tables) dependent on implementation details and data structures is really troublesome. IMHO, ATs should be platform, technology, and implementation independent. I would say that if you build a system today with good ATs and, a few years down the line, decide to port the application to a new technology or architecture, then with some changes in the fixture code you should be able to run the ATs against the new system.
  • Logging in Fixture Code – Putting log or print statements in the fixture code is a real smell; it implies that your fixture code is probably too complicated. Having said that, the way FitNesse works makes ATs very difficult to debug. Have a look at the Patang project to see how we have solved this issue.
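To illustrate the "Hiding Test Data" anti-pattern, compare a table that buries the data in fixture code with one that exposes it. Fixture and column names are hypothetical, shown in classic Fit column-fixture style:

```text
Hidden data (anti-pattern): the order details live in the fixture code,
so the page says nothing about what is actually being tested:

!|GoldCustomerGetsCorrectDiscount|
|discount correct?|
|yes|

Visible data: the inputs that drive the behaviour sit in the table itself:

!|CalculateDiscount|
|order total|customer type|discount()|
|1000|gold|100|
|999|gold|0|
```

The second form doubles as documentation: a reader can see the boundary (1000) without opening the fixture source.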

    Licensed under
Creative Commons License