Metrics to Reduce Risk in Product Ship Decisions – Part IV – Case Study: DataFinder – Define System Test Entry Criteria (#4 in the series Metrics to Reduce Risk in Product Ship Decisions)
By Johanna Rothman
During the product definition phase, it is important to set the goals and requirements for a product. Goals are the things the product team wants to accomplish. Requirements are the things they must accomplish. In that same spirit, one can define entry criteria for the system test phase. Sample criteria may be:
System test entry criteria (requirements)
- Define white-box tests for all functionality defined in
- Define black-box tests for all other functionality.
- Automate all tests for one specific platform.
- No outstanding Critical bugs.
- The product must be able to accomplish the basic functionality as described in
- All modules must meet code freeze: all features are in and frozen, and all code is developed, integrated, and debugged from the unit level.
- All code reviews complete.
System test entry goals
- Automate all testing into a suite that can be run in a 4-hour period across all platforms.
- No more than 20 major bugs and 100 minor bugs.
- Code reviews complete.
- Unit tests developed for all code in
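Measurable criteria like the bug-count thresholds above lend themselves to automated checks. A minimal sketch in Python (the thresholds come from the lists above; the function names and the bug-count dictionary are hypothetical, not from the article):

```python
# Hypothetical sketch: evaluate bug-count entry criteria from the lists above.
# Requirement: no outstanding Critical bugs.
# Goal: no more than 20 major and 100 minor bugs.

def meets_entry_requirements(bug_counts: dict) -> bool:
    """Requirement: no outstanding Critical bugs."""
    return bug_counts.get("critical", 0) == 0

def meets_entry_goals(bug_counts: dict) -> bool:
    """Goal: at most 20 major and 100 minor bugs."""
    return (bug_counts.get("major", 0) <= 20
            and bug_counts.get("minor", 0) <= 100)

counts = {"critical": 0, "major": 12, "minor": 87}   # example bug tally
print(meets_entry_requirements(counts))  # True: no Critical bugs
print(meets_entry_goals(counts))         # True: within major/minor limits
```

Encoding the criteria this way makes the expectations unambiguous: the check either passes or it does not, which is the point of defining criteria in the first place.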
DataFinder chose these criteria as system test entry criteria:
- All bug fixes must be code reviewed (peer walkthrough) before check-in.
- Minimum 95% regression test pass rate.
- All regression test failures must be known, with bugs defined and a plan for the fix in place.
- All performance tests must pass.
- Performance must be at
- Reliability must be 100%: all commits must complete successfully, all rollbacks must roll back successfully, and all data recovery mechanisms must succeed.
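The 95% regression pass-rate criterion is a simple ratio against a threshold. A hedged sketch (function names are illustrative, not from the article; only the 95% figure comes from DataFinder's criteria):

```python
# Illustrative sketch of DataFinder's 95% regression pass-rate criterion.

def regression_pass_rate(passed: int, total: int) -> float:
    """Fraction of regression tests that passed."""
    if total == 0:
        raise ValueError("no regression tests run")
    return passed / total

def meets_pass_rate_criterion(passed: int, total: int,
                              threshold: float = 0.95) -> bool:
    """True when the pass rate meets the 95% entry criterion."""
    return regression_pass_rate(passed, total) >= threshold

print(meets_pass_rate_criterion(970, 1000))  # True: 97% >= 95%
print(meets_pass_rate_criterion(940, 1000))  # False: 94% < 95%
```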
System test entry criteria must be developed jointly by SQA and the developers. In DataFinder's case, management suggested some of the criteria; the developers and SQA jointly agreed on all of the criteria and on how to measure them.
It is easiest on everyone, and best for the product, to define the system test entry criteria as early in the project as possible. Criteria should be defined during the product definition phase, the functional specification phase, or the preliminary design phase. Initially, no system test entry criteria were defined for DataFinder. Several times, the development team claimed the product was ready for testing, but the test effort could not proceed effectively because of numerous bugs. It became clear during development that the product would never reach system test (successfully running the regression tests) unless these criteria existed.
Once entry criteria exist, it is very clear to the technical staff what is expected of them, and they can work successfully toward those expectations. Schedule and development expectations can be met. One side effect of these entry criteria was that the team began discussing the complexity of the software within the framework of the criteria. DataFinder product developers and SQA engineers were able to think proactively about where to put the testing effort. Had entry and exit criteria not been specified for system test, management would not have been able to understand which areas of the product required more resources (people, machines, testing, etc.).
Defined system test entry criteria also give management another measurement point: how close are the technical staff to fulfilling the entry criteria? During a project there are times when technical contributors think, “If I just had three more weeks…”. However, if every time you examine the criteria you are still three weeks away, there is a systemic problem. For example, if four weeks before the start of the system test period you discover a product area with insufficient system tests, unit tests, or code reviews, you can take remedial action. When you have the metrics and track how close you are to meeting them, you can make smart choices. At one point in the project, it became clear that part of the product was never going to pass the performance criterion. Development was able to start a redesign and reimplementation in time to make the system test start date.
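The "always three weeks away" pattern becomes visible if the weeks-to-criteria estimate is recorded at each checkpoint and compared over time. A hypothetical sketch of that check (nothing here is from the article beyond the idea of a non-shrinking estimate signaling a systemic problem):

```python
# Hypothetical sketch: flag the "always three weeks away" pattern by
# checking whether successive estimates of weeks-to-criteria are shrinking.

def estimates_stalled(weekly_estimates: list) -> bool:
    """True when the latest estimate has not improved on the earliest,
    suggesting a systemic problem rather than normal progress."""
    if len(weekly_estimates) < 2:
        return False  # not enough history to judge a trend
    return weekly_estimates[-1] >= weekly_estimates[0]

print(estimates_stalled([3, 3, 3, 3]))  # True: still three weeks away
print(estimates_stalled([6, 4, 2, 1]))  # False: estimates are converging
```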
Original article can be found at: http://www.jrothman.com/Papers/QW96.html
Johanna Rothman consults, speaks, and writes on managing high-technology product development. Johanna is the author of Manage It! Your Guide to Modern, Pragmatic Project Management. She is the coauthor of the pragmatic Behind Closed Doors: Secrets of Great Management, and author of the highly acclaimed Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets and Science of Hiring Technical People. Johanna is also a host and session leader at the Amplifying Your Effectiveness (AYE) conference (http://www.ayeconference.com). You can see Johanna’s other writings at http://www.jrothman.com.