Metrics to Reduce Risk in Product Ship Decisions – Part VIII – Case Study: DataFinder – Assessment of the Metrics (#8 in the series Metrics to Reduce Risk in Product Ship Decisions)
By Johanna Rothman
Assessing the entry criteria is then a matter of gathering data and seeing whether the data match the criteria. The responsible project or technical leader gathers the data and presents it to the project leader in preparation for a system test entry readiness review. At the review, the project leader presents each entry criterion along with its data, so it is clear which criteria are met and which are not yet met. The risks become visible to everyone at the readiness review and can be discussed. The project leader and project staff can then decide whether to enter system test or to delay it.
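The readiness review described above can be sketched as a simple criteria-versus-data check. This is a minimal illustration, not DataFinder's actual process; the criterion names, field names, and thresholds are all assumptions for the sake of the example.

```python
# Sketch of an entry-criteria readiness check. Criterion names and
# thresholds below are illustrative assumptions, not DataFinder's.

ENTRY_CRITERIA = {
    "all code checked in":       lambda d: d["uncommitted_modules"] == 0,
    "smoke test passes":         lambda d: d["smoke_test_pass_rate"] >= 1.0,
    "no open showstopper bugs":  lambda d: d["open_showstoppers"] == 0,
}

def readiness_review(data):
    """Return (met, unmet) criterion names to present at the review."""
    met, unmet = [], []
    for name, check in ENTRY_CRITERIA.items():
        (met if check(data) else unmet).append(name)
    return met, unmet

# Hypothetical data gathered by the technical leader:
met, unmet = readiness_review({
    "uncommitted_modules": 0,
    "smoke_test_pass_rate": 0.92,
    "open_showstoppers": 1,
})
```

With the hypothetical data above, one criterion is met and two are not; the unmet list is exactly what the review should discuss before deciding to enter or delay system test.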
I have had experience where the entry criteria were used and where they were ignored. When both the entry and exit criteria were used, we were able to predict exactly how long system test would take and when the product would ship. When the entry criteria were used but the exit criteria were ignored, the company shipped on time, but paid for it in support costs. When neither entry nor exit criteria were used, we could not predict the ship date, and we were besieged by angry customers after shipment. The support costs were extremely high.
To return to the DataFinder case: Figure 3 shows the data for every week from the time the company thought it was initially in system test until shipment:
Figure 3: Total DataFinder bug and checking metrics
DataFinder found that the system test and bug fix/code walkthrough activities found increasingly more bugs until approximately week 18. The number of checkins tells you how many files are being perturbed by fixes, a possible indication of product stability. There is an interesting phenomenon at week 18: one or more of the bug fixes changed a large number of files, but the overall effect was to reduce the number of bugs found afterward. This data certainly does not bear out the persistent folklore that says, "It's only a one-line change, it can't be that bad."
Figure 4: Total DataFinder system test plan, run, pass metrics
The fact that the number of system tests planned increased every week until week 22 (see Figure 4) is an effect of the traditional delay in moving feature knowledge into the SQA area.
No matter what the product quality definition, these basic metrics help an organization understand the state of the software. The software should show increasing stability over time: fewer checkins, fewer new bugs found, more bugs closed, and fewer bugs open. In addition, a running total of the number of tests passed per day of the system test period will indicate product stability in the code base.
Original article can be found at: http://www.jrothman.com/Papers/QW96.html
Johanna Rothman consults, speaks, and writes on managing high-technology product development. Johanna is the author of Manage It! Your Guide to Modern, Pragmatic Project Management. She is the coauthor of the pragmatic Behind Closed Doors: Secrets of Great Management, and author of the highly acclaimed Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets and Science of Hiring Technical People. And Johanna is a host and session leader at the Amplifying Your Effectiveness (AYE) conference (http://www.ayeconference.com). You can see Johanna's other writings at http://www.jrothman.com.