Metrics to Reduce Risk in Product Ship Decisions – Part VII – Case Study: DataFinder – Criteria Summary

(#7 in the series Metrics to Reduce Risk in Product Ship Decisions)
By Johanna Rothman

Even though all of the metrics dealt with defects, DataFinder product quality was not defined by defects. The referenceable-site and performance criteria were the measurements that dealt strictly with customer satisfaction. Because this is an RDBMS-type product, the defect levels have to be low enough to ensure sufficient reliability. And the reliability criterion was separate.

Table 1 lists some possible metrics based on time to market and low defects:

Table 1: Possible metrics

Time to Market | Low Defects
Planned vs. actual dates | Defects found per unit time
Task productivity rates (how productive the engineers are, in the aggregate, for design tasks, implementation tasks, etc.) | Defects closed per unit time
Feature productivity rates (how productive the sub-project teams are at completing work on an entire feature) | Defects found per activity (all reviews, unit tests, etc.)

Date variations give you two pieces of information: whether the project is on schedule, and how accurate the original estimates were. If the estimates were not accurate, you can use this data as a starting point to determine why (a small variance calculation is sketched after the list below):

  1. Was the schedule generated using technical contributor input, or was it a complete guess to begin with?
  2. Did people assume they had 40-hour work weeks available? Almost no one actually has 40 work-hours available for project work [Abdel-Hamid91].
  3. Were people working on other projects aside from this one, reducing their available project-hours?
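
As a rough illustration of the schedule-variance data itself, here is a minimal Python sketch that computes slip from planned versus actual finish dates. The task names, dates, and record layout are hypothetical, not data from the DataFinder project.

  from datetime import date

  # Hypothetical task records: (task name, planned finish, actual finish).
  tasks = [
      ("design review",    date(1996, 3, 1),  date(1996, 3, 8)),
      ("query engine",     date(1996, 4, 15), date(1996, 5, 6)),
      ("system test pass", date(1996, 6, 1),  date(1996, 6, 20)),
  ]

  # Per-task slip shows whether the project is tracking its schedule.
  for name, planned, actual in tasks:
      slip_days = (actual - planned).days
      print(f"{name}: planned {planned}, actual {actual}, slip {slip_days} days")

  # Total slip is a crude measure of how accurate the original estimates were.
  total_slip = sum((actual - planned).days for _, planned, actual in tasks)
  print(f"total slip: {total_slip} days across {len(tasks)} tracked tasks")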

Task and feature productivity rates give you data on how productive the engineering teams are by task and by feature. Note that this data should not be gathered or assessed per individual engineer, unless you really do have teams of only one engineer (no design reviews, no code reviews, no schedule reviews; just one developer who also writes all the documentation, plans and performs all the testing, and plans and monitors the project).
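
One possible way to aggregate such rates is sketched below, assuming a hypothetical work log keyed by team and feature (deliberately never by individual engineer); the team names and engineer-day figures are invented for illustration.

  from collections import defaultdict

  # Hypothetical completed-work log: (sub-project team, feature, engineer-days spent).
  work_log = [
      ("storage team", "index rebuild",  12),
      ("storage team", "page cache",     18),
      ("query team",   "join optimizer", 25),
  ]

  days_per_team = defaultdict(int)
  features_per_team = defaultdict(int)
  for team, _feature, days in work_log:
      days_per_team[team] += days
      features_per_team[team] += 1

  # Feature productivity rate: engineer-days per completed feature, per team.
  for team in days_per_team:
      rate = days_per_team[team] / features_per_team[team]
      print(f"{team}: {rate:.1f} engineer-days per completed feature")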

Defects found and closed per unit time can give you an indication of when the testing is complete enough to stop: the defects-found curve decays to close to zero.
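
A minimal sketch of tracking this, assuming weekly counts of defects found and closed (the numbers below are invented) and a simple, non-authoritative stopping heuristic:

  # Hypothetical weekly counts of defects found and closed during system test.
  found_per_week  = [42, 35, 28, 19, 11, 6, 3, 1]
  closed_per_week = [30, 33, 30, 22, 15, 9, 5, 2]

  open_defects = 0
  for week, (found, closed) in enumerate(zip(found_per_week, closed_per_week), start=1):
      open_defects += found - closed
      print(f"week {week}: found {found}, closed {closed}, open {open_defects}")

  # Assumed heuristic: the recent find rate has decayed to a small
  # fraction of its peak, so testing may be complete enough to stop.
  recent_average = sum(found_per_week[-3:]) / 3
  if recent_average < 0.1 * max(found_per_week):
      print("find rate has decayed to near zero; testing may be complete enough")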

Defects found per activity may give you some hints about which activities find the most issues and which activities may need improvement. For example, if code reviews do not find at least 5-10 times more defects than system tests, your code review process may not be adequate.
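
A sketch of how such a comparison might look, with invented defect counts per activity; the 5x threshold below simply encodes the rule of thumb above, not a standard.

  # Hypothetical counts of defects, broken out by the activity that found them.
  defects_by_activity = {
      "design reviews": 60,
      "code reviews":   140,
      "unit tests":     90,
      "system tests":   35,
  }

  for activity, count in defects_by_activity.items():
      print(f"{activity}: {count} defects found")

  # Compare code-review yield to system-test yield against the 5x rule of thumb.
  ratio = defects_by_activity["code reviews"] / defects_by_activity["system tests"]
  print(f"code reviews found {ratio:.1f}x as many defects as system tests")
  if ratio < 5:
      print("ratio below 5x: the code review process may need improvement")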

Original article can be found at: http://www.jrothman.com/Papers/QW96.html

Johanna Rothman consults, speaks, and writes on managing high-technology product development. Johanna is the author of Manage It! Your Guide to Modern Pragmatic Project Management. She is the coauthor of the pragmatic Behind Closed Doors: Secrets of Great Management, and author of the highly acclaimed Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets and Science of Hiring Technical People. Johanna is also a host and session leader at the Amplifying Your Effectiveness (AYE) conference (http://www.ayeconference.com). You can see Johanna's other writings at http://www.jrothman.com.

PMHut Team

PMHut.com is a website dedicated to providing PM articles, detailed project management software reviews, and the latest news for the most popular web-based collaboration tools.
