Principles of Scoring Models in Project Management
By David Blumhorst

When I was running the IT-PMO at PeopleSoft, we faced an interesting dilemma. As we finished work on the JD Edwards integration, there was a ton of unmet demand for IT work from all corners of the enterprise, ranging from tweaks to the purchasing system to an all-new global training environment. We quickly realized that even our ability to analyze the demand would be swamped by the incoming flood of work.

So, we devised a scoring system. Why? There were two main reasons, both of which reflect fundamental principles for creating a scoring model.

First was the need to separate the wheat from the chaff quickly. Our primary driver was to make an initial cut from 120+ requests down to a more manageable set for in-depth analysis. So we needed a way to make quick judgment calls and find the 20-30 project requests with the most merit.

Second, we realized that any analysis that came up with a specific number (like $300K for changing the purchasing program), even with a caveat of +/- 100%, would become sticky. That is to say, if the $300K estimate was later revised to $400K – well within the +/- 100% – the executives would still want to hold us to the $300K! “I thought you said $300K two months ago – what changed?” was a familiar refrain. Scoring models, on the other hand, place estimates in ranges, so as long as you don’t exceed the top of the range, it’s all good.

Many project-driven organizations today face this same dilemma on an ongoing basis. Scoring models meet this challenge well. So, to create a scoring model that will quickly find the projects with the most merit without being nailed down to estimates too early, keep these key principles in mind:

  • Group your scoring criteria into around 3 buckets – these will be used as the axes of a bubble chart later. My favorites are benefits, cost/size, and risk. Others include impact; product development groups may add market share, technical feasibility, and margin.
  • Scoring criteria should comprise ranges. An example would be a 0-5 rating of potential revenue increase, with 0 = none, 1 = less than $1 million, 2 = $1-5 million, 3 = $5-10 million, and so on. The same goes for project cost or any other financial metric. For criteria like risk, an example would be a rating of project familiarity, with 1 = very familiar with this type of project and 5 = never done this kind of work before. Make sure all the criteria produce the same range of scores (e.g. 0-5) so you can create weighted averages for each group and a weighted-average total project score (see the sketch after this list).

  • Scoring criteria should fit the company’s strategic direction and business needs. A retailer will be concerned about increasing market share, while a SaaS company like Daptiv is concerned with customer satisfaction.

  • Bubble charts are a great tool for visualizing which projects will produce the most bang for the buck (the sketch after this list plots one). While the simplicity of a single chart is more efficient, I have seen new product development organizations use up to 6 criteria groupings spread across 2-3 bubble charts.

  • Back test the model. Take the scoring model produced and score the current slate of active projects. When I did this with a major retailer a couple of years ago, we knew we had it right when the only current projects that wouldn’t have made the cut turned out to be problem children that should never have been launched.

  • Always analyze requests in cycles. Applying a scoring model to each request as it comes in negates the comparative process. It also leads to new priorities interrupting live projects, which results in project and resource churn. We typically recommend quarterly cycles. Monthly cycles can work in an environment with larger quantities of shorter-lifespan projects. Annual cycles are generally too long, as too much work comes up in the interim. However, an annual planning process for the larger, more strategic work can be coupled with a quarterly cycle for the smaller work.

  • Scoring models work best when there is a cross-functional team empowered to make decisions. This means the team must be at a high enough level in the company not to be second-guessed by colleagues or superiors.
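
To make the weighted-average idea concrete, here is a minimal Python sketch of a scoring model along the lines described above. The group names, criteria, weights, scoring bands, and the two sample requests are all hypothetical illustrations, not the model we used at PeopleSoft – adapt them to your own strategic drivers. The sketch rolls same-range (0-5) criterion scores up into weighted group scores and a total, then plots the groups on a simple bubble chart.

# Hypothetical weighted-average scoring model with three criteria groups.
import matplotlib.pyplot as plt

# Weights for the three groups (used for the total score); each sums with the others to 1.
GROUP_WEIGHTS = {"benefits": 0.5, "cost_size": 0.3, "risk": 0.2}

# Criteria within each group and their weights (each group's weights sum to 1).
CRITERIA = {
    "benefits":  {"revenue_increase": 0.6, "customer_satisfaction": 0.4},
    "cost_size": {"project_cost": 0.7, "duration": 0.3},
    "risk":      {"familiarity": 0.5, "technical_complexity": 0.5},
}

def revenue_score(millions):
    """Convert an estimated revenue increase ($M) into a 0-5 range score."""
    bands = [(0, 0), (1, 1), (5, 2), (10, 3), (25, 4)]
    for ceiling, score in bands:
        if millions <= ceiling:
            return score
    return 5

def group_scores(raw):
    """Weighted average of the 0-5 criterion scores within each group."""
    return {
        group: sum(raw[criterion] * weight for criterion, weight in weights.items())
        for group, weights in CRITERIA.items()
    }

def total_score(groups):
    """Weighted-average total project score across the three groups."""
    return sum(groups[group] * weight for group, weight in GROUP_WEIGHTS.items())

# Two made-up project requests, already scored 0-5 on each criterion.
projects = {
    "Purchasing tweaks":    {"revenue_increase": revenue_score(0.8), "customer_satisfaction": 2,
                             "project_cost": 1, "duration": 1, "familiarity": 1, "technical_complexity": 2},
    "Global training env":  {"revenue_increase": revenue_score(7), "customer_satisfaction": 4,
                             "project_cost": 4, "duration": 4, "familiarity": 4, "technical_complexity": 3},
}

results = {name: group_scores(raw) for name, raw in projects.items()}
for name, groups in results.items():
    print(name, groups, "total:", round(total_score(groups), 2))

# Bubble chart: benefits vs. cost/size, with bubble size showing risk.
fig, ax = plt.subplots()
for name, g in results.items():
    ax.scatter(g["cost_size"], g["benefits"], s=200 * (g["risk"] + 0.5), alpha=0.5)
    ax.annotate(name, (g["cost_size"], g["benefits"]))
ax.set_xlabel("Cost/size score")
ax.set_ylabel("Benefits score")
ax.set_title("Project requests (bubble size = risk)")
plt.show()

In practice the printed totals give the quick first cut, while the chart makes the trade-offs visible to the decision-making team: high-benefit, low-cost, small-bubble projects are the obvious candidates to send on for deeper analysis.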

Once requests are reviewed and sorted using a scoring model, decisions can be made about which should proceed for further analysis. Those that make the cut then move into the more traditional project initiation process, ensuring that valuable analysis time is not wasted while allowing the focus needed to properly present the best projects for funding.

Dave Blumhorst joined Daptiv in December 2009, and his Solutions and Services team is focused on enabling successful business outcomes for customers using Daptiv’s PPM toolset. Prior to joining Daptiv, Dave ran EffectiveIT Group, a Daptiv partner and process consulting firm that delivered implementation services to Fortune 1000 clients such as Carlyle Group, Beam Global, and Aegon USA.

Dave is a seasoned executive who has run IT, professional services, and finance departments. He has served as a controller and CFO for small to mid-sized companies, served as CIO at mid-sized companies such as Clarent, and was the senior director of the IT-PMO at PeopleSoft. Throughout his 30+ year career he has always found innovative ways to use technology to create business value.
