Principles of Scoring Models

When I was running an IT-PMO at my previous company, we faced an interesting dilemma. As we finished work on a large integration project, there was a ton of unmet demand for IT work from all corners of the enterprise.

This ranged from tweaks to the purchasing system to an all-new global training environment. We quickly realized that even our capacity to analyze the demand would be swamped by the incoming flood of work.

So, we devised a scoring system. Why? There were two main reasons, both of which reflect fundamental principles for creating a scoring model.

First was the need to separate the wheat from the chaff quickly. Our primary driver was to make an initial cut from 120+ requests to something manageable for in-depth analysis, so we needed a way to make quick judgment calls and find the 20-30 project requests with the most merit.

Second, we realized that any analysis that came up with a specific number (like $300K for changing the purchasing program), even with a caveat of +/- 100%, would become sticky. That is to say, if the $300K estimate was later revised to $400K – well within the +/- 100% – the executives would still hold us to the $300K. “I thought you said $300K two months ago – what changed?” was a familiar refrain. Scoring models, on the other hand, place estimates in ranges, so as long as you don’t exceed the top of the range, you’re fine.

Many project-driven organizations today face this same dilemma on an ongoing basis. Scoring models meet this challenge well. So, to create a scoring model that will quickly find the projects with the most merit without being nailed down to estimates too early, keep these key principles in mind:

  1. Group your scoring criteria into roughly three buckets – these become the axes of a bubble chart later. My favorites are benefits, cost/size, and risk. Others include impact; product development groups may add market share, technical feasibility, and margin.
  2. Scoring criteria should comprise ranges. An example would be a 0-5 rating of potential revenue increase, with 0 = none, 1 = less than $1 million, 2 = $1-5 million, 3 = $5-10 million, and so on. The same goes for project cost or any other financial metric. For criteria like risk, an example would be a rating of project familiarity, with 1 = very familiar with this type of project and 5 = never done this kind of work before. Make sure all criteria produce the same range of scores (e.g. 0-5) so you can compute weighted averages for each group and a weighted-average total project score (a sketch of this calculation follows the list).
  3. Scoring criteria should fit the company’s strategic direction and business needs. A retailer will be concerned about increasing market share, while a SaaS company is concerned with customer satisfaction.
  4. Bubble charts are a great tool for visualizing which projects will produce the most bang for the buck (a charting sketch also follows the list). While the simplicity of a single chart is more efficient, I have seen new product development organizations use up to six criteria groupings across 2-3 bubble charts.
  5. Back-test the model. Take the scoring model you produced and score the current slate of active projects (a back-test sketch follows the list). When I did this with a major retailer a couple of years ago, we knew we had it right when the only current projects that wouldn’t have made the cut turned out to be problem children that should never have been launched.
  6. Always analyze requests in cycles. Applying a scoring model to each request as it arrives negates the comparative process; it also lets new priorities interrupt live projects, which causes project and resource churn. We typically recommend quarterly cycles. Monthly can work in environments with a large volume of short-lived projects. Annual cycles are generally too long, since too much new work comes up in the interim; however, an annual planning process for the larger, more strategic work can be coupled with a quarterly cycle for the smaller work.
  7. Scoring models work best when a cross-functional team is empowered to make the decisions. That means the team must sit high enough in the company that its calls are not second-guessed by colleagues or superiors.
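
To make points 1 and 2 concrete, here is a minimal sketch of a weighted-average scoring model in Python. The criteria names, weights, and 0-5 scores are illustrative assumptions, not a prescribed set – per point 3, a real model would use criteria fitted to your own strategy:

```python
# Minimal weighted-average scoring sketch. Criteria, groups, and weights are
# illustrative assumptions only; substitute your own per points 1-3.

# criterion name -> (group, weight within group). Every criterion is scored
# 0-5 so group averages and the total stay comparable across projects.
CRITERIA = {
    "revenue_increase":     ("benefits", 0.6),
    "customer_impact":      ("benefits", 0.4),
    "project_cost":         ("cost_size", 1.0),
    "project_familiarity":  ("risk", 0.5),
    "technical_complexity": ("risk", 0.5),
}

# Relative importance of each group in the total score (sums to 1.0).
GROUP_WEIGHTS = {"benefits": 0.5, "cost_size": 0.2, "risk": 0.3}

def group_scores(scores: dict[str, int]) -> dict[str, float]:
    """Weighted average of the 0-5 criterion scores within each group."""
    totals: dict[str, float] = {}
    weights: dict[str, float] = {}
    for name, value in scores.items():
        group, weight = CRITERIA[name]
        totals[group] = totals.get(group, 0.0) + weight * value
        weights[group] = weights.get(group, 0.0) + weight
    return {g: totals[g] / weights[g] for g in totals}

def total_score(scores: dict[str, int]) -> float:
    """Weighted-average total. Cost/size and risk are inverted (5 - score)
    so that a higher total always means more merit."""
    groups = group_scores(scores)
    adjusted = {g: (5 - v if g in ("cost_size", "risk") else v)
                for g, v in groups.items()}
    return sum(GROUP_WEIGHTS[g] * adjusted[g] for g in adjusted)

# One project request, scored 0-5 on each criterion per the ranges in point 2.
request = {
    "revenue_increase": 3,       # $5-10 million
    "customer_impact": 4,
    "project_cost": 2,
    "project_familiarity": 4,    # mostly unfamiliar territory
    "technical_complexity": 3,
}
print(group_scores(request))          # per-group weighted averages, each 0-5
print(round(total_score(request), 2)) # single comparable merit score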
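```

And a sketch of the bubble chart from point 4, plotting the benefits group score against the risk group score, with bubble size proportional to cost/size. The project names and scores are invented, and matplotlib is assumed to be available:

```python
# Bubble-chart sketch for point 4: benefits vs. risk, bubble size = cost/size.
# Project names and scores are invented; assumes matplotlib is installed.
import matplotlib.pyplot as plt

# name -> (benefits score, risk score, cost/size score), each on the 0-5 scale
projects = {
    "Purchasing tweaks":   (3.4, 1.5, 1.0),
    "Global training env": (4.5, 3.8, 4.2),
    "Reporting upgrade":   (2.1, 2.0, 2.5),
}

fig, ax = plt.subplots()
for name, (benefit, risk, cost) in projects.items():
    # Scale the marker area so differences in cost/size stay visible.
    ax.scatter(risk, benefit, s=150 * cost, alpha=0.5)
    ax.annotate(name, (risk, benefit), textcoords="offset points", xytext=(8, 8))

ax.set_xlim(0, 5)
ax.set_ylim(0, 5)
ax.set_xlabel("Risk (weighted group score, 0-5)")
ax.set_ylabel("Benefits (weighted group score, 0-5)")
ax.set_title("Project requests: bubble size = cost/size")
plt.show()
```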
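
Finally, a back-test along the lines of point 5 is just the same model run against the active portfolio. This fragment reuses CRITERIA and total_score() from the scoring sketch above; the two active projects are, again, invented:

```python
# Back-test sketch for point 5: score the active portfolio with the new model
# and flag projects that would not have made the cut. Reuses total_score()
# from the scoring sketch above; real data would come from your PMO records.

active_projects = {
    "ERP integration": {
        "revenue_increase": 4, "customer_impact": 3, "project_cost": 4,
        "project_familiarity": 2, "technical_complexity": 3,
    },
    "Legacy report rewrite": {
        "revenue_increase": 0, "customer_impact": 1, "project_cost": 3,
        "project_familiarity": 5, "technical_complexity": 4,
    },
}

def back_test(active: dict[str, dict[str, int]], keep: int) -> list[str]:
    """Rank active projects by total score; return those below the cut line."""
    ranked = sorted(active, key=lambda p: total_score(active[p]), reverse=True)
    return ranked[keep:]   # everything below the cut warrants a hard look

# With a cut line of 1, the weakest project is flagged for review:
print(back_test(active_projects, keep=1))   # ['Legacy report rewrite']
```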

Once requests are reviewed and sorted using a scoring model, decisions can be made about which should proceed for further analysis. Those that make the cut then move into the more traditional project initiation process, ensuring that valuable analysis time is not wasted while allowing the focus needed to properly present the best projects for funding.
