I’ve talked about the attractiveness of dashboards here before, but the continual movement toward dashboard displays for senior management across all types of industries means we’re not going to get away from them any time soon. If you haven’t yet seen this phenomenon in the project management industry, you won’t be able to avoid it for long. A project dashboard is an onscreen view of several key performance indicators (KPIs) that shows measures for different aspects of your business in easy-to-understand displays. There’s no restriction on how to display this information or what to include in the dashboard. The sky’s the limit!
A dashboard may be dynamic or static. A static dashboard shows the indicators somewhat like a printed report. A dynamic view might let you click on the screen to change the display by filtering or selecting certain data, or to drill down into one indicator to see the source data it was built from.
The views in a dashboard can be very creative. Rather than a simple columnar text report, you might be looking at a green/yellow/red traffic-light report, or perhaps a tachometer-style gauge showing how far “in the red” an indicator is. There might be a curve view showing costs, a graphical flag showing red for danger, or a checkered flag for something that’s complete. There can be histograms, pictures, colors, or anything else that might allow viewers to get their answers at a glance.
The data that drives a dashboard is also not restricted to project management information or even information from one application. It’s not uncommon to see some indicators from the project management system, some from the financial system, some from production software, and so on. There can even be indicators that compare the data from one system to another such as the budget cost from the project management system and actual cost from Finance.
As attractive as all that sounds (and believe me, management finds this kind of thing very attractive) it comes with a range of pitfalls. Let’s take a look at a couple of obvious ones.
The Wizard of Oz Syndrome
This is stunningly common. Management decides that they’ve got to have a dashboard right away. It’s so compelling that they create a “dashboard” project. The hapless project manager quickly realizes that creating the dashboard isn’t the problem; getting the data to drive the indicators is. Management has little tolerance for a story of how it will be months before the data is complete enough or of sufficient quality to be trusted to move a dial on a dynamic dashboard, and so our luckless project manager becomes the Wizard of Oz. He or she creates the beautiful front-facing view and then manually fills in all the elements to make the indicators move where he or she thinks they should.
The pitfall here should now be obvious. Management assumes that what they’re looking at is an objective view, built from the ground up, showing summaries and analyses of critical data. In fact, what they’re getting is a totally subjective view being typed in by a handful of employees, or sometimes just one.
Measure It All!
The challenge for some organizations is finding the “key” in key performance indicators. They decide to put everything they can measure into the dashboard. The dashboard quickly fills with information of all sorts and doesn’t leave the organization any more empowered. One of our standards for dashboard design is to ensure that every indicator on the dashboard empowers the viewer to make business decisions.
The Glass is Half Full
One of the big challenges with dashboard data is determining that all the data required has been collected. I met with a very senior CIO a few months ago who was so excited when I showed him some dashboard examples that he asked if we could create it for him as our first part of the Enterprise Project Management deployment.
"Can it be ready for Friday?” he asked.
“Sure,” I replied, rather shocked that he’d asked.
“Really?” he said.
“Yes,” I said. “It can be ready Friday. Well, not this Friday. But some Friday …some time.”
He wasn’t amused. The problem, I explained, is that while I could create the dashboard itself in a very short amount of time, the data we needed to drive such a screen was part of an extensive process that would take months to design and deploy.
“What if I could give you an indicator for resource capacity planning, but it only measured half the total projects?” I asked. “What could you do with that view?”
The answer, of course, is: nothing. What if the half of the projects that weren’t measured contained 80% of the resource requirements?
Even if we create a design that we train and deploy to everyone, we need to be sure that the dial we’re looking at can be counted on. In our office, we insist on a dashboard indicator that shows the reader the level of compliance within the view. They should be able to see at a glance whether all the data is actually being measured before they make a decision based on the indicator in question.
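The idea of a compliance indicator can be illustrated with a minimal sketch. This is not the author’s implementation; the `Project` record, the field names, and the 90% trust threshold are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    has_current_data: bool  # True if this project is feeding the dashboard

def compliance(projects):
    """Fraction of projects actually reporting into the indicator."""
    if not projects:
        return 0.0
    reporting = sum(1 for p in projects if p.has_current_data)
    return reporting / len(projects)

projects = [
    Project("A", True),
    Project("B", False),  # not reporting: drags compliance down
    Project("C", True),
    Project("D", True),
]
ratio = compliance(projects)          # 3 of 4 projects reporting
trusted = ratio >= 0.9                # gate the dial: hypothetical threshold
```

A viewer seeing a compliance readout of 75% next to the KPI dial knows immediately not to bet a business decision on that indicator alone.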
It’s not enough to know that all the projects are included in the measure of this indicator. When we’re talking about project management data, we also need to know how timely it is. What if a particular indicator is made up of some projects that were statused yesterday, some that haven’t been updated since last week, and some that have never been statused at all? Clearly the timeliness of the data significantly colors the value of that metric. When we create dashboards around here, we always insist on indicators that show how recently the data has been updated and display a warning if some of the data is significantly out of date.
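A staleness warning like the one described above can be sketched in a few lines. The seven-day threshold and the three-way fresh/stale/never classification are assumptions, not anything the original specifies.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=7)  # assumed freshness threshold

def freshness_flags(last_statused, today):
    """Classify each project by how recently it was statused.

    last_statused maps project name -> date of last status, or None
    if the project has never been statused.
    """
    flags = {}
    for name, when in last_statused.items():
        if when is None:
            flags[name] = "never"
        elif today - when > STALE_AFTER:
            flags[name] = "stale"
        else:
            flags[name] = "fresh"
    return flags

today = date(2024, 3, 15)
flags = freshness_flags({
    "A": date(2024, 3, 14),  # statused yesterday
    "B": date(2024, 3, 1),   # two weeks old
    "C": None,               # never statused
}, today)
warn = any(v != "fresh" for v in flags.values())  # drives the warning icon
```

The dashboard can then show the warning whenever any contributing project is stale or has never reported, rather than silently blending fresh and ancient data into one dial.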
What’s the Source?
We want to avoid as many subjective measures as possible so whenever we can, we thwart the Wizard of Oz syndrome by showing indicators of where the data has come from and, even better, by allowing a drill-down into the source data whenever possible.
Gaming the Process
Once you deploy a dashboard and people figure out what it’s measuring, there is bound to be someone who tries to “game” the process. They will try to show how well they’re doing by entering data into the system that generates a particular effect. (Yes, I know that’s bad.) This is human nature and, fortunately, doesn’t happen all that often. We can do a lot, though, to disincentivize such behavior by implementing checks and balances right on the dashboard. Whenever we can find data that has some correlate (such as progress on a task and hours spent on a timesheet), we try to tie them together and show warnings or indicators when the numbers don’t make sense.
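One way such a check-and-balance might look, sketched under my own assumptions: compare a task’s reported percent complete against the share of planned hours actually booked on timesheets, and flag the task when the two diverge beyond a tolerance. The 25% tolerance and the function shape are illustrative, not a prescribed method.

```python
def sanity_check(percent_complete, hours_logged, hours_planned, tolerance=0.25):
    """Flag tasks whose reported progress doesn't line up with the
    effort actually booked against them on timesheets."""
    if hours_planned <= 0:
        return "no plan"
    effort_ratio = hours_logged / hours_planned
    if abs(effort_ratio - percent_complete) > tolerance:
        return "mismatch"
    return "ok"

# A task reported 90% done with only 10 of 100 planned hours booked
# is exactly the kind of claim the dashboard should question:
flag = sanity_check(0.9, 10, 100)
```

Showing a “mismatch” marker beside the indicator doesn’t prove anyone is gaming the numbers, but it surfaces the inconsistency where both the team and management can see it.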
Dashboards can be a powerful tool, and one of the things I like best about them is that they bring our project management perspective right into the executive suite. If you’re being called upon to create a project dashboard, pause to make sure the decisions that will be made from it will be based on the right data and the right analysis.
Chris Vandersluis is the founder and president of HMS Software, based in Montreal, Canada. He has an economics degree from Montreal's McGill University and over 22 years' experience in the automation of project control systems. He is a long-standing member of both the Project Management Institute (PMI) and the American Association of Cost Engineers (AACE) and is the founder of the Montreal Chapter of the Microsoft Project Association. Mr. Vandersluis has been published in numerous publications including Fortune Magazine, Heavy Construction News, the Ivey Business Journal, PMI's PMNetwork and Computing Canada. Mr. Vandersluis has been part of the Microsoft Enterprise Project Management Partner Advisory Council since 2003. He teaches Advanced Project Management at McGill University's Executive Institute. He can be reached at chrisv@hmssoftware.