Tuesday, 15 December 2009 23:00

How Data Display Can Change Project Decisions

Written by Chris Vandersluis

Anyone who has worked with project management systems knows that the way you display data can dramatically affect the decisions people make from it. This is why we often see Gantt charts with critical activities in red. I'm reminded of one of my very first sales of project scheduling software back in the '80s. I don't dare share the name of the organization but it was a large utility. We'd made this sale a few days earlier and now I got a call for "technical assistance."

"I have a big problem. All my tasks are red," the hapless client reported.

"Oh, that's not a problem at all," I replied. "Red tasks just mean that you are looking at the critical tasks. These tasks have been marked as red because they are on the critical path."

"That's another thing," said my new client. "That word 'critical'. That's not going to work for us."

There was a moment of silence. I was speechless. (Those who've met me will know how unusual that is.)

"Perhaps the word 'priority' would be more appropriate," I hesitantly replied.

"Yes, that's excellent!" said the client. "Now, what can you do about changing these colors? I need to get rid of these red tasks."

Those of you who have grown up with critical-path methodology might laugh. "This person needs training," you might say. But I learned something very important from the interaction. My first reaction was to think we needed to get in there right away and train people to distinguish between red tasks and blue tasks, but I realized the problem wasn't his; it was mine. From his perspective, red tasks were a problem, and he'd have to take action to eliminate all the red from his report before he could give it to management. If you're familiar with the critical path algorithm, you'll see why that was going to be a problem: there are always tasks that are critical.

The challenge for the user wasn't trivial. Even if I had trained him, he knew that his management would not appreciate the distinctions of critical vs. non-critical and would see red tasks the way a bull sees a red cape - they'd want to charge at them with as much force as they could muster.

When data moves beyond highly skilled users and into the hands of people who will interact with it only occasionally, choosing how to display the data is extremely important. In this day and age, the desire of management for "real-time" dashboards and "live displays" of projects can lead to unhealthy project environments. Let's consider the following very simple dashboard:

[Figure: howdatadisplay1 - a simple status dashboard for Projects 1, 2 and 3]

The situation here seems quite straightforward. Project 1 is running very late, Project 2 is slightly late and Project 3 is on time.

Showing this display instead of a complex bar chart might be very appropriate for management. It will draw attention to the schedule of Project 1. What should be done? Most managers would now ask the project manager of Project 1 why the project is late and what can be done to get it back on time.

That's great so far. But next week, when management sees the same report, is the same action appropriate? Probably not. Just having displayed the dashboard once has changed its context. If we display the identical dashboard with identical results next week, management is likely to ask a very different question: "Have things improved since last week?" The display doesn't show this.

So, we have the same data, the same author of the data, the same display and the same reader of the data, but the reaction is quite different. That's because context matters. The manager of the project system now has a couple of choices. He or she can make a report that shows a year of icons, so management can see when a project turns red, then returns to caution yellow and, hopefully, to green. Or they can try something like this:

[Figure: howdatadisplay2 - the status dashboard with trend arrows added for Projects 1, 2 and 3]

Now both Project 1 and Project 2 are significantly behind schedule, but we've added a new indicator to the graph: the trend of the schedule delay. The eye naturally goes to the two red projects, Project 1 and Project 2, and then to the right, where we see that the trend for Project 1 seems to be improving while the trend for Project 2 seems to be getting worse. Project 3 is also now a concern: its schedule is still on time, but the trend is very much in the wrong direction. It looks as though resources have been pulled from Projects 2 and 3 to work on Project 1, which is improving at the cost of Project 3's schedule.
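The colour-plus-trend display can be sketched in a few lines of code. This Python sketch is purely illustrative: the thresholds, function names and the weekly "days late" figures are assumptions for demonstration, not anything prescribed by the article.

```python
# Illustrative sketch: deriving a traffic-light colour and a trend
# direction from weekly schedule-variance snapshots (days behind schedule).
# Thresholds (0 and 5 days) are arbitrary assumptions for this example.

def status_colour(days_late):
    """Map schedule slippage to a traffic-light colour."""
    if days_late <= 0:
        return "green"   # on time or ahead
    if days_late <= 5:
        return "yellow"  # slightly late
    return "red"         # significantly late

def trend_arrow(history):
    """Compare the two most recent snapshots to show direction."""
    if len(history) < 2:
        return "flat"
    delta = history[-1] - history[-2]
    if delta < 0:
        return "improving"
    if delta > 0:
        return "worsening"
    return "flat"

# Weekly "days late" per project, oldest snapshot first (made-up figures)
projects = {
    "Project 1": [30, 25, 20],  # red, but improving
    "Project 2": [2, 6, 10],    # red, and worsening
    "Project 3": [0, 0, 0],     # green, flat
}

for name, history in projects.items():
    print(name, status_colour(history[-1]), trend_arrow(history))
```

The point of the sketch is that the trend column carries the week-over-week context the single snapshot lacks: the same red icon reads very differently depending on the arrow beside it.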

This is all still pretty good, but you can see how quickly such displays can expand. There is nothing quite so attractive to senior management as coloured dashboard indicators, and there's a whole industry of people making indicators and formulas to drive them. In the simple example above, new indicators might be ordered up in a heartbeat. Was the improvement in Project 1 actually made at the cost of Projects 2 and 3? What was the relative return on investment of working on Project 1 instead of 2 and 3? Perhaps this red indicator caused us to move resources from our most important client to our least important internal project. How was the move of resources in response to the red "X" in Project 1 aligned with our strategic goals? What did it cost?

Before you know it we'll have a page full of symbols, curves, flashing lights and glowing buttons. There are a few other things missing from the whole display that are often overlooked. They can be summed up as timeliness and completeness.

Are we looking at all the data? Perhaps the project schedule is showing late but only half the tasks have been updated and when the data is all collected, the indicator will turn green. There's no indicator in this simple display about how complete the data is or isn't.

Is the data up to date? Perhaps Projects 2 and 3 were updated yesterday, but Project 1 was last updated 90 days ago. Should they even be displayed on the same page? The data might no longer be relevant when comparing one project to another, yet there is no indicator on the display of whether the data is homogeneous.
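Both checks, completeness and timeliness, can be computed and shown alongside the status icon. The sketch below is a minimal illustration under assumed names and thresholds (a seven-day staleness limit is my invention, not the article's); it flags the Project 1 scenario above, where only half the tasks are updated and the last update is 90 days old.

```python
# Illustrative sketch: data-quality flags to display beside a status icon.
# Function name, parameters and the 7-day staleness threshold are assumptions.

from datetime import date, timedelta

def quality_flags(tasks_updated, tasks_total, last_update, today, max_age_days=7):
    """Return a list of warnings about completeness and timeliness."""
    flags = []
    if tasks_total and tasks_updated < tasks_total:
        flags.append(f"only {tasks_updated}/{tasks_total} tasks updated")
    age = (today - last_update).days
    if age > max_age_days:
        flags.append(f"data is {age} days old")
    return flags or ["data complete and current"]

today = date(2009, 12, 15)

# Project 1: half the tasks updated, last touched 90 days ago
print(quality_flags(50, 100, today - timedelta(days=90), today))

# Project 2: fully updated yesterday
print(quality_flags(80, 80, today - timedelta(days=1), today))
```

Surfacing these flags on the dashboard itself lets a reader judge whether a red or green icon deserves to be trusted before acting on it.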

When we create display systems such as dashboards and summary reports we have to consider these things. I have a couple of basic rules about dashboards:

Less is more. Just because we can measure a thing doesn't mean we should. Imagine a page with 500 coloured indicators with 100 different shapes being used. That's obviously visually stimulating, but will it be useful? Almost certainly not. Yet, a page with one color on it (just red for example) isn't useful either. That tells you to get into action but not where.

There must be action. Every indicator should have a related action. For example, if the traffic light is red and the arrow beside it is red, then the VP must call the project manager immediately and review an action plan for getting back on track.

The indicator must have quality. We have an expectation that project data reviewed by management has already been approved in some way. Yet management often asks for real-time dashboards that show data long before it's been reviewed or approved. Showing the quality of the data (its level of approval, completeness or timeliness) right on the dashboard, or conveying it through some other process, is key to being able to count on the decisions that will be made from the data.

The indicator is made for a particular audience. Making graphic dashboards with coloured, animated, flashy graphics is fun and, in this day and age of technology, not that hard. Designing such a display so it makes an organization more effective is much tougher. So, every display we make is written with a particular audience in mind: who will read this display, and what will their context for the data be?
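The "there must be action" rule can be made concrete by pairing every indicator state with a named follow-up. The mapping below is a sketch: the states and the actions attached to them are illustrative assumptions (only the red-and-worsening action comes from the rule above), and a real organization would define its own.

```python
# Illustrative sketch: every indicator state maps to a concrete action.
# States and actions are examples only; an unlisted state means no action.

ACTIONS = {
    ("red", "worsening"): "VP calls the project manager immediately to review a recovery plan",
    ("red", "improving"): "Confirm the recovery plan is on track at the weekly review",
    ("yellow", "worsening"): "Project manager reports cause and mitigation this week",
    ("green", "worsening"): "Check whether resources were pulled from this project",
}

def action_for(colour, trend):
    """Look up the follow-up action for an indicator state."""
    return ACTIONS.get((colour, trend), "No action required")

print(action_for("red", "worsening"))
print(action_for("green", "flat"))
```

An indicator with no entry in such a table is a candidate for removal: if no state of it ever triggers an action, it is decoration, not information.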

The way you display data, and what you display, can make decisions and actions possible that were impossible in the past. By the same token, a badly designed display can inadvertently cause decision makers to make the wrong decision. So, think about what action such displays will cause as you're designing them.



Chris Vandersluis is the founder and president of HMS Software based in Montreal, Canada. He has an economics degree from Montreal's McGill University and over 22 years' experience in the automation of project control systems. He is a long-standing member of both the Project Management Institute (PMI) and the American Association of Cost Engineers (AACE) and is the founder of the Montreal Chapter of the Microsoft Project Association. Mr. Vandersluis has been published in numerous publications including Fortune Magazine, Heavy Construction News, the Ivey Business Journal, PMI's PMNetwork and Computing Canada. Mr. Vandersluis has been part of the Microsoft Enterprise Project Management Partner Advisory Council since 2003. He teaches Advanced Project Management at McGill University's Executive Institute. He can be reached at chrisv@hmssoftware.ca.


© ProjectTimes.com 2017
