Author: Phil Jacklin

Project Initiation Documentation – What’s the Point?

Have you ever written a PID (Project Initiation Document)? Did you get any value from it? Did the project?

Is the PID just a bureaucratic process, or is there any value in it? Can we do something different and get more value?

Any project initiation document, or process, will only add value if at least one of the following things is true:

  1. It strengthens the quality of thinking before the project
  2. It gets read by somebody who believes it added value to them
  3. It is used as a reference point by everyone, and accepted as ‘the standard’ for what is to be done

Let’s look at these, in reverse order.

3. The PID as a reference point

I have rarely seen project initiation documentation used as a reference point, or to hold people to account for an approach, an agreement or a deliverable. Where I have seen attempts at this (“In the PID, we said we were going to do it this way”), I have seen easy rebuttals expose the folly of the concept (“but things have changed, and you need to change your approach as a result”). Some would argue that if you keep your PID up to date as things change, it remains valid. I would argue that project teams are terrible at keeping documentation up to date; let’s stop pretending we are going to do this and find a better way. Besides, if the PID is continually updated to reflect reality, is it an initiation document at all, or just the current view on what we are going to do? Having a current, agreed view on where we are going might be useful. A PID-style document might not be the best way to achieve it.

What are the alternatives? Personally, I like meeting minutes the most. I like sitting down with the people who might be the readers of a PID and, instead, having a discussion. A meeting creates a two-way process, open to cross-examination and debate (much more so than a document). Circulating minutes of the outcome keeps a record of what was agreed, and no-one needs to read a dry, boring text. As things change, we discuss them again and circulate minutes again. Our documentation stays up to date. More importantly, the people involved know what is going on, rather than interpreting an understanding from a dry document. I find this approach eases understanding (the most important thing), is quicker (project managers like faster) and can be used to hold someone to account (“in the meeting on XYZ date, you said…”).

2. People who read the PID get value from it

For this to be true, firstly, the right people have to read the PID. That means not only the sponsor and governance functions but also project members and other stakeholders. This does not happen, and we are kidding ourselves if we think it does. Those who do read the PID read only the sections they are interested in, or perceive as relevant to them, often missing important context as a result. That is not to say that no-one reads the PID; of course, some do. But it is not read as widely as its information should be disseminated and understood.

For the PID to deliver value, its readers have to understand its contents. The project teams that produce PIDs are often not professional technical writers, and their PIDs are full of ambiguities and contradictions, and open to interpretation. In this scenario, it is impossible to be certain that the few people who read the PID get the value from it that you intended.

Again, that does not mean no-one gets the expected value from reading the PID. It is fair to assume that some percentage of the people who read the PID get the value intended from it. The problem is that on a project you need everyone on the same page. The people who have read the PID and interpreted something else, and the people who have not read it and hold incorrect assumptions, are a significant problem.

I do not think the solution is writing better PIDs. I do not want my project managers to become professional technical writers; I want them to strengthen their project management skills. Given the PID’s limited utility in adding value to its readers, maybe it’s time we stopped writing PIDs. Instead, use the first governance meeting to present the PID contents to the governance group, use a project kick-off meeting to do the same for the stakeholders, and use the first team meeting to do the same for the project team. Now everyone has the same information. Moreover, you can do this without writing the PID.

1. Strengthening the quality of thinking

If we look at the sections in a traditional PID, very few of them have content that is a foregone conclusion. That means thought (and often discussion, group ideation and compromise) is needed to produce the contents. Unless a PID is produced in isolation (which it should not be), it is very hard to argue that the process of producing a PID does not improve the quality of thinking about the project. It evidently does. Hang on! Have we just re-asserted that a PID is useful, after spending the last 10 minutes saying the opposite?

The process of producing the PID is where the thinking happens. The process and the discussions add value; the document at the end does not, so you do not need to produce it. Yes, we need to capture the salient points and disseminate the learning, but a document is not the best way to do that. Instead of writing, think visually: let’s have schematics and mind maps. Instead of hoping a document is read, let’s present its content. Let’s make more judicious use of meeting minutes as documentation. Let’s stop wasting our time writing PIDs and improve the wider understanding of the project by spending more time managing our projects.

In Summary

The main problems of the PID are that:

  1. It is not a good communication tool. Written communication should be low on the list on a project; it is open to misinterpretation and misunderstanding.
  2. It takes time to produce, and that time adds little value.
  3. It is not read. Where it is read, it is not read in a way that creates a shared understanding of what the project is going to do.
  4. Project teams are bad at documentation and do not keep it up to date.
  5. Whilst the process of writing a PID forces some critical thinking about the project, and that thinking adds value, skipping the PID does not mean we need to skip that thinking too.

Instead, maybe we should have the meetings and discussions that would create the PID content, but present rather than document, minute rather than read, and move quicker rather than spend time writing.

Can you predict if a project is going to be successful?

We all have failed projects. But what if we could predict how likely a project was to be successful? Can we?

There are certainly some factors that we would all agree are definite indicators of a project’s probability of being successful. Take two projects, identical in every way, except one has all resources utilised at 200% of capacity and the other has all resources utilised at 50% of capacity. Everything else is identical. There is universal agreement that the project with over-utilisation of resources is less likely to be successful than the project with under-utilisation of resources. In this very abstract scenario, project success has an element of predictability.

But that doesn’t mean a project with more resource availability always has a higher probability of being successful than a project with less, even if all other things are equal. For example, is a project with resources utilised at 51% of capacity more likely to be successful than a project with resources utilised at 52% of capacity? The difference is probably negligible; both projects are equally likely to be successful. But what about a project with resources utilised at 100% of capacity, compared to a project with resources utilised at 101% of capacity? The difference is the same as in the previous example (1%), but is the effect on the probability of success different?

So now we have a situation where there is a tipping point beyond which a project’s likelihood of success starts to change, and another tipping point later, after which further changes have no discernible effect (projects with 500% or 501% resource utilisation, for example, are equally likely to be successful). This would give us a success curve, as shown in figure 1.

[Figure 1: the project success curve]
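To make the shape concrete, here is a minimal sketch of such a curve in Python. The tipping points (80% and 200% utilisation) and the scores are invented purely for illustration; as argued below, we can never truly know where the real tipping points sit.

```python
def success_score(utilisation: float,
                  lower_tip: float = 0.8,
                  upper_tip: float = 2.0) -> float:
    """Illustrative probability-of-success curve for resource utilisation.

    Below lower_tip, extra slack makes little difference (flat, high).
    Between the tipping points, the score falls steadily.
    Beyond upper_tip, further over-utilisation changes nothing (flat, low).
    The tipping points and scores are placeholders, not measured values.
    """
    if utilisation <= lower_tip:
        return 1.0
    if utilisation >= upper_tip:
        return 0.1
    # Linear fall between the two tipping points.
    fraction = (utilisation - lower_tip) / (upper_tip - lower_tip)
    return 1.0 - 0.9 * fraction


# 51% vs 52% and 500% vs 501% score the same; 100% vs 101% do not.
for u in (0.51, 0.52, 1.00, 1.01, 5.00, 5.01):
    print(f"{u:.0%} utilisation -> score {success_score(u):.3f}")
```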

This leads to the next logical question: what are the values of the tipping points? Of course, we can never truly find the answer to that question. You can’t set up identical projects with different values of resource availability, keep everything else equal, and then run the projects to completion to see which ones succeed and which ones fail. Maybe that means project success is not predictable? Or is there another way?[1]

Just as we developed the argument around resource utilisation and showed how it could affect project success rates, there are other variables for which we can develop a similar argument. Keep everything else equal and only alter the amount of budget contingency; keep everything else equal and only alter the amount of slack on the critical path; keep everything else equal and only alter the amount of scope creep. All these scenarios develop along the same lines as the resource availability example. Whilst we can’t provide absolute measurements and can’t define our tipping points, we can at least develop a theoretical model, a probability-of-success curve, for how the probability of success alters with different values.

Before we come on to how we can use this, we need to think about the effect of combinatorial factors. So far, in all our examples, we have only changed one factor and kept everything else equal to derive our success curves. In our real projects, there are thousands of moving parts and thousands of factors that we might want to take account of. These factors change values at the same time. What effect does that have? Does it have any effect?

If we have a project with 95% resource utilisation and -30% budget contingency, is that more, or less, likely to be successful than a project with 95% resource utilisation and 30% scope creep? Are scope creep and resource utilisation a deadly duo, with an accelerator effect when seen in combination that makes projects even less likely to be successful? And how can we measure and validate this?

There is no doubt that combinatorial factors make the whole analysis of project success a good deal more complicated. Measurement and validation of any model, very difficult to start with, now become almost impossible, and our hopes of finding a model to predict project success are fading. But there are some assumptions and techniques we can use to give us a glimmer of hope.

If we were to build such a model that predicted project success, what would we use it for? It turns out that an answer to this question could help us build a useful model for at least one scenario. A model that ranks projects’ likelihood of success relative to each other would help us understand which projects, across our whole portfolio, are least likely to be successful. Those are the projects we might review, change, or keep a careful eye on as they progress. In this scenario, an absolute ‘score’, a ‘percentage probability of success’, doesn’t matter. What matters is a comparative score: we are only interested in the projects that score low compared to the others.

Our work is simplified considerably with a comparative model. The position of our tipping points no longer needs to be exact, because the comparative differences still apply wherever the tipping points are set.[2] Nor does the probability-of-success ‘score’ at different points along our success curve matter any more.

As we are only building a comparative model, it’s the difference between the scores of different projects that matters, not the absolute scores. So if a project has 100% resource utilisation, it doesn’t matter what ‘success score’ is given to this point; what matters is how that score compares with the scores of other projects.

There is still complexity in combining factors, which absolutely needs to be done in any model of worth; no one would argue that project success depends entirely on one single factor. Since the ‘multiplier’ effect of different combinations cannot be safely evaluated (you can’t prove that factor A, in combination with factor B, is more likely to lead to a failed project), the simplest thing to do is to combine factors in the least aggressive way (additive, not multiplicative) and to combine all factors in a consistent way. The model will not be perfect, but it will still be valid as a comparative tool, comparing project A’s chance of success against project B’s.
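As a sketch of what this additive, comparative model might look like, here is a short Python example. The factor curves, thresholds and example projects are all invented for illustration; only the additive combination and the comparative ranking come from the argument above.

```python
def clamp01(x: float) -> float:
    """Clamp a raw score into the 0..1 range."""
    return max(0.0, min(1.0, x))


# Each curve maps a raw factor value to a 0..1 score.
# The shapes and thresholds are placeholders, not calibrated values.
FACTORS = {
    "resource_utilisation": lambda v: clamp01((2.0 - v) / 1.2),  # tipping points at ~80% and 200%
    "budget_contingency":   lambda v: clamp01(v / 0.2),          # full marks at 20%+ contingency
    "scope_creep":          lambda v: clamp01(1.0 - v / 0.5),    # zero marks at 50%+ creep
}


def comparative_score(project: dict) -> float:
    """Combine per-factor scores additively (not multiplicatively),
    so no unprovable interaction effects are baked in."""
    return sum(curve(project[name]) for name, curve in FACTORS.items())


# A hypothetical portfolio, echoing the examples in the text.
portfolio = {
    "Project A": {"resource_utilisation": 0.95, "budget_contingency": -0.30, "scope_creep": 0.00},
    "Project B": {"resource_utilisation": 0.95, "budget_contingency": 0.10, "scope_creep": 0.30},
    "Project C": {"resource_utilisation": 0.60, "budget_contingency": 0.25, "scope_creep": 0.05},
}

# Only the ordering matters: the lowest-scoring projects are the ones to review.
for name, project in sorted(portfolio.items(), key=lambda kv: comparative_score(kv[1])):
    print(f"{name}: {comparative_score(project):.2f}")
```

Because the scores are combined additively and consistently across all projects, the output is only meaningful as an ordering; the absolute numbers carry no claim about real probabilities.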

So, what do we end up with? We have a very simplified model that lets us compare a group of projects against each other, showing which ones are more likely to be successful and which ones are less likely. It’s not perfect, and there’s still work to be done to decide which factors to include (thousands is not practical, but do we need hundreds for a good working model, or are tens of factors enough?). With enough data to analyse, though, that problem can be solved. There are also assumptions and simplifications we have had to accept to get to any model at all. Despite the limitations, the model is something we can use in our evaluation of projects: another tool to help us deliver successful projects.

Any model of project success becomes even more useful when we factor human interference and irrationality into it, since that is the environment a real project must be delivered into, but that’s a blog post for another day.

What would you do if you knew your project had a 35% probability of being successful?

Footnotes
[1] There is a separate argument that it might not matter. The rate of change in the probability of success on either side of the tipping point is so small that, if you were to build a model and set the tipping point in the ‘wrong’ place, the effect would be negligible anyway. This argument degrades once you set the tipping point much further from where it ‘should’ be, but it does give you an ‘accuracy range’ within which you can place the tipping point and the model can still be valid.
[2] This is not quite true: there is a range of places within which our tipping point can be validly set without degrading the results of the model too badly. But that range is wide enough for us to have a reasonable degree of confidence in the model.