Jokes apart, the general impression, backed by research numbers from leading organizations like Gartner and Forrester, is that 70-80% of software projects fail. We on the delivery side are lampooned so often that we start disbelieving ourselves. Project successes seem as alien as finding life on Mars. Is it true that software projects fail that often?
What is failure? A project is considered a failure when any of the following four occurs:
- the required functionality is not met,
- there is a time overrun,
- there is a cost overrun,
- a combination of any of the above three.
There are projects that are scrapped entirely for political and other reasons, but I am keeping them out of this post. It would be interesting to see a breakdown of the 70-80% failure rate across the four factors listed above, to provide a little more clarity on where most projects are failing.
Required Functionality Not Met
The onus for this rests squarely with the project team, and more so with the Business Analyst. However, I would like to add some context to this statement. In a traditional waterfall model the requirements are gathered first, analyzed, and then make their way through the SDLC. In large projects the time gap between gathering requirements and UAT is significant, a couple of years in some cases. In a rapidly changing world this gap matters, because the requirements may have changed through no fault of the BA or the project team. Perhaps the business environment shifts quickly (the 2008 financial crisis drastically altered the way banks view liquidity), or new legislation is introduced (think Dodd-Frank or Basel) that calls for a significant change in the way business is conducted. All of these are beyond the control of the project team, though sufficient hints of impending changes will be available (and will be spotted by a keen BA).

If nothing has changed and the functionality is still not met, the problem may have stemmed from any of many points along the SDLC: poorly written requirements, inadequate analysis, inappropriate design assumptions, no technical walkthroughs, a weak technical team, no unit testing, no SIT or poor-quality SIT, or simply insufficient time to do any of these properly.
Time and/or Cost Overrun
The reason I combine the two is that, in most cases, they go hand in hand. (In some cases, such as when a project is put on hold and people are temporarily moved to other projects, there may be a time overrun without a cost overrun.) For effective measurement, there must be a benchmark. When we say a project overran its time or budget, the implicit assumption is that the time and budget estimates were accurate in the first place. How often does that happen?
Generally, the timeline is predetermined, either by the business, the project manager, or someone higher up. “The project go-live date is 30th June.” That’s it! Work backwards and figure out how to fit the SDLC within that time frame. After scratching around, we figure the requirements and analysis are due in five days! The time estimates are inaccurate, grossly underestimated, and fundamentally wrong. Come June 30th, the project is checked for completion and given an ‘F’ grade. How fair is that?
The same is true of the budget. Estimating time and cost is an imperfect science. Many methods have been around, ranging from the least complex (pick a number out of thin air) to the very complex (function point analysis), with a bunch of others of varying complexity in between. Barring a few, in most of the projects I have been part of, the budget was determined by someone detached from reality who had not heard of any of these estimating mechanisms. In some instances the numbers were then pruned down by the budget department. Now, what does the budget department know about the system? Zilch. Are the high-end methods reliable enough to produce an accurate estimate? No. But at least they give us some basis for the numbers.
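To give a feel for what even the complex end of that spectrum boils down to, here is a minimal sketch of an unadjusted function point count using the standard IFPUG average-complexity weights. The component counts are purely illustrative, and a real count would classify each component as low, average, or high complexity rather than assume averages throughout:

```python
# Minimal sketch of an unadjusted function point (UFP) count.
# Weights are the standard IFPUG average-complexity weights;
# the component counts below are made up for illustration.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each component count times its average weight."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical component counts for a small system
counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 6,
    "external_interface_files": 2,
}
print(unadjusted_function_points(counts))  # 48 + 40 + 20 + 60 + 14 = 182
```

The resulting number is then typically multiplied by a historical productivity rate (hours per function point) to get an effort estimate, which is exactly the kind of input-hungry step that makes these methods hard to run before requirements exist.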
Why don’t more folks use these methods? Lack of time is the common answer. There is another reason too: lack of information to feed into them. It is very interesting to note the stage at which the cost is estimated. In almost all situations the cost is estimated before the requirements process even starts! The reason is simple: “we need to create a project charter, for which we need an estimate. Give us a number.” Surprisingly, these guesstimates become estimates and finally serve as the benchmark against which the final results are compared.
So we begin with inaccurate time and cost estimates. Is it fair to compare the project’s actual time and cost against these inaccurate numbers? No, but this is precisely the conundrum we are in.
The Endless Cycle of Project Failure
Here is some food for thought to break the cycle:
- Elicit requirements fully and analyze them. These costs cannot be capitalized anyway; they become sunk costs if the project doesn’t happen, but at least they provide greater insight.
- Estimate time and cost based on those detailed requirements.
- Use a reasonable estimation method to arrive at time and cost.
- Compare actual project time and actual costs against these estimates.
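The estimation step above can be sketched with a simple three-point (PERT) estimate, one of the lighter-weight methods on the spectrum mentioned earlier. The task names and hour figures below are purely illustrative; the formula itself, E = (O + 4M + P) / 6, is the standard beta-distribution weighted average of optimistic, most-likely, and pessimistic estimates:

```python
# Minimal sketch of a three-point (PERT) effort estimate.
# For each task: (optimistic, most_likely, pessimistic) hours.
# Expected effort per task is E = (O + 4M + P) / 6.
tasks = {
    # task: (O, M, P) -- illustrative numbers only
    "requirements": (40, 60, 120),
    "design": (30, 50, 100),
    "build": (100, 160, 300),
    "testing": (60, 90, 180),
}

def pert_estimate(o: float, m: float, p: float) -> float:
    """Beta-distribution weighted average of the three points."""
    return (o + 4 * m + p) / 6

total = sum(pert_estimate(*points) for points in tasks.values())
print(f"Expected effort: {total:.1f} hours")  # Expected effort: 395.0 hours
```

Even a rough method like this forces the estimator to name tasks and confront uncertainty ranges, which makes the later comparison of actuals against estimates far more honest than comparing against a number picked out of thin air.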
Let’s bust up those project failure reasons to deliver project success!