


Easy as Implementing a Package

Last weekend I had a conversation with an uncle who recently retired from his accounting job at a large university. His family was financially secure, the children were grown (with his first grandchild on the way), and he was healthy after going through a medical scare years ago. It was time to call it quits, restore the antique motorcycle his wife had given him for Father’s Day last year, and get ready to bounce his new granddaughter on his lap.

But before it was official, his employer asked him to reconsider and stay for one more project: deployment of an enterprise resource planning (ERP) application across all the colleges of the university. “There was no way I was ever going to stick around for that,” he told me.

Most of us don’t have the luxury of tipping our hats and bidding adieu the way my Uncle Larry did in the face of a life-changing project presented by the boss. And life-changing it will be for a lot of people. What started out as a $100 million project has ballooned to $250 million. As a friend once said to me, “Now we’re talking about a SERIOUSLY BIG man-sized pile of money!”

It’s no wonder that my Cutter colleague Steve Andriole said that not one of the CIOs and CTOs he had spoken to in the last few months would install their enterprise applications again if they had a chance to do it over. In his recent Cutter Trends article, “Sourcing Today and Tomorrow,” he said it took many of them years to get the software to work, with some implementations costing hundreds of millions of dollars (man-sized piles). A few had even gotten fired when they exceeded their budgets and schedules. Do you really wonder why some folks head for the hills when the boss utters the words “Oracle” or “SAP” implementation?

I don’t think it has to be that way. One of my clients – a large financial services company – has a solid benchmarking initiative in place that showed how nearly 100 of their projects performed in terms of cost, schedule, and quality across small to very large IT projects. Among them was a group of package implementations, all plotted against industry trend lines. On the graphs, we color coded the ERP projects as blue squares and plotted the rest as green circles to distinguish the ERP batch from the overall sample.

The good news: in almost all dimensions, they behaved like the other “traditional” projects! The bad news: some had overruns and slippages just like the rest, and the piles of money for some of the overruns were pretty big. But not all behaved that way. Some were quite successful, showed high levels of productivity, hit their scope targets, met their deadlines, and finished within budget.

What does that tell us? That ERP projects can succeed or fail just like any large-scale IT project, and in my view it is within our ability to influence their outcome. Enterprise application projects are not going to go away anytime soon. Andriole thinks CIOs will stop doing them, choosing to rent rather than buy and install (going the ASP 2.0 route) and shifting to software as a service (SaaS), but I think that will take a long time. Meanwhile, big organizations – multi-billion dollar corporations – still need to run their businesses. Their legacy systems will either take ongoing care and feeding, or CIOs will make the shift to companies like Oracle or SAP to keep the ship moving. Like it or not, companies will have to get better at managing this kind of work.

So here’s some more good news – I believe that it’s possible to better understand, manage, and predict how these projects behave, and not suffer from year+ delays, cost overruns, and poor reliability. Having worked with dozens of clients doing package implementation and deployment projects, including the major enterprise application vendors, here’s what we know about these kinds of projects:

  • Productivity on ERP projects is very similar to traditional IT (development) projects. Their schedules, effort/cost profiles, and defects position similarly against other software project trends.
  • Three distinct classes of complexity seem to drive their behavior – upgrades (lowest complexity), standard implementations (medium complexity), and global deployments (highest complexity).
  • It’s possible to “size” this work by estimating and counting elements like business processes (the number of major and detailed processes) and the custom artifacts needed to implement them (i.e., reports/tables, interfaces, conversions, enhancements, and forms); a small sketch of such a count appears after this list.
  • The effort proportions for phases like “Business Blueprint” relative to “Realization and Preparation” are highly similar to the proportions seen for the “Requirements/Design” and “Construction/Test” phases on traditional software projects.
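To make that sizing idea concrete, here is a minimal Python sketch. The artifact categories follow the bullet above, but the weights and counts are purely illustrative assumptions, not calibrated benchmarks.

    # A minimal relative-sizing sketch. The weights are illustrative
    # assumptions, not calibrated industry values.
    ILLUSTRATIVE_WEIGHTS = {
        "major_process": 10,   # hypothetical relative weight of a major business process
        "detailed_process": 3,
        "report": 4,
        "interface": 6,
        "conversion": 5,
        "enhancement": 5,
        "form": 3,
    }

    def erp_size(counts, weights=ILLUSTRATIVE_WEIGHTS):
        """Collapse artifact counts into a single relative 'size' number."""
        return sum(weights[kind] * qty for kind, qty in counts.items())

    proposed = {"major_process": 12, "detailed_process": 80, "report": 40,
                "interface": 15, "conversion": 10, "enhancement": 25, "form": 20}
    print("Relative size:", erp_size(proposed))

The particular numbers matter less than having a repeatable count: once every deployment is sized the same way, the results can be plotted against history just as function points or lines of code are for traditional projects.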

So here’s the tricky part. Traditional projects have long-suffered from cost overruns, schedule slippages, and cuts in scope. So tell me again why it’s good news that ERP projects behave similarly?

It means that companies that have improved their ability to measure and estimate their projects can apply the same skills to better forecast enterprise projects. It also means that if you collect some historical data on non-enterprise projects, there is a good chance you can leverage these existing patterns to sanity check your deadlines, budgeted effort, and scope targets against that history. Even better, you can run one of the commercially available software project estimation models to more accurately forecast time, effort, and achievable scope on these deployments. That way, you can get realistic about what you can implement within a given deadline in the first place, and reduce the risk of an embarrassing and potentially job-threatening overrun.
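As a rough illustration of that sanity check, here is a small Python sketch. The productivity index, the historical numbers, and the one-standard-deviation threshold are all invented for the example; in practice they would come from your own benchmark data or from a commercial estimation model.

    # A minimal sanity check against historical productivity.
    # "Size" can be any consistent unit (function points, the relative ERP
    # size sketched earlier, etc.); the history below is made up.
    from statistics import mean, stdev

    history = [  # (size, effort in person-months, duration in months)
        (800, 210, 14), (450, 95, 10), (1200, 380, 18), (600, 150, 12),
    ]

    def productivity(size, effort):
        """Simple productivity index: size delivered per person-month."""
        return size / effort

    hist_prod = [productivity(s, e) for s, e, _ in history]
    high_end = mean(hist_prod) + stdev(hist_prod)

    # Proposed deployment: does the plan assume productivity never achieved before?
    plan_size, plan_effort = 1500, 300
    plan_prod = productivity(plan_size, plan_effort)
    if plan_prod > high_end:
        print(f"Plan assumes {plan_prod:.2f} size/pm; history tops out near {high_end:.2f}.")
    else:
        print(f"Implied productivity ({plan_prod:.2f}) is within the historical range.")

A check like this will not tell you what the schedule should be, but it will flag a plan that quietly assumes a level of productivity the organization has never demonstrated.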

This is part one of a two-part article by Michael Mah. Part two will appear in the August issue of Project Times.



Michael Mah is a Senior Consultant with Cutter Consortium’s Business Technology Trends & Impacts, Measurement and Benchmarking, Agile Project Management, and Sourcing & Vendor Relationships Practices. He is owner/partner at QSM Associates Inc. Mr. Mah is a recognized expert on practical applications of software metrics, project estimation/control, and IT productivity benchmarking. Over the past 10 years, he has published numerous articles on these and other management topics. His recent work merges concepts in software measurement and benchmarking with negotiation and dispute resolution techniques for IT outsourcing and relationship management. Mr. Mah’s particular interest is in people dynamics, such as the complex interactions between people, groups, divisions, and partnered companies working on the technology revolution at “Internet speed.” He is also focused on the latest research and theory on negotiation, including the use of game theory, role playing, and training to increase corporate and personal effectiveness. Mr. Mah is a frequent speaker at major trade conferences, including the Cutter Consortium Summit series, Better Software Conference, the Software Engineering Process Group, Software Best Practices Conference, the Technology Partners International Outsourcing Conferences, the Sourcing Interests Group, and others. Mr. Mah has a degree in engineering from Tufts University. His training in dispute resolution, mediation, and participatory processes is from the Program on Negotiation at Harvard Law School and the Radcliffe Institute for Advanced Study. He can be reached at [email protected].

PMO 2.0: Expanding the Value and Reach of the PMO

The world of project management has changed, limiting the effectiveness of traditional project management techniques in today’s fast-moving, knowledge-based, technology-oriented service organizations. The PMO must go beyond project or even portfolio management to include financial and organizational capacity management.

I’d like to introduce the concept of the Management Integration Center as a means of establishing a center of excellence to develop the expertise and capabilities to run technology services like a business.

Rescuing Project Management

The application of traditional project management techniques in today’s knowledge worker organizations has yielded little success. Billions of dollars are lost each year due to poor planning, and less than 30 percent of all projects are successfully completed. To reverse this unsettling trend, PMOs must reach beyond project and portfolio management to balance demand for resource and financial capacities across the enterprise.

Significant operational efficiency increases of 20 percent or more can be expected when organizations implement a center of excellence to focus on business functions and better align work with money and resources across the organization.

Introducing the Management Integration Center

Typically, organizational processes are operationally structured into three distinct levels: strategy, management and development. The strategy level includes executive processes such as business planning, governance and other high-level decision-making functions.

The management level provides the various linking functions between strategy and development to convert concept into products and services.

The development level is where specialized technology processes are defined and employed. This can include computer technology, R&D, manufacturing or construction processes.

Functions traditionally provided by a PMO reside as a part of the management level. For example, the PMO does not make strategic investment decisions, nor does it dictate the engineering approach. Rather, a good PMO facilitates the investment decision-making process and ensures the engineering approach applied to a project will result in the expected deliverables within defined constraints.

The term Management Integration Center (MIC) has been used to differentiate its expanded role and functions in the organization from those of a typical PMO. The MIC encompasses a wide variety of management and control functions, including support for all organizational work and resources as well as the consolidation of other business management processes.

A MIC is needed for:

  • Collaboration between various groups within the organization
  • Defining a common roadmap to foster consistency of process, terminology, roles and responsibilities
  • Gathering, analyzing and disseminating business information
  • Administering enabling technology and applications
  • Developing and deploying staff with appropriate business skills.

A well-established, full-service MIC provides vital services to executives, internal departments, and customers alike by integrating business functions and information within a dedicated organization.

Since the MIC improves processes applied throughout the organization, it amplifies efficiency gains through increased performance of the entire enterprise.

The MIC as an Internal Services and Consultancy Provider

One of the overt drivers for developing an MIC is to first establish a mechanism for consolidating or creating the infrastructure and core competencies around business management. This may take the form of actually providing the service itself, such as an integrated business management application, or a dedicated planner/scheduler staff deployed to assist technical managers.

In some cases, it may manifest itself through educating the organization to raise general business awareness and process maturity. Regardless, the MIC must be positioned as a service provider rather than a domineering power center.

Of the many roles the MIC plays, one of the most important is as a liaison for the relationship between the product and service consumers and the provider organization.

Facilitating Business Process Superhighways

Processes form the framework for the flow of information, decisions, work assignments, and practically everything that is done in the context of executing business today. The effectiveness and efficiency of these process ‘highways’ dictate the amount of traffic they can handle, while governance establishes the ‘rules of the road.’

By virtue of being the facilitator of the governance model and owner of the business processes, the MIC is a significant player in defining operational controls and organizational efficiency.

As the owner and administrator of the supporting technology, the MIC also develops control functions related to the automation of processes, such as defining the appropriate lifecycle and workflow models.

Every Ship Needs a Navigator

If we apply the analogy that any given business is much like a ship on a voyage, perhaps the most beneficial role of the MIC is to help it along its journey by ‘reading the maps’ and knowing where the ship is at any given time. While the destination is always defined by senior executives, a large component of a successful journey is ensuring the course is defined, communicated and followed.

This involves developing the analytical skills and intimate knowledge of the information within supporting business systems to transform it into usable business intelligence. Such a service is invaluable for identifying and leveraging information about past, current and future conditions.

Significant savings can be realized from this function alone. For example, if five percent of a $20 million budget is identified as being allocated to low-priority work, an organization can then spend an additional $1 million on important projects.

Business Performance Management

Information collection and analysis is a key service provided by the MIC as the bridge between tracking and controlling functions. The MIC is uniquely positioned to have the proper combination of organizational perspective, skills, access and capability to render data into useful information across departmental lines.

Business information is then further distilled into that which requires action vs. continued monitoring. Performance tracking is commonly applied to individual work items and resources, such as scheduled performance to baseline, earned value, cost versus budget or resource utilization.
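For readers who want to see what one of those tracking measures looks like in practice, here is a minimal earned-value sketch using the standard textbook formulas; the work-package numbers are invented.

    # Minimal earned-value calculation with invented numbers.
    # PV = planned value to date, EV = budgeted cost of work actually
    # performed, AC = actual cost to date.
    budget_at_completion = 500_000        # total budget for the work package
    planned_pct_complete = 0.40           # where the plan says we should be
    actual_pct_complete = 0.30            # where we actually are
    actual_cost = 180_000

    pv = budget_at_completion * planned_pct_complete   # 200,000
    ev = budget_at_completion * actual_pct_complete    # 150,000
    ac = actual_cost

    spi = ev / pv   # schedule performance index: < 1.0 means behind schedule
    cpi = ev / ac   # cost performance index:     < 1.0 means over cost

    print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")         # SPI = 0.75, CPI = 0.83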

However, particularly for the MIC, other important tracking functions include monitoring performance at a macro level, such as portfolio performance, average effort for a particular type or class of work, or process performance.

A key capability for the MIC is to have an integrated business application environment so that data can roll up and accumulate from the lowest level of planning detail to the upper levels of the organization and its portfolios.
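As a simple illustration of that roll-up, here is a short Python sketch that accumulates effort from tasks through projects to portfolios; the hierarchy and the hours are hypothetical.

    # Roll effort up from tasks to projects to portfolios.
    from collections import defaultdict

    # (portfolio, project, task, effort in hours) -- hypothetical plan rows
    plan_rows = [
        ("Infrastructure", "Network refresh", "Site survey", 120),
        ("Infrastructure", "Network refresh", "Cutover", 340),
        ("Applications", "ERP upgrade", "Blueprint", 800),
        ("Applications", "ERP upgrade", "Realization", 2400),
        ("Applications", "CRM rollout", "Configuration", 600),
    ]

    project_totals = defaultdict(float)
    portfolio_totals = defaultdict(float)
    for portfolio, project, _task, hours in plan_rows:
        project_totals[(portfolio, project)] += hours
        portfolio_totals[portfolio] += hours

    for (portfolio, project), hours in sorted(project_totals.items()):
        print(f"{portfolio} / {project}: {hours:,.0f} h")
    for portfolio, hours in sorted(portfolio_totals.items()):
        print(f"{portfolio} total: {hours:,.0f} h")

The real value comes when the same roll-up runs across every department’s data rather than inside one silo, which is exactly the cross-departmental view described below.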

Project, work and resource managers each have responsibilities for analyzing the information within their respective areas. And departments may also perform analysis from their perspective. But they all tend to concentrate their analysis efforts vertically. Only the MIC is uniquely positioned and equipped to perform further analysis across departmental boundaries to gain a comprehensive view of overall performance.

The Role of the MIC in Analytics and Reporting

The MIC should help define and produce the top 15-30 Key Performance Indicators (KPIs) that are used to manage at the executive level as part of routine governance and strategy meetings. KPIs refer to the set of core metrics that are an organization’s primary measures of health and performance. If you find you need more, consider rotating KPI reviews from one major grouping to another at each meeting, rather than trying to cover them all at one time.

The MIC should facilitate executive reviews by providing the KPI package to members ahead of time, flagging those in need of discussion, and keeping track of actions decided upon or items that need to be reassessed at the next meeting.

As a communication tool, appropriate KPIs should get exposure beyond the governance board. Every employee should have the opportunity to see and understand organizational performance, so they can internalize their personal stake in those they influence, share in the success of performance gains, and better understand when and why improvements or changes are made.

Consolidating Business Expertise

Technology-based organizations inherently understand the value of developing depth of expertise and specialization when it comes to development-level functions. However, these same organizations have been slow to recognize that the same need for specialization exists in the business management functions. A sign of organizational maturity is the recognition that business functions, such as those discussed here, are as critical to being seen as a valued service provider as the technology products and services themselves.

The MIC represents corporate recognition of these business capabilities as a mission-critical function, and it becomes the focal point for building and applying essential business skills. In addition to improving the business functions themselves, this consolidation actually lowers the total cost of execution.

Assigning the Right Resource to the Right Job

Often, for lack of other options, expectations for accomplishing business functions are placed upon managers and department heads. Such functions are likely not their area of training and expertise, nor their top concern or priority. If processes, tools and techniques are formally established for these functions, they are often a product of department silos, making it difficult to consolidate results. If a common business application is available to accomplish the function, they probably aren’t using it often enough to become proficient in its use.

The net result is that if these functions are being routinely accomplished at all, they are being done by expensive resources that are better applied in other ways. And they are probably not being done as consistently or efficiently as they could be.

By comparison, the MIC can facilitate administration of these business processes using a trained and dedicated staff at considerably lower cost. This staff still works closely with managers to execute decision-making, provide reports and help them manage their work and resources better, without the administrative burden.



Terry Doerscher has over 24 years of practical process development, project management, PMO, business strategy, and work and resource management experience in the construction, nuclear, and IT fields. Mr. Doerscher is currently the Chief Solution Architect for Planview, responsible for developing Planview PRISMS™ Adaptive IT Management Best Practices, and coordinating its integration with Planview Enterprise software functionality. Prior to that, he was a business consultant and Director of Professional Services for Planview, managing the implementation of Planview for over 25 customers and supporting dozens more.

Commoditizing Project Management for the Mid-market

Over the last five years I’ve spent a fair amount of time working with Microsoft on deployments of its Project Server system. Microsoft refers to its entire solution as the Microsoft EPM (Enterprise Project Management) Solution, as it encompasses much more than just Project Server. To consider the total solution, we think of a “stack” of technology. There is Microsoft Windows Server 2003 to start with. A part of Windows Server that’s critical to this kind of deployment is Internet Information Services, the web server that delivers all the web content. Along with Windows Server is Windows SharePoint Services, which provides collaboration and web portal functionality. We often do authentication with Active Directory, so that’s part of the solution too. There’s also SQL Server, where we’ll house the database. Microsoft Office Project Professional, Project Server, and the Project Web Access interface are the more commonly expected pieces of software. Finally, some elements of the functionality might require Microsoft Office, the Microsoft Office System, SQL Reporting Services or SQL OLAP Services.

Quite a mouthful, isn’t it?

There’s no doubt that the end result is a powerful one, and no doubt that the solution has been well received. Even among Microsoft’s detractors, there is widespread opinion that Microsoft is a force to be reckoned with in the EPM space. The initial targets for this new enterprise functionality were, to no one’s surprise, Microsoft’s enterprise accounts. Microsoft doesn’t publish figures on how many such accounts exist, but it is no secret that these accounts typically number in the thousands of PCs. In these kinds of companies, numerous resources are applied to an EPM deployment. The IT department has network administrators, installers and technical support personnel. There are database administrators and programmers, and so on.

I bring up this whole topic because what Microsoft is about to confront is surely a trend to be considered by everyone who creates systems for enterprise project and portfolio management. Over the coming years, Microsoft must start to look beyond its enterprise accounts and craft a solution, and a sales and deployment message, that is as attractive to the mid-market as the enterprise offering was to the enterprise market.

In our business we get calls almost every day from mid-market-sized firms. “How long will it take us to implement Project Server?” they ask. The question is worded in different ways, but it becomes clear quite quickly that an answer denominated in months isn’t going to find any traction. I can’t tell you how many times we’ve gotten a call on this subject that sounds like, “Can you get the whole job done by Friday?”

With enterprise-level clients, there is almost always an understanding that the deployment of such systems must be managed as a change management project. It’s the culture change, not the technology, that is the big challenge. This is no surprise in a large firm. We’re talking about managing a major aspect of the business in a different way, and this may well have a ripple effect through the organization.

There are aspects of this that are true at the mid-market level also, but it’s a truism to say that the smaller the organization, the more maneuverable it is. So when we explain how challenging changing behaviour may be, this is often met with more resistance at the mid-market or small-market level.

If you were Microsoft, or another project management software vendor, you’d have to think about how to tackle this market. The same sales model that worked for the enterprise isn’t going to fly here. What will be required is what people always expected from Microsoft: instant results.

This leads to what I believe will be a major trend in project management tools and their manufacturers over the next five years: the commoditizing of EPM software. Publishers must ask themselves, “How can we provide a solution that enables the correct process, is a minimal drain on management to design and configure, and is priced so that mid-market companies can afford the total cost of ownership?”

So, how do you go about commoditizing such a product/service offering? I have a few ideas.

Make the technology all install at once. This is within the technical grasp of the large EPM system publishers. Make a one-click install that works for most mid-market size deployments. When we think of an enterprise deployment, we start talking about multiple servers, web farms, load balancing and other high-end challenges that just don’t exist when the total number of users is 200-300.

Next, pre-configure the software for my use. Sure, it’s true that every company is a little different, but there are many commonalities between firms. Instead of having the software arrive with nothing in it, the publishers could spend time making sure it’s pre-configured for the most common use, with reports, customized fields, lookup values and so on all pre-set. Just add users and you’re there!

Make training available to the masses. There are so many great ways to distribute training now that EPM publishers need to take advantage of: online instructor-led or computer-based courses, teach-yourself books, mixes of online and textbook material, and so on. The cost of such training would need to drop dramatically, and the training would have to be broken into bite-sized pieces so an organization of any size could digest it.

Don’t forget to abandon acronyms. Any arcane science has its secret codes. In the project management world we talk about things like CPM, SPI, CPI, EV, BCWS and so on. Even in high-end project management circles, these acronyms and abbreviations are being abandoned in favor of straight descriptive language.

Finally, build the processes into the software. There is a process to being effective at managing projects, but the 80/20 rule has always applied here: twenty percent of the process delivers eighty percent of the value. A basic, fundamental process, created perhaps around the tenets of the PMI, could be woven right into the software of most EPM systems so that organizations could adopt it or not as they saw fit.

If you think of the project management systems market as though it were a pyramid, with the most experienced users at the top and the neophytes at the bottom, then the use of project management so far has been focused at the very tip of the pyramid. I sometimes hear people say that all kinds of complex algorithmic functionality should be added to project management software in order to do better analysis, but it’s certain that there’s little return for such an investment. No, the big returns for systems publishers lie in making project management systems and project management methodology accessible to the masses. It should be like acquiring any other commodity: a bar of soap or a tube of toothpaste. Project management software, as a commodity, would be used by millions upon millions of users, and that’s where the big payoffs come for software firms.

That makes commoditizing EPM software inevitable.


Chris Vandersluis is the founder and president of HMS Software based in Montreal, Canada. He has an economics degree from Montreal’s McGill University and over 22 years experience in the automation of project control systems. He is a long-standing member of both the Project Management Institute (PMI) and the American Association of Cost Engineers (AACE) and is the founder of the Montreal Chapter of the Microsoft Project Association. Mr. Vandersluis has been published in numerous publications including Fortune Magazine, Heavy Construction News, the Ivey Business Journal, PMI’s PMNetwork and Computing Canada. Mr. Vandersluis has been part of the Microsoft Enterprise Project Management Partner Advisory Council since 2003. He teaches Advanced Project Management at McGill University’s Executive Institute. He can be reached at [email protected]

Adding Manpower to a Late Software Project Makes it Later

In 1975 a mighty clue bat was unleashed on the software world. In The Mythical Man-Month, Fred Brooks reminded us there are finite limits to our ability to compress the development process. Moreover, throwing people onto troubled projects often backfires. These insights should not have surprised us; after all, time and effort are hardly fungible commodities. Even with the best tools and methods, nine women still can’t deliver a baby in one month.



If Brooks merely reminded people of what they already suspected, why do so many software projects still come in late and over budget? A recent study of the QSM database showed that large projects (defined as over 50,000 effective source lines of code, or ESLOC) have only a 19% chance of meeting their planned schedules and a 30% probability of making their budgeted effort. It’s discouraging to see organizations still struggling after thirty years of technological change and process improvement effort. Why does this happen so frequently? More importantly, what can we do to change it?

Technology Advances, But People Remain All Too Human

Part of the problem is that while technology has changed rapidly, human nature remains constant. A critical ingredient in software development – perhaps the critical ingredient – is people. This is an insight technical managers sometimes forget to factor into their plans.

Tools and methods allow us to do things more efficiently, but software development remains a uniquely human endeavor. Consequently, successful project management requires a mastery of both people and technical skills. The first part of this paper deals with the human factors that trip up so many software projects. The latter part brings data to the problem-solving table.

The people problems that plague software teams tend to involve over-optimism, fear of measurement, and using the wrong tools for the job. They fall into three broad categories:

The Triumph of Hope over Experience:

  • Competitive pressure. Bid solicitation (especially in the outsourcing world) involves a great deal of internal pressure on participants to win business. This competitive ‘tunnel vision’ often leads to overly optimistic assumptions that ignore an organization’s proven ability to deliver software.
  • Unfounded productivity assumptions. If it has always taken 20 hours to produce a widget, assembling a crack team of developers will not cut that number to 10. Productivity improvement is a long-term endeavor, not a short-term fix.


Fear of Measurement:

  • Not learning from history. Companies that measure projects well develop organizational self-knowledge, identify capacities and patterns, and come to know their strengths and weaknesses. In short, they learn from experience and develop an empirical basis for project planning. Unfortunately, most organizations lack a formal software measurement and evaluation capacity, or they measure and plan haphazardly. Lacking self-knowledge, these organizations continually put themselves at risk.
  • Not planning for growth. The planned project generally differs from the delivered project in one key component: it is smaller and delivers less functionality. Good project management and effective change control help mitigate scope creep, but a recent QSM study showed a median size growth of about 20%. Projects locked into budgets and schedules based on one set of requirements will be sorely pressed to meet these commitments when the requirements increase.
  • Not watching where we’re going. Most software teams work hard and want to succeed. There is an admirable human tendency to double one’s efforts when problems arise. Such industry should be encouraged, but Herculean effort makes a poor substitute for timely, gentle course corrections. In fact, it is usually too late to take effective countermeasures when problems finally manifest themselves.


Applying the Wrong BandAid:

  • Ineffective or inappropriate countermeasures. There are only three possible courses of action when a project threatens to exceed budget or schedule. Each works within a limited range of possibility and carries an accompanying cost.
    • Relax the schedule. This results in a less expensive project with fewer defects. There are good and bad reasons why this option is not used more often. Legal or contractual requirements may mandate delivery by a certain date; late delivery may invoke penalties or loss of customer goodwill. Also, organizations may have committed project staff to other endeavors. The bad reasons center more on reluctance to change and unwillingness to “lose face”.
    • Reduce the scope of the delivery. Deferring non-critical functionality until a later release (or eliminating it entirely) can keep a project within time and cost constraints. The cost is obvious: less is delivered than was promised or expected.
    • Add staff. Within a narrow range, adding staff can reduce schedule, albeit slightly and at considerable cost. As many managers have discovered, the schedule/effort tradeoff is non-linear: a single unit of schedule reduction “costs” many units of effort, and this ratio increases exponentially as the schedule is compressed (see the sketch after this list).
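To see how quickly that tradeoff bites, here is a small sketch assuming a Putnam-style relationship in which effort grows with the fourth power of schedule compression for a fixed size and productivity; the baseline numbers are invented.

    # A small sketch of the non-linear schedule/effort tradeoff, assuming a
    # Putnam-style relationship: Effort is proportional to 1 / Time**4 for a
    # fixed size and productivity. Baseline numbers are invented.
    baseline_months = 12.0
    baseline_person_months = 100.0

    def effort_for_schedule(target_months):
        """Effort implied by compressing (or relaxing) the baseline schedule."""
        return baseline_person_months * (baseline_months / target_months) ** 4

    for months in (12.0, 11.0, 10.0, 9.0):
        print(f"{months:>4.1f} months -> {effort_for_schedule(months):6.1f} person-months")
    # 12.0 -> 100.0, 11.0 -> ~141.6, 10.0 -> ~207.4, 9.0 -> ~316.0

Under that assumption, shaving three months off a twelve-month schedule roughly triples the effort; whether your own data follows the fourth power exactly or not, the shape of the curve is the point.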

Challenging the Conventional Wisdom

So, what are harried software managers to do when faced with non-linear relationships between time and effort, technology that changes constantly, and human behaviors that, despite experience, remain stubbornly entrenched? This is where measurement is invaluable. Having a good metrics program in place tells organizations several important things: what they have built in the past, what their historical capabilities are, and which patterns in the data may be helpful in the future. A good metrics program does one more thing: armed with a good historical baseline, managers can monitor their progress and make timely course corrections as projects unfold. For managers who need to assess the risks/benefits of using new technologies in real time, this kind of feedback is priceless.

As technology continues to shift the productivity curve outward, managers are tempted to challenge the conventional wisdom. The allure of Agile programming may make them wonder if it isn’t possible, after all, to make that baby in one month instead of nine. This is not necessarily a bad thing. As new tools and methods appear it makes sense to reexamine old assumptions about the relationships between time, effort, and productivity. But that reexamination should be grounded in empirical methods and hard data, not pie in the sky optimism.

Take Fred Brooks’ famous maxim, “Adding manpower to a late software project makes it later”. QSM researchers have found a strong correlation between project size and most other metrics. In our experience, the non-linear relationships between size, time, effort, and defects often make simple rules of thumb less than universally applicable. In practice, these tried-and-truisms often hold for many, if not most, projects; but since many software relationships ‘go exponential’ at certain points along the size spectrum, it’s probably not a bad idea to test them against the data.

“Adding Manpower to a Late Project Makes It ????”

We looked at large Information Technology software projects completed in the last decade to answer the question, “Just how does the ‘mega staff’ strategy affect large projects?” On a scatter plot of effective (new and modified) size vs. average staff we found an interesting separation in projects at the high end of the staffing curve. We call this gap the “Unglued Point”: where staffing runs wild.

Below 100,000 lines of code, the projects are evenly distributed. But beginning at the 100 K ESLOC mark, a hole opens up, separating the bulk of these projects from those staffed at far higher levels.

The trend lines in the first chart are the average and the plus and minus one standard deviation lines. At any point on the size spectrum, there is a wide range of staffing strategies. Above the range of ‘normal’ variability is the unglued point, representing projects with exceptionally high staffing. The high-staff projects sit well above the +1 standard deviation line, which, for a roughly normal spread, places them above about the 84th percentile for staffing.
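A minimal version of that analysis can be sketched in a few lines of Python: fit a log-log trend of average staff against size, then flag projects that sit more than one standard deviation above the trend. The project data below is invented, and a production analysis would fit the trend to the “reasonably staffed” projects only.

    # Flag "unglued" projects: more than +1 sigma above a log-log staffing trend.
    from math import log

    projects = [  # (effective size in ESLOC, average staff) -- invented data
        (20_000, 6), (50_000, 12), (80_000, 15), (120_000, 22),
        (150_000, 30), (200_000, 38), (180_000, 110), (300_000, 220),
    ]

    xs = [log(size) for size, _ in projects]
    ys = [log(staff) for _, staff in projects]
    n = len(projects)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar

    residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in residuals) / (n - 2)) ** 0.5  # residual std deviation

    for (size, staff), r in zip(projects, residuals):
        if r > sigma:  # more than +1 sigma above the staffing trend
            print(f"{size:>8,} ESLOC, {staff:>4} avg staff: above the +1 sigma line")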

What can these high staff projects tell us? How do their schedules compare with other, more reasonably staffed projects? How does the high staff strategy impact project quality? And of course, what are the cost implications of such a strategy?

Let’s find out.

The second graph displays only projects above the unglued point for staffing. The parallel lines show average, plus and minus 1 σ trend lines for “reasonably staffed” projects. Crossing these diagonally is the trend from the high-staffed projects shown. For projects up to 100,000 lines of code, using large teams seems to deliver projects at or below the QSM average for schedule.

However, matters deteriorate rapidly as projects increase in size. At best, aggressive staffing may keep a project’s schedule within the normal range of variability but this strategy becomes increasingly ineffective as project size increases.

What about quality? Again, only high-staff projects are shown. The steeply sloped line crossing the QSM defect trend lines is the average of the mega-staffed projects. Their defect density is consistently worse than average, and the gap widens precipitously as the projects increase in size. The impact of high staffing on project quality is clearly negative.

Finally, what are the cost implications of the large-team strategy? First, let’s review what is purchased in terms of schedule reduction: at best, high staffing moves a project into the range of normal schedule variation, though this strategy becomes increasingly ineffective as projects increase in size. Overall project quality, the project’s legacy to its users, is worse than normal. Now the cost: as the following table illustrates, high-staffed projects are several times more expensive.

As to the question at the start of this section: If you answered ‘later,’ you were correct.

Conclusion

So, how did Brooks’ famous maxim hold up against the evidence? Does adding staff to a late project only make it later? It’s hard to tell. Large team projects, on the whole, did not take notably longer than average. For small projects the strategy had some benefit, keeping deliveries at or below the industry average, but this advantage disappeared at the 100,000 line of code mark. At best, aggressive staffing may keep a project’s schedule within the normal range of variability.

Contrary to Brooks’ law, for large projects the more dramatic impacts of bulking up on staff showed up in quality and cost. Software systems developed using large teams had more defects than average, which would adversely affect customer satisfaction and, perhaps, repeat business. The cost was anywhere from 3 times greater than average for a 50,000 line of code system up to almost 8 times as large for a 1 million line of code system. Overall, mega-staffing a project is a strategy with few tangible benefits, and it should be avoided unless you have a gun pointed at your head. One suspects some of these projects found themselves in that situation: between a rock and a hard place.

How do managers avoid these types of scenarios? Software development remains a tricky blend of people and technical skills, but having solid data at your fingertips and challenging the conventional wisdom wisely can help you avoid costly mistakes. Measurement allows you to manage both the technical and people challenges of software development with confidence whether you are negotiating achievable schedules based on your proven ability to deliver software, finding the optimal team size for that new project, planning for requirements growth, tracking your progress, or making timely mid-course corrections. You might even avoid that giant clue bat!

 


 

Kate Armel is a technical manager with Quantitative Software Management, Inc. She has 8 years of experience in technical writing, metrics research and analysis, and assisting Fortune 1000 firms estimate, track, and benchmark software projects. Ms. Armel was the chief editor and co-author of the QSM Software Almanac.

Donald M. Beckett is a consultant for Quantitative Software Management with more than 20 years of software development experience, including 10 years specifically dedicated to software metrics and estimating. Beckett is a Certified Function Point Specialist with the International Function Point Users Group and has trained over 300 persons in function point analysis in Europe, North America, and Latin America. He was a contributing author to “IT Measurement: Practical Advice from the Experts.” Beckett is a graduate of Tulane University.

Project Manager Perspective

In the last few months I’ve had occasion to come across some project management difficulties that have everything to do with perspective.  We rarely consider the point from which we create our point of view, because we live inside it.  Our perspective is like water to a fish: we swim through it all day but never really notice or declare that it’s there.  Yet not acknowledging that our point of view really comes from our perspective can cause tremendous difficulties in a project and open a massive blind spot.

 

Why does this matter?  Because no two people have the same point of view.  When you look at something, you can get right next to someone and see things from almost their perspective, but it’s not exactly the same.  Their head would have to be displaced for you to see things from where they were.  And even if we do just that, moving their head aside and putting our eyes in the exact spot theirs occupied, it’s already a different time.  Time counts in a perspective too.  So if we can never see things exactly the way someone else sees them, how do we communicate effectively at all?  We can allow for a perspective by declaring our point of view.

 

In the absence of realizing that everything we perceive comes from our own particular point of view and that our perspective is unique, people are left saying “that’s the way it is” rather than “that’s the way I see it”.  Is this just semantics?  It very definitely is not.  If you say “that’s the way it is,” you allow for no other interpretations.  You’ve declared that the universe is that way, there is no other, and that’s the end of the discussion.  If, however, you say “that’s the way I see it,” you’ve made room for at least one other perspective.  Someone can now say “I see it somewhat differently”.  When this happens, you may be surprised.  ‘How could someone interpret this differently?’ you may ask yourself.  Your eyes suddenly open as you realize that perhaps there are yet other interpretations that you’ve not considered.

 

In the world of project management, being able to identify a perspective as a perspective is a critical skill.  In any project, schedules and scope are often identified by the shortest of descriptions.  Four or five words in a schedule may be all the description of scope that a task has.  This isn’t any problem if everyone understands the same thing by those four or five words, but that’s rarely the case.

 

Much more likely is that everyone who reads those four or five words has interpreted them differently. 

 

“Write process documentation” is a description I saw recently in a project in which we were involved. 

  • The new consultant read these three words and imagined a document that would be two or three pages in length. A title and sentence for each element of the process would be plenty, he figured. A total of four hours would be sufficient to the task, he thought.
  • The project manager envisioned a document of 20 or 30 pages. A page per process would be required and a flow diagram of each process would headline each page. The document would have to go to internal review after one draft and then client review after a second. The work would take about 10 days.
  • The client envisioned a manual of about 200 pages. Each process would be a chapter. The flow diagram would headline the chapter, there would be a brief synopsis and then the process would be described in a step-by-step fashion complete with screen shots using their own data. The client expected that this manual would be done naturally as the project evolved, with a new chapter added as each process was created. They saw a review period of 10 days as about right for reviewing the manual once completed.

 

Needless to say, when the documentation was written, everyone was upset.  When the project manager tried to describe what had been done, the client was shocked.  That wasn’t “proper” documentation, the PM was told.  The consultant, too, was upset.  He’d figured the work would amount to a short description; instead, he was obliged to go back to each process and produce a flow diagram and a lot more detail.

 

“Create a link from product A to product B” was a description we found in another project recently.  Despite the documentation the client had seen and a demonstration they’d attended, they were shocked to find out that the link created between the two products needed to be triggered by a user action.  ‘How could anyone call this a link if it wasn’t “automated”?’ they asked.  When it was pointed out that the client had both seen the link working and seen the feature description of the link, they explained that they’d assumed they weren’t seeing the whole picture because “of course” any link would be automated, wouldn’t it?

 

It’s about perspective.  Everything we’ve ever experienced adjusts our perspective and if you don’t at least acknowledge that there’s no such thing as an unbiased observer, then what you’re most likely to encounter are problems with scope and problems with estimates.

 

Project managers hold a pivotal position in the success of the project.  Regardless of the level of authority of a project manager, they are virtually always considered a facilitator for the communication between those who will do the work, those who will consume what is created and those who will manage the people and the work.  Communication skills are critical, of course.  But, in my opinion, an even more critical skill is being able to identify the point of view of each player in the puzzle.  This is partly why Collaboration has been identified as such an important aspect of Enterprise Project Management.

 

So, how do you mitigate the risks of an inadequate appreciation of perspective?  Start by working on your own perspective muscle.  Learn to challenge your own “is-isms”.  When someone says “the way that it is, is…”, challenge the assertion, even if it’s just to yourself.  If you hear yourself saying “that’s the way it is,” work on catching yourself.  A book that helped me in this area is Edward de Bono’s Six Thinking Hats.  It’s a classic and is easy to find.  The book deals with how to generate creativity, and perhaps that’s a good skill for project managers also.

 

If you’re looking for more practical techniques to avoid the kinds of examples we saw above, take a page from the Defense and Aerospace folks.  Project managers who work in those environments are required to include two documents that are essential parts of any mega-project bid: the SOW (Statement of Work) and the BOE (Basis of Estimate).  They’re the bible for anyone doing project work in these environments, because they are a requirement and because they identify in much more explicit detail what is meant by the short scope description we’d normally see in a task name.

 

In the end, if you’re capable of at least identifying your own perspective when you describe something to someone else and acknowledging that they may have their own perspective, you’ll already be ahead of almost everyone.

 

 

Chris Vandersluis is the founder and president of HMS Software based in Montreal, Canada. He has an economics degree from Montreal’s McGill University and over 22 years experience in the automation of project control systems. He is a long-standing member of both the Project Management Institute (PMI) and the American Association of Cost Engineers (AACE) and is the founder of the Montreal Chapter of the Microsoft Project Association. Mr. Vandersluis has been published in numerous publications including Fortune Magazine, Heavy Construction News, the Ivey Business Journal, PMI’s PMNetwork and Computing Canada. Mr. Vandersluis has been part of the Microsoft Enterprise Project Management Partner Advisory Council since 2003. He teaches Advanced Project Management at McGill University’s Executive Institute. He can be reached at [email protected]