Published here August 2009.



Information Technology: Management and Oversight of Projects Totaling Billions of Dollars Need Attention
A statement by David A. Powner, Director, IT Management Issues, 4/28/09

According to the above Statement, Mr. Powner made a number of interesting observations before the Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Committee on Homeland Security and Governmental Affairs, U.S. Senate. These should be of particular interest not just to IT project managers but to all those senior managers responsible for the selection and oversight of portfolios of IT projects. Items of particular interest are abstracted below; the full report can be found in the document GAO-ITbillionsOfDollars.pdf.[1]

Apparently, billions of taxpayer dollars are spent on federal information technology (IT) projects each year. Indeed, for fiscal year 2009, federal IT spending has risen to an estimated $71 billion. Whichever way you cut it, that's big money to be spending on IT projects. Unfortunately, things are not always as they should be. For example:

  • Beginning in 2004, the Office of Management and Budget (OMB) identified major projects that were poorly planned by placing them on a quarterly Management Watch List.
  • OMB took steps to improve the identification of poorly planned and performing IT projects, but projects totaling billions of dollars require more attention.
  • [In 2008] OMB determined that 352 projects - totaling about $23.4 billion - were poorly planned ... primarily because of weaknesses in the way they addressed (1) cost, schedule, and performance; (2) security; (3) privacy; and (4) acquisition strategy.

To address these concerns, OMB has used certain key oversight mechanisms to highlight troubled projects, justify IT investments, and manage cost and schedule growth. These include:

  • The Management Watch List noted above identifying major IT projects that are poorly planned;
  • A list of high-risk projects that are performing poorly; and
  • Investment justifications for major IT projects that agency officials are required to prepare to demonstrate both to their management and to OMB that the projects are well planned.

So far, so good. But then the Statement advocates the use of a project management tool known as Earned Value Management (EVM). Indeed, it says:

  • OMB has required the use of Earned Value Management, but Agencies' Earned Value Management Policies and Implementation need improvement;
  • EVM is a project management approach that, if implemented appropriately, provides objective reports of project status, produces early warning signs of impending schedule delays and cost overruns, and provides unbiased estimates of a program's total costs.[2]
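For readers unfamiliar with the mechanics being described, the standard EVM indicators can be sketched as below. The function name and the dollar figures are purely illustrative - they are not drawn from the GAO report.

```python
def evm_metrics(bac, pv, ev, ac):
    """Classic EVM indicators for one reporting period.

    bac -- budget at completion (total planned budget)
    pv  -- planned value (budgeted cost of work scheduled to date)
    ev  -- earned value (budgeted cost of work actually performed)
    ac  -- actual cost of the work performed to date
    """
    cpi = ev / ac  # cost performance index: < 1 means over budget
    spi = ev / pv  # schedule performance index: < 1 means behind schedule
    return {
        "cost_variance": ev - ac,      # negative => over budget
        "schedule_variance": ev - pv,  # negative => behind schedule
        "cpi": cpi,
        "spi": spi,
        # A common estimate-at-completion formula: scale the budget by CPI.
        "eac": bac / cpi,
    }

status = evm_metrics(bac=1_000_000, pv=400_000, ev=350_000, ac=420_000)
print(status)
```

Note that every figure in this calculation except `ac` depends on the earned value `ev` - which is exactly the number that, as argued below, is so hard to establish objectively for intellectual work.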

We seriously question the comfort suggested by this description of the EVM tool.

Conceptually, EVM is an attractive mechanical approach to calculating project status. But the whole thing hinges on establishing the value earned to date. That is particularly difficult on IT projects, especially when it comes to reaching "unbiased" estimates of current value, because we are dealing here with intellectual work. Unlike a physical deliverable such as a building under construction, you cannot "see" how far you have got relative to the final structure, so you have to come up with a notional estimate of status. If you are too optimistic, you are simply proved wrong later. If you are pessimistic, you get put on the "Watch List". Guess which way those responsible will naturally tend to lean?

True, some form of EVM is essential for knowing where you are relative to where you planned to be. But that presumes the plan is realistic and achievable as it stands. How often is that the case? Next, if you compare EVM period on period, you know how fast you are traveling and, more importantly, the rate of change of progress. How many organizations actually show that increment on their reports and compare it with previous increments? An added complication is that it is sometimes difficult to assemble "all the costs to the prescribed cut-off date", especially since financial accounting systems do not normally recognize expenditures until some time after commitments have been made - often quite a long time.
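The period-on-period comparison suggested above amounts to simple differencing of the cumulative earned value figures. A minimal sketch, using hypothetical monthly numbers:

```python
# Hypothetical cumulative earned value at successive monthly cut-offs,
# in $ thousands. These figures are invented for illustration.
cumulative_ev = [50, 120, 180, 220, 250]

# Per-period increment: how fast the project is traveling.
increments = [b - a for a, b in zip(cumulative_ev, cumulative_ev[1:])]

# Change in the increment: is progress accelerating or decelerating?
acceleration = [b - a for a, b in zip(increments, increments[1:])]

print(increments)    # [70, 60, 40, 30]
print(acceleration)  # [-10, -20, -10] -- progress is steadily slowing
```

In this made-up example the raw earned value is still climbing each month, but the increments reveal a project losing momentum - exactly the signal that rarely appears on a standard progress report.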

But EVM does not tell you anything about where you are going, how you will get there, and, most importantly, when and for how much. That requires a completely separate exercise: re-forecasting, especially when it comes to projecting the final cost. And in our experience, that is where most organizations fail.

So EVM is a poor predictor of the final time and cost - the very issues management should be most interested in. In our view, the real solution is a less frequent (perhaps 3-monthly) total re-estimate of all known outstanding work to completion, based on performance experience to date. This is then added to the cost of all the work done to date to arrive at the forecast final cost, which becomes the base for determining percent complete.
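The re-estimate approach just described reduces to two lines of arithmetic, sketched here with invented figures; the function name is ours, not a standard one:

```python
def reestimate(actual_to_date, etc_remaining):
    """Quarterly bottom-up re-forecast, per the approach suggested above.

    actual_to_date -- cost of all the work done to date
    etc_remaining  -- fresh estimate-to-complete of all known outstanding
                      work, based on performance experience so far
    """
    eac = actual_to_date + etc_remaining      # forecast final cost
    percent_complete = actual_to_date / eac   # base: forecast total, not original budget
    return eac, percent_complete

eac, pct = reestimate(actual_to_date=420_000, etc_remaining=780_000)
print(eac, round(pct, 3))  # 1200000 0.35
```

The key difference from the mechanical EVM formula is that the denominator is a current, bottom-up estimate of the whole job rather than the original budget scaled by an index.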

So much for the individual project. However, one suspects that the problem is much larger, in that large portfolios of programs and projects are involved, where the overall risks are even greater than the sum of the individual project risks or shortfalls. In that case, a sound project portfolio management methodology is required, managed at the senior management level. Such a methodology must encompass the entire product life cycle and include feedback of the benefits actually derived, so that initial project selection can be improved. Such a methodology still needs to be developed and generally accepted within the IT industry. When it is, extensive training will be required at senior management levels to implement this management process effectively.

This is all a far cry from EVM. But just maybe the primary thrust of the report is simply to demonstrate that governments waste an awful lot of our money. But then most of us know that already.


As we have emphasized elsewhere,[3] rigorous application of Earned Value Management requires significant management effort and is difficult to justify for ongoing assessment of progress. This is especially true of information technology work where many of the tasks require intellectual effort that is difficult to assess objectively in terms of percentage complete. Consequently, it is appealing to devise workarounds for these challenges.

The simplest workaround is to classify each task simply as not started, started, or finished. All tasks started but not finished are allowed 50% of their budgeted value, while all tasks truly finished are allowed 100%.[4] This assessment is pretty coarse and only works on projects with a large number of tasks to be assessed.
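The "coarse" 50/100 rule can be sketched as follows; the task budgets and status labels are illustrative only:

```python
# Each task is (budgeted_value, status); status labels are our own.
tasks = [
    (100, "finished"),
    (200, "started"),
    (150, "not_started"),
    (50,  "started"),
]

# Credit allowed per the 50/100 rule described above.
CREDIT = {"not_started": 0.0, "started": 0.5, "finished": 1.0}

earned_value = sum(budget * CREDIT[status] for budget, status in tasks)
print(earned_value)  # 225.0
```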

However, we have found the following compromise more equitable:

  • Take 15% once a task has actually started.
  • Take 45% when you are comfortable that the task is around half way through.
  • Take 85% when you think the task is pretty nearly done.
  • You only take the last 15% when your client (or next person in the work flow) actually accepts and uses the deliverable from the task.

You then apply this as a standard procedure throughout your project progress reporting.

You are still relying on your assessments to "average out" over several tasks, but the number of tasks to do so is much lower than in the case of the "coarse" approach.
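The 15/45/85/100 compromise above can be sketched the same way; the milestone labels and task figures here are illustrative, not prescribed:

```python
# Credit allowed at each milestone, per the compromise rule above.
CREDIT = {
    "not_started": 0.00,
    "started":     0.15,  # task has actually started
    "half_way":    0.45,  # comfortably around half way through
    "nearly_done": 0.85,  # pretty nearly done
    "accepted":    1.00,  # deliverable accepted and used by the client
}

# Each task is (budgeted_value, milestone reached).
tasks = [
    (100, "accepted"),
    (200, "half_way"),
    (150, "started"),
    (50,  "not_started"),
]

earned_value = sum(budget * CREDIT[stage] for budget, stage in tasks)
print(earned_value)  # 100 + 90 + 22.5 + 0 = 212.5
```

Note that the final 15% is deliberately withheld until acceptance, so a task that merely "looks finished" cannot claim full value - which is the rule's defense against the optimism bias discussed earlier.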

1. GAO document available from accessed 6/9/09
2. Ibid, p10
3. For example:;
4. See: