Clearing the Decks
In this section, we address several frequent objections head-on. If you find our reasoning insufficient or downright defective, you have the opportunity to bail out at the end of it.
The first is that all projects are different, so we can't study them with statistical methods. I find this argument specious. Were it true, projects would be unique in our inability to use mathematics to better understand them. Manufacturing has adopted statistical sampling as a reliable way to ensure quality control.
People are all different, yet the medical profession uses statistics to reason about diagnosis and treatment. My position: Projects are more alike than they are different, and studying them statistically makes sense. One caveat: Projects that are research-heavy, cutting edge, or essentially one-offs will pose problems; we called them Class 1 projects in Predicting Project Outcomes. But for Classes 2 through 5, those problems don't exist.
A second objection is that we don't have the information required for statistical study, that the data either don't exist or are inaccurate. It's a chicken-and-egg problem: We don't analyze statistically because we don't have good data, and we don't collect data because no one is interested in using it. The real problems are political, social, and economic. Many embarrassing project failures are quietly buried for political reasons.
Company-wide data are hard to acquire and centrally store for social reasons. And industry-wide data are scarce and obscure for competitive economic reasons. But in the last 40 years, some disciplines have been able to acquire industry-wide compensation data (see the Radford databases, for example), so we suspect that this problem is not insurmountable.
Perhaps if more PMs were strident about needing data to do a better job, their companies would make accurate collection, storage, and access a higher priority. A new generation of PMs moving into higher management circles needs to take the lead if we are going to progress to more scientific methods.
Finally, there is the perpetual conundrum: How do you define success? The answer is, in lots of ways. But for this article, we are going to take "success" to mean "finished on time." That's unambiguous when looking at historical projects: They either finished on time or they didn't. Similarly, we know which projects made their first major milestone, and which didn't.
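To make that operational definition concrete, here is a minimal sketch in Python, assuming hypothetical records with a planned and an actual finish date for each historical project; the names, fields, and dates are invented for illustration, not drawn from any real data set.

    from datetime import date

    # Hypothetical historical records (all names and dates invented):
    # each project has a committed finish date and an actual finish date.
    projects = [
        {"name": "Alpha", "planned": date(2023, 3, 1),  "actual": date(2023, 2, 20)},
        {"name": "Beta",  "planned": date(2023, 6, 15), "actual": date(2023, 9, 1)},
        {"name": "Gamma", "planned": date(2024, 1, 10), "actual": date(2024, 1, 10)},
    ]

    # "Success" in the sense used in this article: finished on or before
    # the planned date. The outcome is unambiguous and binary.
    def on_time(project):
        return project["actual"] <= project["planned"]

    successes = sum(on_time(p) for p in projects)
    print(f"On-time: {successes}/{len(projects)} = {successes / len(projects):.0%}")

The same binary test applies to milestones: compare the date a milestone was committed for against the date it was actually reached.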
All models depend on assumptions and data, and can be no better than the assumptions and data that go into them. But I find too much throwing the baby out with the bath water in the above objections. Let's work with what we have, attempt to improve on all fronts, and make progress. The other choice, to do nothing, is unacceptable.