Precision of Measurement Is No Guarantee of Usefulness of What’s Measured

One of the main myths of traditional project management relates to measurement precision. Traditional project managers have numerous statistical tools in their arsenal, and measures such as earned value or cost performance indices are touted as providing a precise, scientific gauge of how we’re doing. All of this points back to a Tayloristic view of software and product development. To put it another way, we assume that developing software is a clearly defined process and thus amenable to scientific measurement and monitoring.

The problem is, as most proponents of agile development methods know, product development isn’t, for the most part, a defined process amenable to statistical process control. That’s one of the reasons empirical methods like Scrum were developed. Given that, the value of these measurements no longer depends on their precision, if it ever did. And yet we often continue to try to apply these kinds of measurement to our process and its output, often at considerable cost.

Early in the 19th century, reputable scientists performed many precise measurements on human heads using an assortment of mechanical calipers. Using a complex and detailed map of the 27 brain “organs” reflected in the various bumps and fissures of the skull, a practitioner could supposedly determine such aspects of an individual’s personality as “comparative sagacity,” “cleverness,” or “poetical talent.”

“In its heyday during the 1820s-1840s, phrenology was often used to predict a child’s future life, to assess prospective marriage partners and to provide background checks for job applicants.” (http://en.wikipedia.org/wiki/Phrenology)

Likewise, attempts at precise and detailed statistical process control of an inherently imprecise and empirical process like software development offer little or no predictive power and amount to unnecessary overhead with little or no business value. The extremely simple means we use in Scrum to track progress and plan releases (e.g. burndown charts, velocity tracking, etc.) provide more value for less effort. Our measures are intentionally imprecise, reflecting the uncertainty of what we do. And yet, as part of an empirical process model with short inspect-and-adapt feedback loops, we know as much about our efforts as someone taking detailed and precise measurements, or more, while avoiding the false impression that we know more than we do.
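To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical numbers) of the kind of intentionally imprecise forecast a Scrum team might make from velocity: rather than a single "precise" completion date, it yields a range based on the spread of recent sprint velocities.

```python
import math

def forecast_sprints(remaining_points, velocities):
    """Estimate sprints remaining as a (best-case, worst-case) range,
    using the fastest and slowest of the team's recent velocities."""
    best = max(velocities)   # optimistic: team's fastest recent sprint
    worst = min(velocities)  # pessimistic: team's slowest recent sprint
    return (math.ceil(remaining_points / best),
            math.ceil(remaining_points / worst))

# Hypothetical backlog of 120 story points; velocities from four sprints.
low, high = forecast_sprints(120, [18, 22, 25, 20])
print(f"Likely {low}-{high} sprints remaining")  # prints "Likely 5-7 sprints remaining"
```

The point of reporting a range rather than a point estimate is exactly the article's: the imprecision is honest, and the forecast improves each sprint as new velocity data feeds back in.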

Jimi Fosdick
CST