It was late afternoon in April of 1999 when the phone in my office rang. The conversation went something like this:
“This software estimate just landed on my desk and I need to finish it by close of business today to support a fixed price bid.”
“What can you tell me about this project?”
“We’re rewriting an existing mainframe billing system developed in COBOL. The new system will be written in C++, so it should be much smaller than the old system.”
“Great – perhaps we can use the existing system as a rough baseline. How big is it?”
“I don’t have that information.”
“Will this be a straight rewrite, or will you add new features?”
“Not sure – the requirements are still being fleshed out.”
“What about resources? How many people do you have on hand?”
“Not sure – the team size will depend on how much work must be done… which we don’t know yet.”
“Can we use some completed projects to assess your development capability?”
“Sorry, we don’t have any history.”
“That’s OK – even without detailed information on scope, resources, or productivity we should still be able to produce a rough order of magnitude estimate based on relevant industry data.”
“Rough order of magnitude??? My boss will never accept that much risk on a fixed price bid! Isn’t there some general rule of thumb we can apply?”
Welcome to the world of software cost estimation, where the things we know – the known knowns – are often outweighed by the things we don’t know. Numerous estimation methods exist. Scope may be described using effort, delivered code volume, features, or function points. Expert judgment, Wideband Delphi, top-down, bottom-up, parametric, and algorithmic models each have their devoted champions. But regardless of method, all estimates are vulnerable to risk arising from uncertain inputs, requirements changes, and scope creep. Skilled estimators and better methods can reduce this risk, but they cannot eliminate it. Thus, the ability to identify and account for uncertainty remains a vital component of successful risk management.
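To make the idea of a parametric model concrete, here is a minimal sketch of Basic COCOMO, one of the classic algorithmic estimation models. The coefficients are the published Basic COCOMO constants, not values calibrated to any particular organization, and the example project size and ±50% bounds are hypothetical, chosen only to show how uncertainty in a single input (size) propagates into a wide effort range:

```python
# Basic COCOMO (Boehm, 1981): effort in person-months as a function of
# estimated size in thousands of delivered source lines (KLOC).
# Coefficients (a, b) by project mode -- published Basic COCOMO constants.
COCOMO_MODES = {
    "organic":       (2.4, 1.05),  # small teams, familiar problem domain
    "semi-detached": (3.0, 1.12),  # mixed experience, moderate constraints
    "embedded":      (3.6, 1.20),  # tight hardware/schedule constraints
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Rough order-of-magnitude effort estimate in person-months."""
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b

# Hypothetical 50 KLOC rewrite, with +/-50% size bounds to make the
# uncertainty in the input explicit rather than hiding it in one number.
nominal = basic_cocomo_effort(50, "semi-detached")
low = basic_cocomo_effort(25, "semi-detached")
high = basic_cocomo_effort(75, "semi-detached")
print(f"effort: {low:.0f}-{high:.0f} person-months (nominal {nominal:.0f})")
```

Note that even this simple model makes the estimator's dilemma visible: a ±50% error in the size input produces an effort range spanning more than a factor of three, which is exactly the uncertainty a fixed-price bidder would prefer to wish away.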