Is There Magic Associated With Software Benchmarks?

Image source: https://www.nist.gov/cybersecurity

Presented: April 11, 2013 11:00 am (ET)
Presented by: Donald J. Reifer

Software productivity benchmarks have proven to be a useful tool for determining whether an organization's software estimates are realistic. They also give a firm the yardsticks it needs to judge whether its current software cost, productivity, and quality performance is competitive, and whether it can deliver what it promises on time and within negotiated budgets. To set the stage, the speaker will define key terms and concepts and review the benchmarking process. He will then highlight twelve lessons learned in developing benchmarks and discuss more than a decade of practical experience using them to foster improvements in software-intensive firms. He will demonstrate that, when used properly, benchmarks serve as a helpful management tool because they set realistic performance expectations. Although skeptics will always question the numbers, he will show that benchmarks produced with rigor and care stand up to scrutiny and provide a solid basis for management and control.
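To illustrate the yardstick idea, here is a minimal sketch (not taken from the talk) of how a project's measured productivity might be compared against a benchmark range. The productivity measure (KSLOC per staff-month), the project figures, and the benchmark range are all illustrative assumptions, not data from the presentation.

```python
# Minimal sketch: compare a project's measured productivity against a
# hypothetical benchmark range. All names and numbers are illustrative.

from dataclasses import dataclass


@dataclass
class Project:
    name: str
    size_ksloc: float          # delivered size, thousands of source lines
    effort_staff_months: float  # total development effort

    @property
    def productivity(self) -> float:
        """Productivity in KSLOC per staff-month."""
        return self.size_ksloc / self.effort_staff_months


def compare_to_benchmark(project: Project, low: float, high: float) -> str:
    """Report whether measured productivity falls in the benchmark range [low, high]."""
    p = project.productivity
    if p < low:
        return f"{project.name}: {p:.2f} KSLOC/SM is below the benchmark range ({low}-{high})"
    if p > high:
        return f"{project.name}: {p:.2f} KSLOC/SM is above the benchmark range ({low}-{high})"
    return f"{project.name}: {p:.2f} KSLOC/SM is within the benchmark range ({low}-{high})"


if __name__ == "__main__":
    # Hypothetical project and a hypothetical benchmark range for its domain.
    proj = Project(name="Build 3", size_ksloc=120.0, effort_staff_months=400.0)
    print(compare_to_benchmark(proj, low=0.2, high=0.5))
```

In practice, an estimate or a project's actuals falling outside such a range would prompt the kind of scrutiny of assumptions the speaker describes, rather than an automatic conclusion that the estimate is wrong.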
