AmbySoft.com

IT Project Success Rates: 2008 Open Research

This open research into IT project success rates was performed in early December 2008; there were 279 respondents. The survey was announced on the Dr. Dobb’s Journal (DDJ) mailing list.

The Survey Results

The results of the survey are summarized in my article Software Development Success Rates.

Some findings include:

  1. As Figure 1 depicts, iterative and agile project teams had higher success rates than traditional teams, which in turn had higher success rates than ad-hoc teams.
  2. People really don’t define success in terms of “on time, on budget, to specification” regardless of what the theory folks may claim. For example:
    • 83% of respondents believe that meeting actual needs of stakeholders is more important than building the system to specification.
    • 82% believe that delivering high quality is more important than delivering on time and on budget.
    • 70% believe that providing the best ROI is more important than delivering under budget.
    • 58% believe that delivering when the system is ready to be shipped is more important than delivering on schedule.
  3. For all the “RUP bashing” that goes on within the agile community, as Figure 1 shows, iterative projects are just as successful as agile projects.
  4. Figure 2 depicts how iterative and agile approaches are more effective than traditional and ad-hoc approaches at delivering higher quality, greater ROI, better stakeholder satisfaction, and timely delivery. Traditional approaches were better than ad-hoc when it came to quality, but ad-hoc approaches were better at delivering functionality.
  5. Figure 3 depicts the success rate by paradigm and distribution level. Regardless of paradigm, the more distributed the team, the lower the success rate. For any given distribution level, agile and iterative teams did as well as or better than traditional approaches, and traditional always did better than ad-hoc.

Figure 1. Success rates by development paradigm.

Figure 2. Success factors by paradigm (Scale is from -10 to +10).

Figure 3. Success rates by paradigm and distribution level.

Downloads

Survey questions

The Survey Questions (100 KB)

Survey Data File

Raw Data (144 KB)

Survey Presentation

Summary Presentation (150 KB)

 

What You May Do With This Information

You may use this data as you see fit, but you may not sell it in whole or in part. You may publish summaries of the findings, but if you do so you must reference the survey accordingly (include its name and the URL of this page). Feel free to contact me with questions. Better yet, if you publish, please let me know so I can link to your work.

 

Discussion of the Results

  1. I was disappointed with the number of respondents, as I was really hoping for over 1000. As a result, the true success rates lie within a wider range than in previous surveys. I think people are becoming fatigued by answering surveys, as it has become incredibly easy for people such as myself to run surveys online.
  2. This survey measures success as defined by the respondent; it does not force a definition of success on them. The success rate was calculated as the weighted average of each range’s midpoint (e.g., the 90-100% range averages to 95%), weighted by the number of respondents in that range.
  3. These figures vary significantly from those of the Standish Group’s Chaos Report, which reports a 34% success rate and a 51% “challenged” rate. They define success as “on time, on budget, meeting the spec”, but that definition doesn’t seem to hold when we ask people what they actually value. I’m not convinced that it’s appropriate to force a definition of success on people, regardless of how much easier it would make processing the resulting data.
  4. It is difficult to compare the numbers from the Chaos Report and this survey. This survey is open, and you have complete access to the original questions and data, whereas the Chaos Report is closed to outsiders. I invite the Standish Group to open source their material.
  5. The request that went out indicated that the survey was exploring success rates, so the success rate figures could be inflated by selection bias (organizations that are really struggling may not be responding).
  6. The success rate difference between agile and traditional has narrowed from previous surveys, which I suspect is the result of fewer respondents to this one.
  7. The similarity in success rates between agile and iterative likely reflects the similarities between the approaches, and may suggest that the greatest determinant of success is the short feedback cycle common to both.
  8. This survey suffers from the fundamental challenges faced by all surveys.
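The midpoint-weighted-average calculation described in point 2 can be sketched as follows. The bins and respondent counts below are illustrative placeholders, not the actual survey data:

```python
# Sketch of the midpoint-weighted-average method: each response falls into a
# success-rate range, and the overall rate is the average of the range
# midpoints weighted by respondent counts. Counts here are made up.
bins = {
    (90, 100): 40,  # (low %, high %) -> number of respondents in that range
    (75, 89): 30,
    (50, 74): 20,
    (0, 49): 10,
}

total_respondents = sum(bins.values())
success_rate = sum(
    ((low + high) / 2) * count for (low, high), count in bins.items()
) / total_respondents

print(f"Estimated success rate: {success_rate:.1f}%")
```

With these placeholder counts, each range's midpoint (95, 82, 62, and 24.5) is weighted by its share of the 100 respondents, yielding an overall estimate of about 77%.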

 

Links to Other Articles/Surveys

  1. My other surveys
  2. Answering the “Where’s the Proof that Agile Methods Work” Question

 

Why Share This Much Information?

I’m sharing the results, and in particular the source data, of my surveys for several reasons:

  1. Other people can do a much better job of analysis than I can. If they publish online, I am more than happy to include links to their articles/papers.
  2. Once I’ve published my column summarizing the data in DDJ, I really don’t have any reason not to share the information.
  3. Too many traditionalists out there like to use the “where’s the proof” question as an excuse not to adopt agile techniques. By providing some evidence that a wide range of organizations seem to be adopting these techniques maybe we can get them to rethink things a bit.
  4. I think that it’s a good thing to do and I invite others to do the same.

 

Suggested Reading

Douglas Hubbard’s How to Measure Anything (2nd Edition) is an eye-opening book for anyone trying to understand how to measure concepts, such as developer productivity, that are often perceived as difficult to measure. If you choose to think outside of the metrics box for a bit you’ll quickly realize that you can easily measure information which is critical to your decision-making process.