Ann Marie Neufelder
Software reliability growth models have been used since the 1960s to project software failure rate and reliability from defect data observed during testing [Ref. 1]. The primary disadvantage of these growth models is that they cannot be used until system-level software testing commences. By that phase it is generally too late to improve the software's reliability by any means other than adding testing resources and/or delaying the release schedule. These models are still valuable, however, for proactively planning warranty costs and maintenance staffing.
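A growth model of the kind described above can be sketched in a few lines. The example below fits the well-known Goel-Okumoto model, m(t) = a(1 - e^(-bt)), to weekly cumulative defect counts by least squares; the data, the grid-search fit, and all names here are illustrative, not taken from the study or from any particular tool.

```python
import math

def fit_goel_okumoto(times, cum_defects):
    """Least-squares fit of the Goel-Okumoto mean value function
    m(t) = a * (1 - exp(-b * t)) using a coarse grid search over b.
    Returns (a, b): a = expected total defects, b = detection rate."""
    best = None
    for i in range(1, 2001):
        b = i / 1000.0  # search b in (0, 2]
        f = [1.0 - math.exp(-b * t) for t in times]
        # closed-form least-squares value of a for this fixed b
        a = sum(y * fi for y, fi in zip(cum_defects, f)) / sum(fi * fi for fi in f)
        sse = sum((y - a * fi) ** 2 for y, fi in zip(cum_defects, f))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# weekly cumulative defect counts from system test (made-up data)
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
found = [12, 21, 28, 33, 37, 40, 42, 43]
a, b = fit_goel_okumoto(weeks, found)
remaining = a - found[-1]  # projected defects still latent at week 8
```

The projected latent defects (`remaining`) are what feed warranty-cost and staffing plans, which is why the models remain useful even when applied late.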
Since 1987, models that predict software reliability at the beginning of the project have been available in the public domain [Ref. 2]. These models facilitate advanced planning of all resources required to reach a specific reliability objective. They are also useful for identifying key strengths and key areas of improvement that correspond to reduced failure rate and higher reliability.
The primary question to be answered by this article is "How cost effective are these reliability models?" The author analyzed the actual defects delivered by 32 software organizations. Of these, 10 predicted software reliability early in the life cycle, 13 measured it during testing, and 9 did not measure it at all. The prediction and estimation models used by the 32 organizations in this study are summarized in Tables 1 and 2.
In the study, "cost benefit" is measured by three metrics: actual normalized defect density, probability of a late delivery, and magnitude of late delivery.
These performance metrics were chosen because fixing defects and missing a market/development window are the primary root causes for software development cost overruns.
Table 3 presents the results of the study. In this table, actual normalized defect density is the number of escaped defects per unit of normalized effective software size. The probability of a late delivery is simply the percentage of deliveries in which the organization ships a software product late. Magnitude of late delivery measures how late those deliveries are.
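The three metrics above are straightforward to compute from per-release records. The sketch below shows one plausible formulation; the record layout and field names are the author of this example's assumptions, not the study's actual method.

```python
def delivery_metrics(records, size_units):
    """Compute the three study metrics from release records.
    records: list of (escaped_defects, days_late) per release;
    size_units: normalized effective size (e.g., KSLOC) per release."""
    total_defects = sum(d for d, _ in records)
    density = total_defects / sum(size_units)  # defects per unit of size
    late = [days for _, days in records if days > 0]
    p_late = len(late) / len(records)          # probability of a late delivery
    avg_magnitude = sum(late) / len(late) if late else 0.0  # days late, when late
    return density, p_late, avg_magnitude

# four illustrative releases: (escaped defects, days late)
releases = [(5, 0), (9, 30), (4, 0), (12, 60)]
density, p_late, magnitude = delivery_metrics(releases, [10, 12, 8, 15])
```

Note that magnitude is averaged only over the late deliveries, so the two lateness metrics stay independent, as they are in Table 3.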
As Table 3 shows, the group that makes predictions early in the life cycle has significantly fewer escaped defects and a lower likelihood of delivering software late than either of the other groups. When this group does deliver late, it is by a much smaller margin than the others, so even a late release may still hit the market window. The organizations that use reliability growth models, versus no models at all, have fewer escaped defects and deliveries that are late less often and by a smaller margin. So measuring late is better than not measuring at all.
Now let's see how much it costs, on average, to use the reliability models (see Table 4). Note that a variety of tools, documents, and templates exist in every price range; generally, the more expensive tools require less effort to do the modeling. In addition, all of the models require that effective size be predicted in a parallel effort. Size prediction is a necessary part of software management and is not included in the time estimates for using the models. The primary cost of using the growth models is developing and maintaining an interface between the defect tracking system and the tool or template containing the reliability growth model.
On average, a single escaped software defect can take from one to several weeks of effort to correct, including the time to isolate, repair, check out, retest, reconfigure, and redistribute. So a typical organization needs to prevent only one escaped defect to offset the cost of using the prediction models or a simple reliability growth model.
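The break-even argument above can be made concrete with a back-of-the-envelope calculation. The figures below are assumptions chosen for illustration (Table 4's actual costs are not reproduced here), not data from the study.

```python
# Assumed costs, in staff-hours: fixing one escaped defect takes roughly
# two staff-weeks (midpoint of "a week to several weeks"), and the
# modeling effort per release is taken to be about one staff-week.
hours_per_escaped_defect = 80    # assumed: ~2 staff-weeks per escaped defect
modeling_hours_per_release = 40  # assumed: ~1 staff-week of modeling effort

# Number of escaped defects that must be avoided to pay for the modeling.
defects_avoided_needed = modeling_hours_per_release / hours_per_escaped_defect
```

Under these assumptions the models pay for themselves if they help avoid even a single escaped defect, which matches the article's break-even claim.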
2. Rome Laboratory, "Methodology for Software Reliability Estimation and Assessment," Technical Report RL-TR-92-52, Vols. 1 and 2, 1992. This was one of the first publicly available multi-parameter models for predicting software reliability without the need for testing defect data.
3. Neufelder, Ann Marie, "The Naked Truth About Software Engineering," January 2004.