Prediction = Power

Elaine J. Weyuker

Conference paper. Part of the Lecture Notes in Computer Science book series (LNCS, volume 2644).


This paper argues that predictive metrics give organizations a powerful means of assessing characteristics of their software systems and of making critical decisions based on the computed values. Five predictors are discussed, aimed at different stages of the software lifecycle, ranging from one applied at the earliest stages of development to one applied just before release:

  • a metric based on an architecture review, performed before even low-level design has begun;
  • the identification of characteristics of files that are likely to be particularly fault-prone;
  • a metric to help a tester charged with regression testing determine whether a particular selective regression testing algorithm is likely to be cost-effective for a given software system and test suite;
  • a metric to help determine whether a system is likely to handle a significantly increased workload while maintaining acceptable performance levels; and
  • a metric designed to predict the risk of releasing a system in its current form.
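To make the regression-testing predictor concrete, here is a minimal sketch of the underlying cost comparison, not the paper's exact model: selective retesting pays off when the cost of the selection analysis plus the cost of running the selected subset is lower than the cost of rerunning the entire suite. The function name and parameters below are illustrative assumptions.

```python
def selective_retest_worthwhile(n_tests: int,
                                avg_test_cost: float,
                                analysis_cost: float,
                                expected_fraction_selected: float) -> bool:
    """Predict whether selective regression testing beats retest-all.

    A simplified illustration: the predictor compares the analysis
    overhead plus the expected cost of running the selected subset
    against the cost of rerunning the full suite.
    """
    retest_all_cost = n_tests * avg_test_cost
    selective_cost = (analysis_cost
                      + expected_fraction_selected * n_tests * avg_test_cost)
    return selective_cost < retest_all_cost


# Example: 500 tests at 2 minutes each, 100 minutes of selection analysis.
# If roughly 30% of tests are expected to be selected, selection wins
# (100 + 300 = 400 < 1000 minutes); if 95% are selected, it does not.
print(selective_retest_worthwhile(500, 2.0, 100.0, 0.30))  # True
print(selective_retest_worthwhile(500, 2.0, 100.0, 0.95))  # False
```

The key practical point, reflected in this sketch, is that the expected fraction of tests selected can be estimated in advance (e.g., from coverage data), so the decision can be made before committing to the analysis.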


Keywords: architecture review · fault-prone · metrics · prediction · regression testing · risk · scalability



Copyright information

© IFIP 2003

Authors and Affiliations

  • Elaine J. Weyuker
    1. AT&T Labs – Research, Florham Park
