Developing Dependable Systems by Maximizing Component Diversity

  • Jeff Tian
  • Suku Nair
  • LiGuo Huang
  • Nasser Alaeddine
  • Michael F. Siok


In this chapter, we maximize component diversity as a means to achieve system dependability. Component diversity is examined from four perspectives: 1) an environmental perspective, which emphasizes a component's strengths and weaknesses under diverse operational environments; 2) a target perspective, which examines a component's different dependability attributes, such as reliability, safety, security, fault tolerance, and resiliency; 3) an internal perspective, which focuses on internal characteristics that can be linked logically or empirically to external dependability attributes; and 4) a value-based perspective, which focuses on a stakeholder's value assessment of different dependability attributes. Based on this examination, we develop an evaluation framework that quantifies component diversity into a matrix, and we use a mathematical optimization technique called data envelopment analysis (DEA) to select the optimal set of components to ensure system dependability. Illustrative examples demonstrate the viability of our approach.
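The selection step can be illustrated with a minimal sketch. In the degenerate case of one input and one output per component, the DEA (CCR) efficiency score reduces to each component's output-to-input ratio normalized by the best ratio in the set; components scoring 1.0 lie on the efficient frontier. The component data below (cost as input, a reliability score as output) is hypothetical and for illustration only; the general multi-input, multi-output case requires solving a linear program per component.

```python
def dea_efficiency(inputs, outputs):
    """Single-input/single-output CCR efficiency: each unit's
    output/input ratio divided by the best ratio in the set."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical components: input = development cost, output = reliability score
cost = [4.0, 2.0, 5.0]
reliability = [8.0, 6.0, 5.0]

scores = dea_efficiency(cost, reliability)
# Components with a score of 1.0 form the efficient frontier
efficient = [i for i, s in enumerate(scores) if s == 1.0]
```

Here component 1 attains the best reliability-per-cost ratio (6.0/2.0 = 3.0) and is the sole efficient unit; the others are scored relative to it.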


Keywords: Data Envelopment Analysis, Fault Tolerance, Software Reliability, Dependable Systems





The work reported in this chapter was supported in part by NSF Grant IIP-0733937. We thank the anonymous reviewers for their constructive comments.



Copyright information

© Springer-Verlag US 2009

Authors and Affiliations

  • Jeff Tian, Suku Nair, LiGuo Huang, Nasser Alaeddine, and Michael F. Siok: Southern Methodist University, Dallas, USA
