N-Version Genetic Programming via Fault Masking

  • Kosuke Imamura
  • Robert B. Heckendorn
  • Terence Soule
  • James A. Foster
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2278)


We introduce N-Version Genetic Programming (NVGP), a method for building fault-tolerant software as an ensemble of automatically generated modules, assembled so as to maximize their collective fault-masking ability. The ensemble is an instance of n-version modular redundancy for fault tolerance: its output is the most frequent output of its n independent modules. By maximizing collective fault masking, NVGP approaches the fault tolerance expected of n-version modular redundancy with independent faults in the component modules. The ensemble is drawn from a large pool of individual modules generated by genetic programming, using operators that increase the diversity of the population. Our experimental test problem was classifying promoter regions in Escherichia coli DNA sequences. On this problem, NVGP reduced both the number and the variance of errors relative to single modules produced by GP, with statistical significance.
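The majority-vote scheme described above is the core of the fault masking: the ensemble returns the most frequent output among its n modules, so a faulty minority is outvoted whenever most modules answer correctly. A minimal sketch in Python (the module functions, names, and the toy sequence below are illustrative, not taken from the paper):

```python
from collections import Counter

def ensemble_vote(modules, x):
    """Majority vote over N independent modules: the ensemble output is
    the most frequent individual output, so faults in a minority of
    modules are masked as long as the majority agrees on the answer."""
    outputs = [m(x) for m in modules]
    winner, _count = Counter(outputs).most_common(1)[0]
    return winner

# Hypothetical modules: each classifies a DNA sequence as
# promoter (1) or non-promoter (0).
good_a = lambda seq: 1
good_b = lambda seq: 1
faulty = lambda seq: 0   # a module with an independent fault

result = ensemble_vote([good_a, good_b, faulty], "TTGACA")
print(result)  # the faulty module is outvoted -> 1
```

NVGP's contribution is not the vote itself but how the n modules are chosen: from a large GP-generated pool, selecting a combination whose faults overlap as little as possible, so that the independence assumption behind majority voting approximately holds.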


Genetic Program · Component Module · Replacement Candidate · Optimal Linear Combination · Fault Tolerant Software
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Pradhan, D.K., Banerjee, P.: Fault-Tolerant Multiprocessor and Distributed Systems: Principles. In: Pradhan, D.K. (ed.): Fault-Tolerant Computer System Design, Chapter 3, Prentice Hall PTR (1996) 142
  2. Avizienis, A., Kelly, J.P.J.: Fault Tolerance by Design Diversity: Concepts and Experiments. IEEE Computer, vol. 17, no. 8 (1984) 67–80
  3. Hilford, V., Lyu, M.R., Cukic, B., Jamoussi, A., Bastani, F.B.: Diversity in the Software Development Process. Proceedings of the Third International Workshop on Object-Oriented Real-Time Dependable Systems, IEEE Computer Society (1997) 129–136
  4. Knight, J.C., Leveson, N.G.: An Experimental Evaluation of the Assumption of Independence in Multiversion Programming. IEEE Transactions on Software Engineering, vol. SE-12, no. 1 (1986)
  5. Hatton, L.: N-version vs. one good program. IEEE Software, vol. 14, no. 6 (1997) 71–76
  6. Pedersen, A.G., Engelbrecht, J.: Investigations of Escherichia coli promoter sequences with artificial neural networks: New signals discovered upstream of the transcriptional startpoint. Proceedings of the Third International Conference on Intelligent Systems for Molecular Biology (1995) 292–299
  7. Towell, G.G., Shavlik, J.W., Noordewier, M.O.: Refinement of approximate domain theories by knowledge-based neural networks. Proceedings of AAAI-90 (1990) 861–866
  8. Ma, Q., Wang, J.T.L.: Recognizing Promoters in DNA Using Bayesian Neural Networks. Proceedings of the IASTED International Conference on Artificial Intelligence and Soft Computing (1999) 301–305
  9. Handley, S.: Predicting Whether or Not a Nucleic Acid Sequence is an E. coli Promoter Region Using Genetic Programming. Proceedings of the First International Symposium on Intelligence in Neural and Biological Systems, IEEE Computer Society Press (1995) 122–127
  10. Imamura, K., Foster, J.A.: Fault Tolerant Computing with N-Version Genetic Programming. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Morgan Kaufmann (2001) 178
  11. Imamura, K., Foster, J.A.: Fault Tolerant Evolvable Hardware Through N-Version Genetic Programming. Proceedings of the World Multiconference on Systemics, Cybernetics, and Informatics (SCI), vol. 3 (2001) 182–186
  12. Hashem, S.: Optimal Linear Combinations of Neural Networks. Neural Networks, vol. 10, no. 4 (1997) 599–614
  13. Hashem, S.: Improving Model Accuracy Using Optimal Linear Combinations of Trained Neural Networks. IEEE Transactions on Neural Networks, vol. 6, no. 3 (1995) 792–794
  14. Soule, T.: Heterogeneity and Specialization in Evolving Teams. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Morgan Kaufmann (2000) 778–785
  15. Freund, Y., Schapire, R.E.: A Short Introduction to Boosting. Journal of the Japanese Society for Artificial Intelligence, vol. 14, no. 5 (1999) 771–780
  16. Breiman, L.: Bagging Predictors. Technical Report No. 421, Department of Statistics, University of California, Berkeley (1994)
  17. Iba, H.: Bagging, Boosting, and Bloating in Genetic Programming. Proceedings of the Genetic and Evolutionary Computation Conference, vol. 2, Morgan Kaufmann (1999) 1053–1060
  18. Land, W.H. Jr., Masters, T., Lo, J.Y., McKee, D.W., Anderson, F.R.: New results in breast cancer classification obtained from an evolutionary computation/adaptive boosting hybrid using mammogram and history data. Proceedings of the 2001 IEEE Mountain Workshop on Soft Computing in Industrial Applications, IEEE (2001) 47–52
  19. Basak, S.C., Gute, B.D., Grunwald, G.D., Opitz, D.W., Balasubramanian, K.: Use of statistical and neural net methods in predicting toxicity of chemicals: A hierarchical QSAR approach. Predictive Toxicology of Chemicals: Experiences and Impact of AI Tools, Papers from the 1999 AAAI Symposium, AAAI Press (1999) 108–111
  20. Opitz, D.W., Basak, S.C., Gute, B.D.: Hazard Assessment Modeling: An Evolutionary Ensemble Approach. Proceedings of the Genetic and Evolutionary Computation Conference, vol. 2, Morgan Kaufmann (1999) 1643–1650
  21. Maclin, R., Opitz, D.: An empirical evaluation of bagging and boosting. Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI Press/MIT Press (1997) 546–551
  22. Bauer, E., Kohavi, R.: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. Machine Learning, vol. 36, no. 1–2, Kluwer Academic Publishers (1999) 105–139
  23. Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), Morgan Kaufmann (1995) 1137–1145
  24. Langdon, W.B., Buxton, B.F.: Genetic Programming for Combining Classifiers. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Morgan Kaufmann (2001)
  25. Soule, T.: Voting Teams: A Cooperative Approach to Non-Typical Problems. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-99), vol. 1, Morgan Kaufmann (1999) 916–922
  26. Wang, J.T.L., Ma, Q., Shasha, D., Wu, C.: Application of neural networks to biological data mining: a case study in protein sequence classification. Proceedings of KDD-2000, Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM (2000) 305–309
  27. Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications. Academic Press/Morgan Kaufmann (1998)
  28. Matthews, B.W.: Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta, vol. 405 (1975) 443–451

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Kosuke Imamura (1)
  • Robert B. Heckendorn (1)
  • Terence Soule (1)
  • James A. Foster (1)

  1. Initiative for Bioinformatics and Evolutionary Studies (IBEST), Dept. of Computer Science, University of Idaho, Moscow
