Intelligent assistance for coronary heart disease diagnosis: A comparison study

  • Guido Bologna
  • Ahmed Rida
  • Christian Pellegrini
Diagnostic Problem Solving
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1211)


Using only non-invasive medical information, we compare inductive decision trees built with the C4.5 algorithm, artificial neural networks based on three MLP models, and linear discriminant analysis for diagnosing coronary heart disease. The first neural network model is a constructive MLP called OIL (Orthogonal Incrementing Learning), which builds its hidden neurons during the training phase. The second is a fixed MLP architecture with the same number of hidden neurons as produced by the first model's constructive procedure. The last is a special "interpretable" MLP model with a fixed architecture (IMLP), whose behaviour can be explained through symbolic rule extraction. In general, explanations of connectionist model responses are difficult to obtain, especially when input examples have continuous variables; this is not acceptable for real-world diagnosis applications. The novelty of our study lies in the interpretability of the IMLP model we have developed. For this diagnosis application, all neural networks obtain better predictive accuracies overall than C4.5 and linear discriminant analysis. Results obtained with the OIL method are slightly better than those obtained by IMLP, but the OIL models lack interpretability.
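The comparison protocol described above can be sketched as follows. This is not the authors' code: scikit-learn classifiers stand in for C4.5 and the custom MLP implementations, and synthetic tabular data stands in for the non-invasive coronary measurements, which are not publicly specified in the abstract.

```python
# Hypothetical sketch of the comparison protocol: three classifier families
# evaluated on the same tabular data with cross-validation.
# Synthetic data replaces the original non-invasive measurements.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the coronary heart disease dataset (binary diagnosis).
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           random_state=0)

models = {
    "decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
    "MLP (fixed architecture)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                      random_state=0)),
    "linear discriminant analysis": LinearDiscriminantAnalysis(),
}

# 5-fold cross-validated accuracy for each model family.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The constructive OIL network and the rule-extracting IMLP have no off-the-shelf equivalents here; the fixed-architecture MLP above only illustrates the shared evaluation setup.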


Keywords: Hidden Layer, Linear Discriminant Analysis, Hidden Neuron, Input Neuron, Rule Extraction





Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Guido Bologna¹
  • Ahmed Rida¹
  • Christian Pellegrini¹
  1. Artificial Intelligence Group, Computing Science Center, University of Geneva, Geneva 4, Switzerland
