Automatic Design of Multiple Classifier Systems by Unsupervised Learning

  • Giorgio Giacinto
  • Fabio Roli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1715)


In the field of pattern recognition, multiple classifier systems based on the combination of the outputs of a set of different classifiers have been proposed as a method for developing high-performance classification systems. Previous work clearly showed that multiple classifier systems are effective only if the classifiers forming them make independent errors. This result pointed out the fundamental need for methods aimed at designing ensembles of “independent” classifiers. However, most of the recent work has focused on the development of combination methods. In this paper, an approach to the automatic design of multiple classifier systems based on unsupervised learning is proposed. Given an initial set of classifiers, the approach aims to identify the largest subset of “independent” classifiers. A proof of the optimality of the proposed approach is given. Reported results on the classification of remote sensing images show that this approach allows one to design effective multiple classifier systems.
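The design idea summarized above (start from an initial pool of classifiers, measure how dependent their errors are, and keep a subset whose members fail independently) can be sketched in code. The snippet below is a hypothetical greedy illustration, not the paper's actual algorithm: the function names, the `threshold` parameter, and the use of the compound error probability (the fraction of validation samples misclassified by both classifiers) as the dependence measure are assumptions made for this example.

```python
import numpy as np

def compound_error(errors_i, errors_j):
    # Fraction of validation samples misclassified by BOTH classifiers;
    # a simple stand-in for a pairwise error-dependence measure.
    return np.mean(errors_i & errors_j)

def select_independent_subset(predictions, labels, threshold=0.05):
    """Greedy sketch: group classifiers whose compound error exceeds
    `threshold` (i.e., whose errors look correlated), then keep the
    most accurate member of each group.

    predictions : int array, shape (n_classifiers, n_samples)
    labels      : int array, shape (n_samples,)
    """
    errors = predictions != labels          # boolean error indicators
    unassigned = list(range(errors.shape[0]))
    selected = []
    while unassigned:
        i = unassigned.pop(0)
        group = [i]
        for j in unassigned[:]:
            if compound_error(errors[i], errors[j]) > threshold:
                group.append(j)
                unassigned.remove(j)
        # Retain the classifier with the lowest individual error rate.
        best = min(group, key=lambda k: errors[k].mean())
        selected.append(best)
    return sorted(selected)
```

For instance, if two classifiers in the pool err on the same samples while a third errs elsewhere, the sketch keeps one representative of the correlated pair plus the independent classifier.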


Error Probability, Weight Seed, Automatic Design, Unsupervised Learning, Probabilistic Neural Network
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




  1. L. Xu, A. Krzyzak, and C.Y. Suen, “Methods for combining multiple classifiers and their applications to handwriting recognition”, IEEE Trans. on Systems, Man, and Cybernetics, Vol. 22, No. 3, May/June 1992, pp. 418–435
  2. J. Kittler, M. Hatef, R.P.W. Duin, and J. Matas, “On combining classifiers”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 3, March 1998, pp. 226–239
  3. L.K. Hansen and P. Salamon, “Neural network ensembles”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 12, No. 10, October 1990, pp. 993–1001
  4. K. Tumer and J. Ghosh, “Error correlation and error reduction in ensemble classifiers”, Connection Science, Vol. 8, 1996, pp. 385–404
  5. A.J.C. Sharkey (Ed.), Special Issue: Combining Artificial Neural Nets: Ensemble Approaches, Connection Science, Vol. 8, No. 3 & 4, December 1996
  6. Y.S. Huang and C.Y. Suen, “A method of combining multiple experts for the recognition of unconstrained handwritten numerals”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 17, No. 1, January 1995, pp. 90–94
  7. Y.S. Huang, K. Liu, and C.Y. Suen, “The combination of multiple classifiers by a neural network approach”, Int. Journal of Pattern Recognition and Artificial Intelligence, Vol. 9, No. 3, 1995, pp. 579–597
  8. G. Giacinto and F. Roli, “Ensembles of neural networks for soft classification of remote sensing images”, Proc. of the European Symposium on Intelligent Techniques, Bari, Italy, pp. 166–170
  9. D. Partridge, “Network generalization differences quantified”, Neural Networks, Vol. 9, No. 2, 1996, pp. 263–271
  10. D. Partridge and W.B. Yates, “Engineering multiversion neural-net systems”, Neural Computation, Vol. 8, 1996, pp. 869–893
  11. D.W. Opitz and J.W. Shavlik, “Actively searching for an effective neural network ensemble”, Connection Science, Vol. 8, No. 3 & 4, December 1996, pp. 337–353
  12. B.E. Rosen, “Ensemble learning using decorrelated neural networks”, Connection Science, Vol. 8, No. 3 & 4, December 1996, pp. 373–383
  13. C. Ji and S. Ma, “Combination of weak classifiers”, IEEE Trans. on Neural Networks, Vol. 8, No. 1, January 1997, pp. 32–42
  14. K.D. Bollacker and J. Ghosh, “Knowledge reuse in multiple classifier systems”, Pattern Recognition Letters, Vol. 18, 1997, pp. 1385–1390
  15. B. Littlewood and D.R. Miller, “Conceptual modelling of coincident failures in multiversion software”, IEEE Trans. on Software Engineering, Vol. 15, No. 12, 1989, pp. 1569–1614
  16. A.K. Jain and R.C. Dubes, Algorithms for Clustering Data, Prentice Hall, 1988
  17. F. Roli, “Multisensor image recognition by neural networks with understandable behaviour”, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 10, No. 8, 1996, pp. 887–917
  18. S.B. Serpico and F. Roli, “Classification of multi-sensor remote-sensing images by structured neural networks”, IEEE Trans. on Geoscience and Remote Sensing, Vol. 33, 1995, pp. 562–578
  19. S.B. Serpico, L. Bruzzone, and F. Roli, “An experimental comparison of neural and statistical non-parametric algorithms for supervised classification of remote-sensing images”, Pattern Recognition Letters, Vol. 17, 1996, pp. 1331–1341

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Giorgio Giacinto (1)
  • Fabio Roli (1)

  1. Department of Electrical and Electronic Engineering, University of Cagliari, Piazza D’Armi, Cagliari, Italy
