A Weighted Majority Vote Strategy Using Bayesian Networks

  • Luigi P. Cordella
  • Claudio De Stefano
  • Francesco Fontanella
  • Alessandra Scotto di Freca
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8157)


Most methods for combining classifiers rely on the assumption that the experts to be combined make uncorrelated errors. Unfortunately, this theoretical assumption is not easy to satisfy in practical cases, thus affecting the performance obtainable from any combination strategy. We address this problem by explicitly modeling the dependencies among the experts, estimating the joint probability distribution of the classifier outputs and the true class. In this paper we propose a new weighted majority vote rule that uses the joint probability of each class as the weight for combining the classifier outputs. A Bayesian Network automatically infers the joint probability distribution for each class. The final decision takes into account both the votes received by each class and the statistical behavior of the classifiers. The experimental results confirmed the effectiveness of the proposed method.
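The rule sketched in the abstract can be illustrated in miniature. The function names (`fit_joint`, `weighted_majority`) and the full-table frequency estimate below are illustrative assumptions, not the paper's implementation: the paper learns a Bayesian Network precisely to factorize this joint distribution tractably, whereas a full table is feasible only for very few classifiers and classes.

```python
from collections import Counter, defaultdict

def fit_joint(outputs, labels):
    """Estimate P(e_1..e_L, c) by raw frequency counts over training data.
    (Toy stand-in for the Bayesian Network the paper uses to make this
    estimation tractable for many classifiers.)"""
    counts = Counter()
    for e, c in zip(outputs, labels):
        counts[(tuple(e), c)] += 1
    n = len(labels)
    return {k: v / n for k, v in counts.items()}

def weighted_majority(joint, e, classes):
    """Weighted majority vote: each class's vote count is weighted by the
    joint probability of the observed output pattern with that class."""
    scores = defaultdict(float)
    for c in classes:
        w = joint.get((tuple(e), c), 0.0)   # joint probability as weight
        votes = sum(1 for out in e if out == c)
        scores[c] = votes * w
    return max(scores, key=scores.get)
```

For example, if three classifiers emit the pattern `("a", "a", "b")` and that pattern co-occurred in training mostly with true class `a`, the two votes for `a` receive a large weight while the single vote for `b` receives a small one, so the combiner can overrule a plain majority when the classifiers' correlated errors make it statistically warranted.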


Keywords: Bayesian Network · Joint Probability · Directed Acyclic Graph · Majority Vote · Joint Probability Distribution
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Luigi P. Cordella (1)
  • Claudio De Stefano (2)
  • Francesco Fontanella (2)
  • Alessandra Scotto di Freca (2)

  1. Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione (DIETI), Università di Napoli Federico II, Italy
  2. Dipartimento di Ingegneria Elettrica e dell’Informazione (DIEI), Università di Cassino e del Lazio Meridionale, Italy
