Comparison of Four Methods of Combining Classifiers on the Basis of Dispersed Medical Data

  • Małgorzata Przybyła-Kasperek
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 57)


The main aim of this article is to compare the results obtained with four different methods of combining classifiers in a dispersed decision-making system. The following fusion methods are used: the majority vote, the weighted majority vote, the Borda count method and the highest rank method. Two of these methods apply when each individual classifier generates a single class label, and two apply when the individual classifier produces a ranking of classes instead of a unique class choice. All of the methods were tested on data from the medical field stored in a dispersed form. The use of dispersed medical data is important because it is common for medical data from one domain to be collected in many different medical centers, and it would be valuable to be able to use all of this accumulated knowledge at the same time.
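The four fusion rules named above can be illustrated with a minimal sketch. This is not the system described in the paper, only a generic formulation of the four rules under simple assumptions: the label-based rules take one predicted class per classifier (plus a weight per classifier for the weighted variant), and the rank-based rules take one ranking (best class first) per classifier.

```python
from collections import Counter

def majority_vote(labels):
    """Plain majority vote: each classifier casts one class label."""
    return Counter(labels).most_common(1)[0][0]

def weighted_majority_vote(labels, weights):
    """Each classifier's vote is scaled by a weight (e.g. its accuracy)."""
    scores = Counter()
    for label, w in zip(labels, weights):
        scores[label] += w
    return scores.most_common(1)[0][0]

def borda_count(rankings):
    """Borda count: in an n-class ranking, the class at position i earns
    n - 1 - i points; points are summed over all classifiers."""
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for pos, label in enumerate(ranking):
            scores[label] += n - 1 - pos
    return scores.most_common(1)[0][0]

def highest_rank(rankings):
    """Highest rank: each class keeps the best (lowest) position it
    achieves in any ranking; the class with the best position wins."""
    best = {}
    for ranking in rankings:
        for pos, label in enumerate(ranking):
            best[label] = min(best.get(label, len(ranking)), pos)
    return min(best, key=best.get)
```

Ties are broken arbitrarily here; a full implementation would need an explicit tie-breaking policy, which the sketch leaves open.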


Keywords: Decision-making system · Global decision · Fusion method · Majority vote · Weighted majority vote · Borda count · Highest rank



Copyright information

© Springer International Publishing Switzerland 2016

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License, which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Institute of Computer Science, University of Silesia, Sosnowiec, Poland
