Alternative Ensemble Classifier Based on Penalty Strategy for Improving Prediction Accuracy

  • Cindy-Pamela Lopez
  • Maritzol Tenemaza
  • Edison Loza-Aguirre
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 876)


The increasing demand for accurate classification systems in user-facing services has called for the application of machine learning techniques. One of the most widely used techniques consists in grouping classifiers into an ensemble classifier. The resulting classifier is generally more accurate than any individual classifier. In this work, we propose an alternative ensemble classification system that combines three classifiers: Naive Bayes, Random Forest and Multilayer Perceptron. To increase the robustness of prediction, we combine the algorithms using penalty calculations instead of a score-based voting system. We compared the results of our proposed penalty factor system with the most popular classification algorithms and with an ensemble classifier that uses the voting technique. Our results show that our algorithm improves classification accuracy at the cost of a reasonable increase in response time.
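The score-based voting baseline that the abstract refers to can be sketched by combining the same three classifiers under a majority vote. The sketch below uses scikit-learn (an assumption; the paper does not specify its tooling), and it shows only the standard voting baseline, since the paper's penalty calculation itself is not described in the abstract.

```python
# Minimal sketch of the score-based voting baseline: the three classifiers
# named in the paper, combined by majority vote (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Any labelled dataset works; iris is used here purely as a placeholder.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ],
    voting="hard",  # each classifier casts one vote; majority wins
)
ensemble.fit(X_train, y_train)
accuracy = ensemble.score(X_test, y_test)
```

A penalty-based combination, as proposed in the paper, would replace the equal-weight majority vote with weights derived from each classifier's penalised performance; the details of that calculation are given in the full text.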


Ensemble classification · Machine learning · Classification algorithm · Classification



The authors gratefully acknowledge the financial support provided by Escuela Politécnica Nacional for the development of the research project PII-16-04.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Cindy-Pamela Lopez (1)
  • Maritzol Tenemaza (1)
  • Edison Loza-Aguirre (1)
  1. Departamento de Informática y Ciencias de la Computación, Escuela Politécnica Nacional, Quito, Ecuador
