Abstract
This chapter describes the basic ideas underlying the ensemble approach, together with the classical methods that have been used in the field of Machine Learning. Section 3.1 states the rationale behind the approach, while Sect. 3.2 briefly describes the most popular methods. Finally, Sect. 3.3 summarizes and discusses the contents of this chapter.
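To make the ensemble idea concrete before the detailed treatment in Sect. 3.2, the following is a minimal sketch of bagging, one of the classical methods the chapter covers: several weak learners are trained on bootstrap samples of the data and their predictions are combined by majority vote. The use of scikit-learn, the synthetic dataset, and all parameter values are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of bagging: train many shallow trees on bootstrap samples
# and combine their predictions by majority vote. Dataset and parameters
# are arbitrary, chosen only to illustrate the idea.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single, deliberately weak base learner (a shallow decision tree).
single = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# An ensemble of 50 such trees, each fitted on a bootstrap sample of the
# training set; predictions are aggregated by majority vote.
ensemble = BaggingClassifier(
    DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
    random_state=0,
).fit(X_tr, y_tr)

print("single tree :", accuracy_score(y_te, single.predict(X_te)))
print("bagged trees:", accuracy_score(y_te, ensemble.predict(X_te)))
```

On data of this kind the bagged ensemble typically matches or exceeds the accuracy of the single shallow tree, which is the intuition that Sect. 3.1 motivates and Sect. 3.2 develops for the individual methods.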