Abstract
This chapter outlines the fundamentals of machine learning and reviews the literature on the variety of optimization techniques used in machine learning and prediction models. These techniques address optimization either for the generation of a single decision tree or for the selection of members in homogeneous and heterogeneous ensembles. For ensemble selection, various evaluation functions are studied and combined with different search (path) strategies. Comparisons with state-of-the-art methods are performed on benchmark datasets and medical applications to validate the different techniques. The critical review of currently available optimization techniques is followed by descriptions of machine learning applications. This study will help researchers avoid overlapping efforts and provides a starting point for novice researchers.
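To make the ensemble-selection setting concrete, the sketch below shows a plain greedy forward-selection pruning loop over a pool of bagged decision trees, scored with hold-out majority-vote accuracy as a stand-in evaluation function. The dataset, library calls, and stopping rule are illustrative assumptions for this sketch, not the specific evaluation measures or search paths studied in the chapter.

# Illustrative greedy forward selection for ensemble pruning (sketch only;
# accuracy is used here in place of the chapter's evaluation functions).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def greedy_forward_selection(members, X_val, y_val):
    # Repeatedly add the member whose inclusion most improves
    # majority-vote accuracy on the held-out validation set.
    preds = np.array([m.predict(X_val).astype(int) for m in members])
    selected, remaining, best_acc = [], list(range(len(members))), 0.0
    while remaining:
        scores = []
        for i in remaining:
            votes = preds[selected + [i]]
            majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
            scores.append(accuracy_score(y_val, majority))
        best = max(scores)
        if selected and best <= best_acc:
            break  # stop once no candidate improves the evaluation function
        best_acc = best
        winner = remaining[int(np.argmax(scores))]
        selected.append(winner)
        remaining.remove(winner)
    return selected, best_acc

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pool = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
pool.fit(X_train, y_train)
kept, acc = greedy_forward_selection(pool.estimators_, X_val, y_val)
print("kept", len(kept), "of 50 trees; validation accuracy", round(acc, 3))

The same loop structure accommodates other evaluation functions (e.g., diversity-accuracy trade-offs) by replacing the accuracy score with the measure of interest.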
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Zouggar, S.T., Adla, A. (2020). Optimization Techniques for Machine Learning. In: Kulkarni, A., Satapathy, S. (eds) Optimization in Machine Learning and Applications. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-15-0994-0_3
DOI: https://doi.org/10.1007/978-981-15-0994-0_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-0993-3
Online ISBN: 978-981-15-0994-0
eBook Packages: Intelligent Technologies and Robotics (R0)