
Optimization Techniques for Machine Learning

Chapter
Optimization in Machine Learning and Applications

Part of the book series: Algorithms for Intelligent Systems (AIS)

Abstract

This chapter outlines the fundamentals of machine learning and reviews the literature on the variety of optimization techniques used in machine learning and prediction models. These techniques concern optimization either for generating a single decision tree or for selection within homogeneous/heterogeneous ensembles. For ensemble selection, various evaluation functions are studied and combined with different search strategies. Comparisons with state-of-the-art methods are performed on benchmark datasets and medical applications designed to validate the different techniques. The critical review of currently available optimization techniques is followed by descriptions of machine learning applications. This study will help researchers avoid overlapping efforts and provide a foundation for novice researchers.
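To make the ensemble-selection setting the abstract describes concrete, the sketch below shows one common search strategy, greedy forward selection, driven by a simple evaluation function. It is a minimal, hypothetical illustration, not the chapter's own method: plain validation accuracy stands in for the evaluation functions the chapter reviews, and all dataset and model choices are assumptions made for the example.

```python
# Minimal sketch: greedy forward ensemble selection (ensemble pruning).
# Assumptions for illustration only: a bagged-tree pool, a synthetic
# dataset, and validation accuracy as the evaluation function.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Build a homogeneous ensemble (bagged decision trees) to prune.
bag = BaggingClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)
pool = list(bag.estimators_)

def majority_vote(members, X):
    """Predict by majority vote over the selected members."""
    votes = np.stack([m.predict(X) for m in members])
    return np.round(votes.mean(axis=0))

selected, remaining, best_score = [], pool.copy(), 0.0
while remaining:
    # Evaluation function: validation accuracy of each candidate subensemble.
    scored = [(np.mean(majority_vote(selected + [m], X_val) == y_val), m)
              for m in remaining]
    score, best = max(scored, key=lambda t: t[0])
    if selected and score <= best_score:  # stop when no member improves the score
        break
    selected.append(best)
    remaining.remove(best)
    best_score = score

print(f"kept {len(selected)}/{len(pool)} members, val accuracy {best_score:.3f}")
```

Swapping the scoring step for a diversity-aware or combined accuracy-diversity measure, of the kind the chapter surveys, changes only the evaluation function; the greedy search strategy stays the same.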



Author information


Correspondence to Abdelkader Adla.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Zouggar, S.T., Adla, A. (2020). Optimization Techniques for Machine Learning. In: Kulkarni, A., Satapathy, S. (eds) Optimization in Machine Learning and Applications. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-15-0994-0_3
