Abstract
Artificial Intelligence (AI) is the study and design of systems that observe their environment and take actions aimed at maximizing their probability of success in solving problems. AI has captured wide interest and attention from the scientific community and has grown extraordinarily as a result. This growth has in turn increased the focus on machine learning, the field concerned with developing the underlying theories of learning and of learning machines. The methodologies and objectives of machine learning have played a vital role in the considerable progress AI has made, and machine learning itself aims at improving the learning capabilities of intelligent systems. This survey provides a theoretical insight into the major algorithms used in machine learning and the basic methodology each follows.
Acknowledgments
The authors would like to acknowledge Prof. Krishna Shastri (Ex-Joint Director, CIR, Amrita School of Engineering), and Sureya Sathiamoorthi and Sree Harini of the B.Tech (Computer Science and Engineering) 2010–2014 batch, for their support in this study. This work was carried out as part of the open-source cloud lab set up by Dr. T. Senthil Kumar, established in the Amrita CTS Lab (Amrita Cognizant Innovation Lab) at Amrita School of Engineering, Coimbatore.
Copyright information
© 2016 Springer India
Cite this paper
Sankar, A., Divya Bharathi, P., Midhun, M., Vijay, K., Senthil Kumar, T. (2016). A Conjectural Study on Machine Learning Algorithms. In: Suresh, L., Panigrahi, B. (eds) Proceedings of the International Conference on Soft Computing Systems. Advances in Intelligent Systems and Computing, vol 397. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2671-0_10
DOI: https://doi.org/10.1007/978-81-322-2671-0_10
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-2669-7
Online ISBN: 978-81-322-2671-0