Abstract
The main objective of this chapter is to explain, through examples, the core machine learning concepts: modeling and algorithms; batch learning and online learning; and supervised learning (regression and classification) and unsupervised learning (clustering). Modeling and algorithms are distinguished by how they divide the domain, batch learning and online learning by the availability of the data domain, and supervised learning and unsupervised learning by the labeling of the data domain. The chapter then extends this objective to a comparison of mathematical models, hierarchical models, and layered models using programming structures such as control structures, modularization, and sequential statements.
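The batch/online distinction described above can be sketched in a few lines of code (an illustrative example only; the task, variable names, and learning rate are assumptions, not taken from the chapter). A batch learner sees the whole data domain at once, while an online learner updates its model one example at a time:

```python
# Illustrative sketch: batch vs. online learning on a 1-D linear
# regression task with ground truth y = 2x + 1.

def batch_fit(xs, ys):
    """Batch learning: the entire data domain is available up front,
    so fit slope and intercept by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = sxy / sxx
    b = my - w * mx
    return w, b

def online_fit(stream, lr=0.01, epochs=200):
    """Online learning: examples arrive one at a time, so update the
    model incrementally with stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in stream:
            err = (w * x + b) - y  # prediction error on this example
            w -= lr * err * x
            b -= lr * err
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
print(batch_fit(xs, ys))              # → (2.0, 1.0), exact
print(online_fit(list(zip(xs, ys))))  # approaches (2.0, 1.0)
```

Both learners recover the same model here; the difference is purely in how the data domain is consumed, which is the criterion the chapter uses to separate the two paradigms.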
Copyright information
© 2016 Springer Science+Business Media New York
About this chapter
Cite this chapter
Suthaharan, S. (2016). Modeling and Algorithms. In: Machine Learning Models and Algorithms for Big Data Classification. Integrated Series in Information Systems, vol 36. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7641-3_6
Print ISBN: 978-1-4899-7640-6
Online ISBN: 978-1-4899-7641-3
eBook Packages: Business and Management (R0)