Abstract
This chapter introduces the basic concepts and methods of machine learning that are relevant to this book. Classical machine learning methods, including neural networks (NN), support vector machines (SVM), clustering, Bayesian networks, sparse learning, boosting, and deep learning, are presented.
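The chapter develops each of these methods in turn. As a flavour of the kind of technique it covers, below is a minimal support vector machine sketch in Python, assuming scikit-learn as the toolkit; the dataset, kernel, and parameters are illustrative choices of ours, not taken from the book.

# Minimal SVM classification sketch (illustrative only; assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load a small benchmark dataset and hold out a quarter of it for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a support vector classifier with an RBF kernel, one of the
# classical methods the chapter surveys.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Report held-out accuracy.
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")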