Foundation of Deep Machine Learning in Neural Networks

Chapter in Image Texture Analysis

Abstract

This chapter introduces several basic neural network models that serve as the foundation for the further development of deep machine learning in neural networks. Deep machine learning takes a very different approach to feature extraction than traditional methods: conventional feature extraction, long used in pattern recognition, relies on human knowledge to design and build feature extractors, whereas deep machine learning in neural networks automatically "learns" the feature extractors from data. We describe some typical neural network models that have been successfully used in image and video analysis. One type of network introduced here uses supervised learning, such as the feed-forward multi-layer neural networks; the other uses unsupervised learning, such as the Kohonen model, also called the self-organizing map (SOM). Both types were widely used in visual recognition before the rise of deep machine learning with convolutional neural networks (CNNs). Specifically, the following models will be introduced: (1) the basic neuron model and the perceptron, (2) the traditional feed-forward multi-layer neural networks trained with backpropagation, (3) Hopfield neural networks, (4) Boltzmann machines, (5) restricted Boltzmann machines and deep belief networks, (6) self-organizing maps, and (7) the Cognitron and Neocognitron. Both the Cognitron and the Neocognitron are deep neural networks that can self-organize without any supervision. These models are the foundation for discussing texture classification using deep neural network models.
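As a minimal sketch of model (1) above, the basic neuron model and perceptron, the following NumPy example implements the classic perceptron learning rule on a toy linearly separable problem (the logical AND function). This is illustrative code, not code from the chapter; the function name and toy data are our own.

```python
# A minimal sketch (not the chapter's code) of the perceptron learning rule:
# when an example is misclassified, nudge the weights toward (or away from) it.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy data: the logical AND function, with labels encoded as {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, +1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # expected: [-1. -1. -1.  1.]
```

The error-driven update above is what distinguishes the supervised models in this chapter from the unsupervised ones: a self-organizing map, by contrast, moves the weights of the winning unit and its neighbors toward each input rather than correcting a labeled mistake.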

Our greatest glory is not in never falling, but in rising every time we fall.

—Confucius



Author information

Correspondence to Chih-Cheng Hung.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hung, CC., Song, E., Lan, Y. (2019). Foundation of Deep Machine Learning in Neural Networks. In: Image Texture Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-13773-1_9

  • DOI: https://doi.org/10.1007/978-3-030-13773-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-13772-4

  • Online ISBN: 978-3-030-13773-1

  • eBook Packages: Computer Science (R0)
