Abstract
In the previous chapters, you looked at fully connected networks and the problems encountered while training them. The architecture we used, in which each neuron in a layer is connected to every neuron in the previous and following layers, does not perform well on many fundamental tasks, such as image recognition, speech recognition, and time-series prediction. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are the advanced architectures most often used for such tasks today. In this chapter, you will look at convolution and pooling, the basic building blocks of CNNs. Then you will see how RNNs work at a high level and look at a selection of example applications. I will also discuss a complete, although basic, implementation of CNNs and RNNs in TensorFlow. The topic of CNNs and RNNs is much too vast to cover in a single chapter, so I will cover only the fundamental concepts here, to show you how these architectures work; a complete treatment would require a separate book.
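To make the two CNN building blocks named above concrete, here is a minimal sketch of a valid (no-padding) 2-D convolution followed by non-overlapping max pooling, written in plain NumPy. The function names, the toy input, and the kernel values are my own illustrative choices, not from the chapter; the chapter's actual implementation uses TensorFlow.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height shrinks by kernel size - 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with the patch under it, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(image, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = image.shape[0] // size, image.shape[1] // size
    trimmed = image[:h * size, :w * size]  # drop rows/cols that don't fit
    return trimmed.reshape(h, size, w, size).max(axis=(1, 3))

# Toy example: a 4x4 input and a 2x2 difference kernel (illustrative values).
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0],
              [0.0, -1.0]])
fmap = conv2d(x, k)        # feature map, shape (3, 3)
pooled = max_pool2d(fmap)  # pooled map, shape (1, 1)
```

The same sliding-window idea, vectorized and extended to multiple channels and filters, is what `tf.nn.conv2d` and `tf.nn.max_pool` compute in TensorFlow.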
Notes
1. Cat image source: www.shutterstock.com/
Copyright information
© 2018 Umberto Michelucci
Cite this chapter
Michelucci, U. (2018). Convolutional and Recurrent Neural Networks. In: Applied Deep Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3790-8_8
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3789-2
Online ISBN: 978-1-4842-3790-8
eBook Packages: Professional and Applied Computing, Apress Access Books, Professional and Applied Computing (R0)