Abstract
In Chapter 2, we did some amazing things with a single neuron, but that approach is hardly flexible enough to tackle more complex cases. The real power of neural networks emerges when many (thousands, or even millions of) neurons interact with each other to solve a specific problem. The network architecture (how the neurons are connected to one another, how they behave, and so on) plays a crucial role in how efficiently a network learns, how good its predictions are, and what kinds of problems it can solve.
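The kind of network the chapter describes can be sketched in a few lines of NumPy: a minimal two-layer feedforward pass, where each layer applies weights and biases followed by an activation. The layer sizes and the sigmoid activation here are illustrative assumptions, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed architecture for illustration: 4 inputs -> 3 hidden neurons -> 1 output
W1 = rng.standard_normal((3, 4))   # hidden-layer weights
b1 = np.zeros((3, 1))              # hidden-layer biases
W2 = rng.standard_normal((1, 3))   # output-layer weights
b2 = np.zeros((1, 1))              # output-layer bias

def forward(x):
    """Forward pass: each layer applies weights and biases, then the activation."""
    a1 = sigmoid(W1 @ x + b1)      # hidden-layer activations
    a2 = sigmoid(W2 @ a1 + b2)     # network output
    return a2

x = rng.standard_normal((4, 1))    # one input example
y_hat = forward(x)                 # prediction, squashed into (0, 1) by the sigmoid
```

Changing the shapes of `W1` and `W2` (or stacking more such layers) is exactly the architectural choice the abstract refers to.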
Notes
1. Bias is a measure of the error originating from models that are too simple to capture the real features of the data.
2. To conduct a proper error analysis, we will require at least three parts, perhaps four. But to get a basic understanding of the process, two parts suffice.
3. Wikipedia, “Zalando,” https://en.wikipedia.org/wiki/Zalando, 2018.
4. Wikipedia, “MIT License,” https://en.wikipedia.org/wiki/MIT_License, 2018.
5. As a side note, this technique is often used to feed categorical variables to machine-learning algorithms.
6. Stochastic means that the updates have a random probability distribution and cannot be predicted exactly.
7. See, for example, Xavier Glorot and Yoshua Bengio, “Understanding the Difficulty of Training Deep Feedforward Neural Networks,” available at https://goo.gl/bHB5BM.
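The stochastic updates described in note 6 can be sketched as follows: a minimal illustration of stochastic gradient descent on a one-parameter linear model with squared loss. The model, learning rate, and synthetic data are all assumptions made for this sketch, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y = 2*x plus a little noise, so the true weight is 2.0
X = rng.standard_normal(100)
y = 2.0 * X + 0.1 * rng.standard_normal(100)

w = 0.0                                    # initial weight
lr = 0.05                                  # learning rate (assumed)
for _ in range(5):                         # a few passes over the data
    for i in rng.permutation(len(X)):      # random order: the "stochastic" part
        grad = 2.0 * (w * X[i] - y[i]) * X[i]  # gradient of (w*x - y)**2 at one example
        w -= lr * grad                     # update from a single, randomly chosen example
# w should now be close to the true slope of 2
```

Because each update uses a single randomly drawn example, the sequence of updates has a random distribution and cannot be predicted exactly, which is precisely what "stochastic" means in this context.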
Copyright information
© 2018 Umberto Michelucci
About this chapter
Cite this chapter
Michelucci, U. (2018). Feedforward Neural Networks. In: Applied Deep Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3790-8_3
Print ISBN: 978-1-4842-3789-2
Online ISBN: 978-1-4842-3790-8