
Feedforward Neural Networks

Chapter in Applied Deep Learning

Abstract

In Chapter 2, we did some amazing things with a single neuron, but one neuron is hardly flexible enough to tackle more complex cases. The real power of neural networks comes to light when many (thousands, or even millions of) neurons interact with each other to solve a specific problem. The network architecture (how the neurons are connected to each other, how they behave, and so on) plays a crucial role in how efficiently a network learns, how good its predictions are, and what kinds of problems it can solve.
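As a concrete, if minimal, illustration of such an architecture, the sketch below computes the forward pass of a small feedforward network with one hidden layer in plain NumPy. The layer sizes, the sigmoid and softmax activations, and the random inputs are illustrative assumptions, not the chapter's own example.

    # A minimal sketch (not the chapter's code): forward pass of a feedforward
    # network with one hidden layer. All sizes are arbitrary illustrative choices.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n_inputs, n_hidden, n_outputs = 784, 15, 10             # e.g. 28x28 images, 10 classes

    W1 = rng.normal(0.0, 0.1, size=(n_hidden, n_inputs))    # hidden-layer weights
    b1 = np.zeros((n_hidden, 1))                             # hidden-layer biases
    W2 = rng.normal(0.0, 0.1, size=(n_outputs, n_hidden))   # output-layer weights
    b2 = np.zeros((n_outputs, 1))                            # output-layer biases

    X = rng.random((n_inputs, 5))                  # 5 dummy input examples as columns
    A1 = sigmoid(W1 @ X + b1)                      # hidden-layer activations
    Z2 = W2 @ A1 + b2                              # output-layer pre-activations
    Y_hat = np.exp(Z2) / np.exp(Z2).sum(axis=0)    # softmax: class probabilities
    print(Y_hat.shape)                             # (10, 5)

Stacking further hidden layers simply adds more weight matrices and bias vectors applied in the same way.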


Notes

  1. Bias is a measure of the error originating from models that are too simple to capture the real features of the data.

  2. To conduct a proper error analysis, we will require at least three parts, perhaps four. But to get a basic understanding of the process, two parts suffice.

  3. Wikipedia, "Zalando," https://en.wikipedia.org/wiki/Zalando, 2018.

  4. Wikipedia, "MIT License," https://en.wikipedia.org/wiki/MIT_License, 2018.

  5. As a side note, this technique is often used to feed categorical variables to machine-learning algorithms (see the first sketch after these notes).

  6. Stochastic means that the updates have a random probability distribution and cannot be predicted exactly (see the second sketch after these notes).

  7. See, for example, Xavier Glorot and Yoshua Bengio, "Understanding the Difficulty of Training Deep Feedforward Neural Networks," available at https://goo.gl/bHB5BM.
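The technique mentioned in note 5 is presumably one-hot encoding. Assuming that, here is a minimal sketch of how integer class labels can be turned into one-hot vectors with NumPy; the labels and the number of classes are made up for illustration.

    # One-hot encoding sketch: each label becomes a vector with a single 1
    # at the position of its class. Labels and class count are illustrative.
    import numpy as np

    labels = np.array([2, 0, 1, 2])          # hypothetical integer class labels
    n_classes = 3
    one_hot = np.eye(n_classes)[labels]      # shape (4, 3)
    print(one_hot)
    # [[0. 0. 1.]
    #  [1. 0. 0.]
    #  [0. 1. 0.]
    #  [0. 0. 1.]]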
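For note 6, the sketch below shows what makes such updates stochastic: each step of a toy one-parameter linear model uses a single randomly drawn example, so the exact sequence of updates cannot be predicted in advance. The model, learning rate, and data are assumptions for illustration, not the chapter's example.

    # Stochastic gradient descent sketch on a toy linear model y ~= w * x.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random(100)                        # 100 scalar inputs in [0, 1)
    y = 3.0 * X + rng.normal(0.0, 0.1, 100)    # noisy targets, true slope 3

    w, lr = 0.0, 0.1
    for step in range(1000):
        i = rng.integers(len(X))               # pick one example at random
        error = w * X[i] - y[i]
        w -= lr * error * X[i]                 # gradient of 0.5 * error**2 w.r.t. w
    print(w)                                   # ends up close to 3.0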


Copyright information

© 2018 Umberto Michelucci

About this chapter


Cite this chapter

Michelucci, U. (2018). Feedforward Neural Networks. In: Applied Deep Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3790-8_3

