
Abstract

The architecture of ANNs discussed previously is called a fully connected (FC) neural network (FCNN), because each neuron in layer i is connected to every neuron in layers i-1 and i+1. Each connection between two neurons carries its own weight, and each neuron has its own bias, so adding more layers and neurons rapidly increases the number of parameters. As a result, such networks are very time-consuming to train, even on machines with multiple graphics processing units (GPUs) and central processing units (CPUs), and training them becomes impractical on PCs with limited processing and memory capabilities.
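
As a rough illustration of the point made in the abstract (this sketch is not taken from the chapter itself), the snippet below counts the trainable parameters of a small FC network applied to a flattened image. The layer sizes are arbitrary assumptions chosen only to show how quickly the count grows.

```python
# Back-of-the-envelope parameter count for a fully connected (dense) network.
# Layer sizes below are illustrative assumptions, not values from the chapter.

def fc_param_count(layer_sizes):
    """Return the number of trainable parameters in an FC network.

    Each dense layer with n_in inputs and n_out outputs has
    n_in * n_out weights plus n_out biases (one bias per neuron).
    """
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A flattened 100x100 grayscale image feeding two hidden layers and 10 outputs.
sizes = [100 * 100, 512, 256, 10]
print(fc_param_count(sizes))  # 5,254,410 parameters for this small example
```

Even this modest network already has over five million parameters, most of them in the first dense layer, which is the scaling problem that motivates convolutional layers with shared weights.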


Copyright information

© 2018 Ahmed Fawzy Gad

Cite this chapter

Gad, A.F. (2018). Convolutional Neural Networks. In: Practical Computer Vision Applications Using Deep Learning with CNNs. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4167-7_5
