
Backpropagation

Alex M. Andrew
Chapter
Part of the IFSR International Series on Systems Science and Engineering book series (IFSR, volume 26)

Following the initiative of McCulloch and Pitts (1943), there has been much speculation about the achievement of artificial intelligence using networks of model neurons. The advent of the “perceptron” principle (Rosenblatt 1961; Nilsson 1965) crystallised something definite and functional out of a mass of diffuse speculation, but it is not difficult to show that the “simple perceptron” has limited capability (Minsky and Papert 1969). This can be attributed to the fact that all of the changes in weights constituting its learning are restricted to a single functional layer. The simple training algorithm is possible because all the places where changes occur are in this one layer and contribute directly to the output of the device, but the range of tasks that can be learned is drastically limited.
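To make the limitation concrete, the following minimal sketch (not taken from the chapter; the function names and the AND/XOR demonstration are illustrative assumptions) implements a simple perceptron: one layer of adjustable weights feeding a threshold unit, trained by the classic perceptron rule. It learns the linearly separable AND function but cannot learn XOR, the kind of task Minsky and Papert (1969) showed to be out of reach.

    def train_perceptron(samples, epochs=100, lr=0.1):
        """Train a single threshold unit; every weight change occurs in this one layer."""
        n = len(samples[0][0])
        w = [0.0] * n
        b = 0.0
        for _ in range(epochs):
            for x, target in samples:
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - out
                # The update acts directly on the output weights -- the only layer there is.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    def accuracy(w, b, samples):
        correct = 0
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            correct += (out == target)
        return correct / len(samples)

    if __name__ == "__main__":
        AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
        for name, data in [("AND", AND), ("XOR", XOR)]:
            w, b = train_perceptron(data)
            print(name, "accuracy:", accuracy(w, b, data))

Running this sketch, AND is classified perfectly while XOR stays at 0.5 to 0.75 accuracy no matter how long training continues, because no single linear threshold on the inputs can separate its classes; removing that restriction is what training weights in hidden layers, and hence backpropagation, is for.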

Keywords

Hind limb · Significance feedback · Hidden unit · Cochlear nucleus · Rubber sheet

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Reading, UK
