Abstract
As mentioned in Sect. 8.1, an untrained perceptron can be treated as a family of functions \(\mathbb {R}^n\rightarrow \mathbb {R}^m\) indexed by the vector of all its weights. A given training set, in turn, can be regarded as a set of points on which a mapping should be approximated as well as possible. Investigations of the approximation abilities of neural networks focus on the existence of an arbitrarily close approximation, and on how the achievable accuracy depends on the complexity of the perceptron. In this chapter a few basic theorems concerning the approximation properties of perceptrons are discussed. The theorems presented are classical results; in this monograph they are given without proofs, which can be found in the literature.
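The chapter's theme, that a fixed architecture defines a family of functions indexed by its weight vector, and that a larger hidden layer permits a closer approximation, can be illustrated with a minimal sketch. The construction below is not taken from the book; the function names and the target \(f(x)=x^2\) are illustrative assumptions. It approximates a continuous function on \([0,1]\) by a single-hidden-layer perceptron whose steep sigmoids act as near-step functions, in the spirit of the classical universal-approximation results:

```python
import math

def sigmoid(t):
    # numerically safe logistic function (avoids overflow for large |t|)
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    z = math.exp(t)
    return z / (1.0 + z)

def perceptron(x, weights):
    # One hidden layer: weights is a list of (a, b, c) triples realising
    # the member  x -> sum_i c_i * sigmoid(a_i * x + b_i)  of the family.
    return sum(c * sigmoid(a * x + b) for a, b, c in weights)

def build_weights(f, n):
    # Illustrative construction (assumes f continuous on [0, 1]): steep
    # sigmoids centred between the grid points i/n behave like step
    # functions, and each output weight is the increment of f over one
    # cell, so the sum reconstructs f up to the grid resolution.
    k = 50.0 * n  # steepness grows with n so the steps stay sharp
    weights = [(0.0, 100.0, f(0.0))]  # near-constant unit contributing f(0)
    for i in range(1, n + 1):
        centre = (i - 0.5) / n
        increment = f(i / n) - f((i - 1) / n)
        weights.append((k, -k * centre, increment))
    return weights

def sup_error(f, n, grid=1000):
    # sup-norm distance between f and the n-unit perceptron on a fine grid
    w = build_weights(f, n)
    return max(abs(perceptron(j / grid, w) - f(j / grid))
               for j in range(grid + 1))

f = lambda x: x * x
# the sup-norm error shrinks as the hidden layer grows
```

With this target, the error of a 40-unit hidden layer is several times smaller than that of a 5-unit one, mirroring the dependence of accuracy on perceptron complexity that the theorems discussed in this chapter quantify.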
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Bielecki, A. (2019). Approximation Properties of Perceptrons. In: Models of Neurons and Perceptrons: Selected Problems and Challenges. Studies in Computational Intelligence, vol 770. Springer, Cham. https://doi.org/10.1007/978-3-319-90140-4_13
Print ISBN: 978-3-319-90139-8
Online ISBN: 978-3-319-90140-4
eBook Packages: Intelligent Technologies and Robotics