Abstract
So far, we have considered neural networks under two types of resource constraints: time, and the Kolmogorov complexity of the weights. Here, we consider rational-weight neural networks in which a bound is placed on the precision available to the neurons. The issue of precision arises when simulating a neural network on a digital computer. Any hardware implementation of real arithmetic handles “reals” of limited precision, seldom more than 64 bits. When more precision is needed, one must resort to a software implementation of real arithmetic (sometimes provided by the compiler), and even then the amount of available memory imposes a physical limit on the mantissa length of each state of the simulated network. This observation suggests a connection between the space required to solve a problem and the precision required of the activations of the networks that solve it.
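To make the link between memory and mantissa length concrete, here is a minimal Python sketch (illustrative, not from the chapter; the names `step`, `saturate`, and `truncate` are hypothetical). It runs synchronous updates of a rational-weight network with the saturated-linear activation used for the networks in this book, truncating every new activation to p bits after the binary point. Under this truncation rule, storing a state takes exactly p bits, so the precision bound doubles as a space bound per neuron.

```python
from fractions import Fraction

def saturate(x):
    """Saturated-linear activation: clamp x to [0, 1]."""
    return max(Fraction(0), min(Fraction(1), x))

def truncate(x, p):
    """Keep only p bits after the binary point (a p-bit mantissa)."""
    scale = 2 ** p
    return Fraction(int(x * scale), scale)

def step(states, inputs, weights, biases, p):
    """One synchronous update x_i <- sigma(sum_j w_ij * signal_j + c_i),
    with every new activation truncated to p bits of precision."""
    signals = states + inputs
    return [truncate(saturate(c + sum(w * s for w, s in zip(row, signals))), p)
            for row, c in zip(weights, biases)]

# Example (hypothetical network): two neurons and one input line,
# all weights and biases rational.
weights = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 3)],
           [Fraction(0),    Fraction(3, 4), Fraction(1, 5)]]
biases = [Fraction(0), Fraction(1, 8)]
states = [Fraction(0), Fraction(0)]
for _ in range(5):
    states = step(states, [Fraction(1)], weights, biases, p=8)
print(states)  # every state is an integer multiple of 2**-8
```

Because `fractions.Fraction` keeps the arithmetic itself exact, the only loss of information is the deliberate p-bit truncation, which isolates precision as the resource being bounded.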
Copyright information
© 1999 Springer Science+Business Media New York
About this chapter
Cite this chapter
Siegelmann, H.T. (1999). Space and Precision. In: Neural Networks and Analog Computation. Progress in Theoretical Computer Science. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-0707-8_6
Publisher Name: Birkhäuser, Boston, MA
Print ISBN: 978-1-4612-6875-8
Online ISBN: 978-1-4612-0707-8