Abstract
We showed in the last chapter that the learning problem is NP-complete for a broad class of neural networks. Learning algorithms may therefore require a number of iterations that grows exponentially with the number of weights before a solution to a learning task is found. A second important point is that in backpropagation networks the individual units perform computations more general than simple threshold logic. Since the output of the units is not limited to the values 0 and 1, it is not easy to give an interpretation of the computation performed by the network. The network acts like a black box, computing a statistically sound approximation to a function known only from a training set. In many applications, however, an interpretation of the output is necessary or desirable. In all such cases the methods of fuzzy logic can be used.
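The contrast between crisp threshold logic and the graded outputs of backpropagation units can be illustrated with a minimal sketch. The function names and parameter values below are illustrative, not taken from the chapter; the sigmoid output is read here, in the spirit of fuzzy logic, as a degree of membership rather than a hard 0/1 decision.

```python
import math

def threshold_unit(x, w, theta):
    """Classical threshold-logic unit: output is strictly 0 or 1."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else 0

def sigmoid_unit(x, w, theta):
    """Backpropagation-style unit: a graded output in (0, 1),
    interpretable as a fuzzy degree of membership."""
    s = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return 1.0 / (1.0 + math.exp(-s))

# Illustrative inputs and weights (hypothetical values)
x = [0.4, 0.7]
w = [1.0, 1.0]
print(threshold_unit(x, w, theta=1.0))  # crisp decision: 1
print(sigmoid_unit(x, w, theta=1.0))    # graded value near 0.5
```

The crisp unit declares full membership the instant the weighted sum crosses the threshold, while the sigmoid unit reports that the input is only marginally above it — exactly the kind of graded information a fuzzy-logic interpretation of the network's output can exploit.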
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this chapter
Rojas, R. (1996). Fuzzy Logic. In: Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-61068-4_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60505-8
Online ISBN: 978-3-642-61068-4