Abstract
As a nonmonotonic learning system, C-IL2P has interesting connections with Belief Revision [Gar92] and, more specifically, with Truth Maintenance Systems (TMSs) [Doy79] (see Chapter 2 for an overview of TMSs). When background knowledge is encoded as defeasible rules in a neural network, the set of training examples may specify a revision of that knowledge. Moreover, the examples can be inconsistent with the background knowledge, and the resulting network can itself contain inconsistencies. In this chapter, we investigate how to detect and treat inconsistencies in the system. We also show the equivalence between C-IL2P networks and TMSs; as a result, the learning process of C-IL2P can be seen as a technique for handling changes of belief in a TMS.
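As a minimal sketch of the kind of inconsistency detection the chapter investigates, one can model a network's stable state as a set of signed literals and flag any atom that is concluded both positively and negatively (analogous to marking a contradiction node in a TMS). The representation below, including the `~` prefix for negation and the example literals, is an illustrative assumption, not the chapter's actual encoding:

```python
# Illustrative sketch only: the C-IL2P encoding is simplified here.
# A stable state is a set of literals; "~p" denotes the negation of p.
# An inconsistency is an atom concluded both positively and negatively.

def find_inconsistencies(stable_state):
    """Return, sorted, the atoms appearing both as p and as ~p."""
    positives = {lit for lit in stable_state if not lit.startswith("~")}
    negatives = {lit[1:] for lit in stable_state if lit.startswith("~")}
    return sorted(positives & negatives)

# Hypothetical example: the background knowledge concludes 'flies',
# while a trained exception rule concludes '~flies' for the same input.
state = {"bird", "penguin", "flies", "~flies"}
print(find_inconsistencies(state))  # ['flies']
```

Once such a clash is detected, a TMS-style treatment would retract support for one of the two conclusions, which is the belief-change perspective the chapter develops.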
© 2002 Springer-Verlag London
Cite this chapter
d’Avila Garcez, A.S., Broda, K.B., Gabbay, D.M. (2002). Handling Inconsistencies in Neural Networks. In: Neural-Symbolic Learning Systems. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0211-3_7
DOI: https://doi.org/10.1007/978-1-4471-0211-3_7
Publisher Name: Springer, London
Print ISBN: 978-1-85233-512-0
Online ISBN: 978-1-4471-0211-3