Abstract
Now that the ME-principles have been motivated and characterized in detail, it is time to ask how reasoning at optimum entropy actually works. We will shed light on this question from two different sides.
The first part of this chapter (cf. also [KI98b]) investigates how inferring at optimum entropy fits the formal framework for nonmonotonic reasoning of [Mak94]. In particular, we show that any inference operation based on ME-reasoning is cumulative and satisfies the loop property. Moreover, it turns out to be supraclassical with respect to standard probabilistic consequence, which obviously generalizes classical consequence within a probabilistic setting (cf. Section 6.1). We also focus on the relationships between nonmonotonic and conditional ME-reasoning. Once more, it becomes obvious that material implication and conditionals differ substantially. To make the differences clear, we extend the conditional probabilistic language we work in so that it also contains probabilistic formulas corresponding to material implication. We show that conditionalization in its usual sense relates to material implication, whereas the connections between nonmonotonic reasoning and conditionals are more complex.
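The gap between material implication and conditionals can already be seen in a toy computation. The following sketch (my own illustration, not taken from the chapter) compares the probability of the material implication a → b, i.e. P(¬a ∨ b), with the conditional probability P(b|a) under a uniform distribution over the four worlds built from two atoms:

```python
from fractions import Fraction

# Worlds over two atoms a, b; assume a uniform distribution P(w) = 1/4.
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]
P = {w: Fraction(1, 4) for w in worlds}

# Probability of the material implication a -> b, i.e. P(not-a or b):
p_impl = sum(p for (a, b), p in P.items() if (not a) or b)

# Conditional probability P(b | a) = P(a and b) / P(a):
p_a = sum(p for (a, b), p in P.items() if a)
p_ab = sum(p for (a, b), p in P.items() if a and b)
p_cond = p_ab / p_a

print(p_impl)  # 3/4
print(p_cond)  # 1/2
```

The implication is vacuously true in every ¬a-world and so receives probability 3/4, while the conditional only looks at the a-worlds and yields 1/2; in general P(¬a ∨ b) ≥ P(b|a), and the two coincide only in degenerate cases.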
Though the ME-methods genuinely manipulate knowledge of a numerical nature, ME-reasoning is not easily understood by observing how the probabilities change. ME-logic is not truth-functional, as fuzzy logic is, nor is its aim to raise or to lower probabilities, as in the framework of upper and lower probabilities, and there is no straightforward calculation algorithm, as for Bayesian networks. ME-inference rather makes use of the intensional structures of probabilistic knowledge (cf. [Par94, SJ80, KI98a]), so it seems to be better classified and appreciated by describing its formal properties as a nonmonotonic inference operation.
Nevertheless, some examples and practical inference schemes in simple but typical cases are important to illustrate ME-inference beyond formal results; they will be presented in the second part of this chapter (see also [KI97b]). The representation of the ME-distribution central to the argumentation in Chapter 5 (see equations (5.5), (5.6) and (5.7), page 76) then turns out to be not only of theoretical but also of practical use, allowing us to calculate ME-probability values explicitly. For instance, we will show how knowledge is propagated transitively, and we will deal with cautious cut and cautious monotonicity. These inference schemes, however, are global, not local, i.e., all available knowledge must be taken into account in their premises to yield correct results. Nevertheless, they provide useful insights into the practice of ME-reasoning.
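To give a flavour of such explicit calculations, here is a minimal numerical sketch (my own toy setup, not the chapter's equations (5.5)–(5.7)) of the ME-distribution for a single conditional constraint (b|a)[0.9] over two atoms. For a fixed mass s = P(a), the constraint forces the split x·s, (1−x)·s on the a-worlds, and entropy is maximized by distributing the remaining mass uniformly over the ¬a-worlds; a one-dimensional grid search over s then finds the maximum-entropy distribution:

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def me_distribution(x, steps=20000):
    """ME-distribution over worlds {ab, a!b, !ab, !a!b} subject to P(b|a) = x.

    Reduces the constrained optimization to a search over s = P(a):
    the a-mass must split as x*s and (1-x)*s to satisfy the constraint,
    and the !a-mass is split uniformly, which is entropy-optimal.
    """
    best_s, best_h = None, -1.0
    for i in range(1, steps):
        s = i / steps
        h = entropy([x * s, (1 - x) * s, (1 - s) / 2, (1 - s) / 2])
        if h > best_h:
            best_s, best_h = s, h
    s = best_s
    return {"ab": x * s, "a!b": (1 - x) * s,
            "!ab": (1 - s) / 2, "!a!b": (1 - s) / 2}

P = me_distribution(0.9)
print({w: round(p, 3) for w, p in P.items()})
```

The optimum lies near s ≈ 0.41, so P*(ab) ≈ 0.37 rather than 0.45: the ME-principle does not simply keep P(a) = 1/2 but redistributes mass so that no more information than the constraint itself is encoded. With several interacting conditionals, the same computation requires the product representation of the ME-distribution discussed above.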
© 2001 Springer-Verlag Berlin Heidelberg
(2001). Reasoning at Optimum Entropy. In: Kern-Isberner, G. (eds) Conditionals in Nonmonotonic Reasoning and Belief Revision. Lecture Notes in Computer Science(), vol 2087. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44600-1_6
Print ISBN: 978-3-540-42367-6
Online ISBN: 978-3-540-44600-2