Using maximum entropy in a defeasible logic with probabilistic semantics

  • Non-Monotonic Reasoning
  • Conference paper

IPMU '92—Advanced Methods in Artificial Intelligence (IPMU 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 682)

Abstract

In this paper we make defeasible inferences from conditional probabilities using the Principle of Total Evidence. This gives a logic that is a simple extension of Halpern's axiomatization AX1 of probabilistic logic. For our consequence relation, the reasoning is further justified by an assumption of the typicality of the individuals mentioned in the data. For databases that do not determine a unique probability distribution, we select by default the distribution with Maximum Entropy. We situate this logic in the context of preferred models semantics.
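As a concrete illustration of the maximum-entropy selection step, the sketch below (Python with NumPy and SciPy; it is not taken from the paper) encodes a single conditional constraint, P(fly | bird) = 0.9, as a linear equation over the possible worlds of two atoms, in the style of Nilsson's probabilistic logic [9], and numerically selects the entropy-maximizing distribution among those satisfying it. The atoms, the constraint, and the numeric value are illustrative assumptions, not the paper's examples.

```python
# Illustrative sketch only (not the authors' system): select the
# maximum-entropy distribution over possible worlds subject to a
# conditional-probability constraint. Worlds are truth assignments
# to the atoms {bird, fly}; P(fly | bird) = 0.9 is an assumed example,
# encoded linearly as P(fly & bird) - 0.9 * P(bird) = 0.

import numpy as np
from scipy.optimize import minimize

# The four possible worlds over (bird, fly).
worlds = [(b, f) for b in (0, 1) for f in (0, 1)]

def neg_entropy(p):
    # Negative Shannon entropy; the epsilon guards against log(0).
    return float(np.sum(p * np.log(p + 1e-12)))

def mass(p, pred):
    # Total probability of the worlds satisfying `pred`.
    return sum(pi for (w, pi) in zip(worlds, p) if pred(*w))

constraints = [
    # Probabilities must sum to one.
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    # P(fly | bird) = 0.9  <=>  P(fly & bird) = 0.9 * P(bird).
    {"type": "eq",
     "fun": lambda p: mass(p, lambda b, f: b and f)
                      - 0.9 * mass(p, lambda b, f: b)},
]

result = minimize(neg_entropy,
                  x0=np.full(len(worlds), 1.0 / len(worlds)),  # start uniform
                  bounds=[(0.0, 1.0)] * len(worlds),
                  constraints=constraints,
                  method="SLSQP")

# The optimizer returns the unique maximum-entropy distribution
# consistent with the constraint; queries are then answered from
# this distribution by default.
for (b, f), pi in zip(worlds, result.x):
    print(f"bird={b} fly={f}: P = {pi:.3f}")
```

With no constraints beyond normalization, the same optimization returns the uniform distribution, which matches the intuition behind the default: Maximum Entropy adds no information beyond what the database asserts.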

References

  1. Fahiem Bacchus. Representing and Reasoning with Probabilistic Knowledge: A Logical Approach to Probabilities. MIT Press, Cambridge, MA, 1990.

  2. P. Cheeseman. A method of computing generalized Bayesian probability values. In IJCAI '83, pages 198–202. Morgan Kaufmann, 1983.

  3. James Cussens and Anthony Hunter. Using defeasible logic for a window on a probabilistic database: some preliminary notes. In Symbolic and Quantitative Approaches to Uncertainty, Lecture Notes in Computer Science 548, pages 146–152. Springer, 1991.

  4. Dov Gabbay. Theoretical foundations for non-monotonic reasoning in expert systems. In K. R. Apt, editor, Proceedings NATO Advanced Study Institute on Logics and Models of Concurrent Systems, pages 439–457. Springer, 1985.

  5. B. Grosof. Non-monotonicity in probabilistic reasoning. In Uncertainty in Artificial Intelligence 3, pages 237–249. North-Holland, 1988.

  6. Joseph Y. Halpern. An analysis of first-order logics of probability. Artificial Intelligence, 46:311–350, 1990.

  7. H. Kautz and B. Selman. Hard problems for simple default logics. Artificial Intelligence, 49:243–279, 1991.

  8. B. Lewis. Approximating probability distributions to reduce storage requirements. Information and Control, 2:214–225, 1959.

  9. N. Nilsson. Probabilistic logic. Artificial Intelligence, 28:71–87, 1986.

  10. J. Paris and A. Vencovská. A note on the inevitability of maximum entropy. International Journal of Approximate Reasoning, 4:183–223, 1990.

  11. Judea Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.

  12. Y. Shoham. Reasoning about Change. MIT Press, 1988.

Editor information

Bernadette Bouchon-Meunier, Llorenç Valverde, Ronald R. Yager

Copyright information

© 1993 Springer-Verlag

About this paper

Cite this paper

Cussens, J., Hunter, A. (1993). Using maximum entropy in a defeasible logic with probabilistic semantics. In: Bouchon-Meunier, B., Valverde, L., Yager, R.R. (eds) IPMU '92—Advanced Methods in Artificial Intelligence. IPMU 1992. Lecture Notes in Computer Science, vol 682. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56735-6_42

  • DOI: https://doi.org/10.1007/3-540-56735-6_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56735-6

  • Online ISBN: 978-3-540-47643-6
