Reasoning with Uncertainty

Introduction to Artificial Intelligence

Part of the book series: Undergraduate Topics in Computer Science ((UTICS))

Abstract

Reasoning under uncertainty with limited resources and incomplete knowledge plays an important role in everyday situations as well as in many technical applications of AI. Probabilistic reasoning is the modern AI method for solving these problems. After a brief introduction to probability theory, we present the powerful method of maximum entropy as well as Bayesian networks, both of which are used in many applications. The medical diagnosis expert system Lexmed, developed by the author, demonstrates the power of these formalisms.

Notes

  1. The computed probabilities can only be used for further propositions if the measured sample (100 vehicles) is representative. Otherwise, propositions can be made only about the 100 observed vehicles.

  2. For definitions of sensitivity and specificity, see Eqs. 7.16 and 7.17.

  3. See http://www.prostata.de/pca_haeufigkeit.html for a 55-year-old man.

  4. The author is not a medical doctor. Therefore these computations should not be used as a basis for personal medical decisions by potentially afflicted individuals. If necessary, please consult a specialist physician or the relevant specialist literature.

  5. A set of probabilistic equations is called consistent if there is at least one solution, that is, one distribution which satisfies all equations.
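As a small illustration of this definition (the example is ours, not from the chapter): no distribution can assign a conjunction a higher probability than one of its conjuncts, so the following pair of equations is inconsistent.

```latex
% An inconsistent set of probabilistic equations: every distribution
% satisfies P(A \land B) \le P(A), so no distribution satisfies both.
\[
  P(A) = 0.3, \qquad P(A \land B) = 0.5
\]
```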

  6. The reader may calculate this result by maximizing the entropy under the normalization condition (Exercise 7.5 on page 132).
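A sketch of that calculation with a Lagrange multiplier (notation ours): maximizing the entropy subject only to normalization yields the uniform distribution.

```latex
% Maximize H(p) = -\sum_i p_i \ln p_i subject to \sum_{i=1}^n p_i = 1:
\[
  L = -\sum_{i=1}^{n} p_i \ln p_i + \lambda\Big(\sum_{i=1}^{n} p_i - 1\Big),
  \qquad
  \frac{\partial L}{\partial p_i} = -\ln p_i - 1 + \lambda = 0
  \;\Rightarrow\; p_i = e^{\lambda - 1}.
\]
% All p_i take the same value, and normalization then forces p_i = 1/n.
```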

  7. QP([penguin=yes]-|> [flies=yes]) is an alternative form of the PIT syntax for QP([flies=yes] | [penguin=yes]).

  8. The project was financed by the German state of Baden-Württemberg, the health insurance company AOK Baden-Württemberg, the Ravensburg-Weingarten University of Applied Sciences, and the 14 Nothelfer Hospital in Weingarten.

  9. These negative diagnoses are denoted “non-specific abdominal pain” (NSAP).

  10. The task of generating a function from a set of data is known as machine learning. We will cover this thoroughly in Chap. 8.

  11. A version with limited functionality is accessible without a password.

  12. Instead of individual numerical values, intervals can also be used here (for example [0.06, 0.12]).

  13. For a systematic introduction to machine learning we refer the reader to Chap. 8.

  14. The difference between this and a Bayesian network is, for example, that the rules are equipped with probability intervals, and that a unique probability model is produced only after applying the principle of maximum entropy.

  15. Ambulant (outpatient) observation means that the patient is released to stay at home.

  16. In the naive Bayes method, all attributes are assumed to be conditionally independent given the class; this method has been successfully applied to text classification (see Sect. 8.7).
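A minimal sketch of this decision rule (the toy data, names, and Laplace smoothing are our own additions, not taken from the chapter):

```python
from collections import Counter

# Toy training data: (attribute tuple, class label). Naive Bayes assumes
# the attributes are conditionally independent given the class.
data = [(("sun", "warm"), "yes"), (("rain", "warm"), "yes"),
        (("rain", "cold"), "no"), (("sun", "cold"), "no")]

classes = Counter(label for _, label in data)

def likelihood(value, position, label):
    """Estimate P(attribute at position = value | class = label)."""
    rows = [attrs for attrs, lab in data if lab == label]
    hits = sum(1 for attrs in rows if attrs[position] == value)
    return (hits + 1) / (len(rows) + 2)  # Laplace smoothing for binary attributes

def classify(attrs):
    """Choose the class maximizing P(class) * prod_i P(attr_i | class)."""
    def score(label):
        p = classes[label] / sum(classes.values())  # prior P(class)
        for i, v in enumerate(attrs):
            p *= likelihood(v, i, label)            # naive independence step
        return p
    return max(classes, key=score)

print(classify(("sun", "warm")))  # -> "yes"
```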

  17. The binary variables J and M stand for the events “John calls” and “Mary calls”, respectively; Al stands for “alarm siren sounds”, Bur for “burglary”, and Ear for “earthquake”.
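A sketch of how this network factors the joint distribution (the structure Bur → Al ← Ear, Al → J, Al → M is the classic alarm example; the CPT numbers below are placeholders of our own, not values from the chapter):

```python
# Placeholder CPTs; each entry gives P(variable = True | parents).
P_bur = 0.001                                     # P(Bur)
P_ear = 0.002                                     # P(Ear)
P_al = {(True, True): 0.95, (True, False): 0.94,  # P(Al | Bur, Ear)
        (False, True): 0.29, (False, False): 0.001}
P_j = {True: 0.90, False: 0.05}                   # P(J | Al)
P_m = {True: 0.70, False: 0.01}                   # P(M | Al)

def p(event, prob_true):
    """Probability of a binary event given P(event = True)."""
    return prob_true if event else 1.0 - prob_true

def joint(j, m, al, bur, ear):
    """P(J, M, Al, Bur, Ear), factored along the network structure."""
    return (p(bur, P_bur) * p(ear, P_ear) * p(al, P_al[(bur, ear)])
            * p(j, P_j[al]) * p(m, P_m[al]))

# P(Bur | J, M) by brute-force marginalization over Al and Ear:
num = sum(joint(True, True, al, True, ear)
          for al in (True, False) for ear in (True, False))
den = sum(joint(True, True, al, bur, ear)
          for al in (True, False) for bur in (True, False) for ear in (True, False))
print(num / den)
```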

  18. For a node without ancestors, the product in this sum is empty. We substitute the value 1 for it, since the CPT of a node without ancestors contains exactly one value, namely its a priori probability.
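For reference, the factorization this convention belongs to (standard Bayesian network semantics; the notation is ours):

```latex
% Joint distribution of a Bayesian network over X_1, ..., X_n:
\[
  P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Parents}(X_i)\big)
\]
% For a node X_i without ancestors, Parents(X_i) is empty and the factor
% is just the a priori probability P(X_i) stored in its one-entry CPT.
```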

  19. If, for example, three nodes X1, X2, X3 form a cycle, then there are the edges (X1, X2), (X2, X3) and (X3, X1), where X1 has X3 as a predecessor.
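Since a Bayesian network must be acyclic, such cycles have to be detected and rejected. A short sketch of a depth-first cycle check (function and variable names are ours):

```python
def has_cycle(edges):
    """Return True if the directed graph given as an edge list contains a cycle."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)

    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on current DFS path / finished
    color = {}

    def visit(u):
        color[u] = GRAY
        for v in succ.get(u, []):
            if color.get(v, WHITE) == GRAY:                # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False

    nodes = {u for edge in edges for u in edge}
    return any(color.get(u, WHITE) == WHITE and visit(u) for u in nodes)

print(has_cycle([("X1", "X2"), ("X2", "X3"), ("X3", "X1")]))  # -> True
```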

  20. This is also not always quite so simple.

  21. In Sect. 8.7 and in Exercise 8.17 on page 242 we will show that the scores are equivalent to the special case of naive Bayes, that is, to the assumption that all symptoms are conditionally independent given the diagnosis.

Author information

Correspondence to Wolfgang Ertel.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Ertel, W. (2017). Reasoning with Uncertainty. In: Introduction to Artificial Intelligence. Undergraduate Topics in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-58487-4_7

  • DOI: https://doi.org/10.1007/978-3-319-58487-4_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-58486-7

  • Online ISBN: 978-3-319-58487-4

  • eBook Packages: Computer Science, Computer Science (R0)
