# Mathematical Theory of Probabilistic Consistency and Universal Probabilistic Soundness

Chapter, part of the Synthese Library book series (SYLI, volume 86)

## Abstract

This chapter will be primarily concerned with the question: is it possible for the premises of an inference schema to be highly probable at the same time that its conclusion is improbable, and more generally, how low a conclusion probability is compatible with given high premise probabilities for inferences of that pattern? This is the problem of determining universal probabilistic soundness, but as noted in Section I.8 it proves most convenient to approach this problem via a consideration of probabilistic consistency. In what follows we will give mathematical precizations of the concepts of universal probabilistic soundness and probabilistic consistency, which have so far been used in a rough, informal way, and then derive some general results concerning them. As we will be mainly concerned with mathematical theory in what follows, basic methodological issues will be left aside for the present. A few preliminary foundational remarks are in order, however.

## Keywords

Factual Formula · Factual Proposition · Factual Language · Probabilistic Consistency · Material Counterpart

## Notes

1.
2. This basic theorem has been given in Suppes [46] and Adams [2], and is apparently widely known to probability theorists. Adams and Levine [7] state a partial converse to it which holds when all of the propositions involved in an inference are factual and all premises are essential: namely, that the maximum possible uncertainty of the conclusion equals the sum of the uncertainties of the premises, or else 1, whichever is least. Adams and Levine also apply linear programming analysis to determine maximum conclusion uncertainties where there are various kinds of redundancies among the premises (e.g., where (A and B) ∨ (A and C) ∨ (B and C) is inferred from A, B, and C). Interesting unsolved problems remain in determining maximum conclusion uncertainties in inferences involving conditionals. The following is a striking fact which suggests that such an investigation might have intriguing results. The minimum probability of the conclusion, B, of the 'direct' Modus Ponens inference with premises A and A ⇒ B equals the product of the probabilities of the premises, while the maximum uncertainty of the conclusion ¬A of the 'inverse' Modus Tollens inference with premises A ⇒ B and ¬B equals the uncertainty of ¬B divided by the probability of A ⇒ B. In the direct case the minimum conclusion probability is directly proportional to the conditional premise's probability, and in the inverse case the maximum conclusion uncertainty is inversely proportional to the conditional premise's probability.
3. We must reiterate here that this holds only in cases where all premises, either of the Modus Ponens inference or of the Modus Tollens inference, are probable at the same time. We will see in Section IV.1 that when the conditional A ⇒ B is accepted at one time and ¬B is learned later, it is not always rational to infer ¬A. This is connected with the fact that where the premise of a Modus Tollens inference is counterfactual, of the form "if A were the case then B would be", it may not be rational to affirm ¬A even though ¬B is accepted at the same time (though this is controversial; see Section IV.4).
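The two bounds stated in note 2 can be checked numerically. The following sketch (not from the text; the helper `search`, the grid step, and the chosen premise probabilities are illustrative assumptions) brute-forces probability distributions over the four truth-value combinations of A and B, and confirms that the minimum conclusion probability for Modus Ponens is the product of the premise probabilities, while the maximum conclusion uncertainty for Modus Tollens is u(¬B) divided by the conditional premise's probability:

```python
import itertools

# Distributions are tuples p = (P(A&B), P(A&~B), P(~A&B), P(~A&~B)).
# We search a coarse grid of such distributions; this is an illustrative
# numerical check, not Adams's linear-programming analysis.

def search(constraint, objective, minimize, step=0.01):
    """Brute-force optimum of `objective` over grid distributions on four atoms."""
    n = round(1 / step)
    best = None
    for i, j, k in itertools.product(range(n + 1), repeat=3):
        if i + j + k > n:
            continue
        p = (i * step, j * step, k * step, (n - i - j - k) * step)
        if not constraint(p):
            continue
        v = objective(p)
        if best is None or (v < best if minimize else v > best):
            best = v
    return best

close = lambda x, y: abs(x - y) < 1e-9

# Direct Modus Ponens: premises A (prob a) and A => B (conditional prob c = P(B|A)).
# Claimed bound: min P(B) = a * c.
a, c = 0.9, 0.8
mp_min = search(
    constraint=lambda p: close(p[0] + p[1], a) and p[0] + p[1] > 0
                         and close(p[0] / (p[0] + p[1]), c),
    objective=lambda p: p[0] + p[2],        # P(B)
    minimize=True)
print(mp_min)   # approx a * c = 0.72

# Inverse Modus Tollens: premises A => B (prob c) and ~B (uncertainty u = P(B)).
# Claimed bound: max u(~A) = max P(A) = u / c.
u = 0.04
mt_max = search(
    constraint=lambda p: close(p[0] + p[2], u) and p[0] + p[1] > 0
                         and close(p[0] / (p[0] + p[1]), c),
    objective=lambda p: p[0] + p[1],        # P(A) = u(~A)
    minimize=False)
print(mt_max)   # approx u / c = 0.05
```

Note how the two bounds differ in kind: the Modus Ponens minimum grows with the conditional's probability, whereas the Modus Tollens maximum shrinks as the conditional's probability grows, matching the direct/inverse proportionality contrast drawn in note 2.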