Epistemic Values and Disinformation

Chapter in Virtue Epistemology Naturalized

Part of the book series: Synthese Library (volume 366)

Abstract

David Hume (1748) famously said, “when anyone tells me, that he saw a dead man restored to life, I immediately consider with myself, whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened.” Of course, intentionally deceptive information on many topics (not just reports of miracles) can interfere with our ability to achieve our epistemic goals of acquiring true beliefs and avoiding false beliefs. Thus, it would be beneficial to reduce the spread of such disinformation. In order to do this, we need to identify what sorts of things affect the amount of disinformation and how they affect it. Toward this end, I offer an analysis of what disinformation is. I then use this analysis to develop a game-theoretic model of the sending and receiving of disinformation, inspired by the work of Elliott Sober and Brian Skyrms and drawing on philosophical work on epistemic values.


Notes

  1.

    With the phrase “or be deceived,” Hume might have wanted to include being “deceived” by one’s senses. But I am reading deception as being intentional, as most philosophers do (see Carson 2010, 47).

  2.

    Several economists (e.g., Tullock 1967; Schelling 1968; Davis and Ferrantino 1996) have also suggested formal models of lying and deception.

  3.

    I will not try to specify how likely it must be that the information will actually mislead people.

  4.

    Even on this analysis of lying, the speaker need not intend someone to believe a falsehood outright. She might simply intend to increase someone’s degree of belief in a falsehood (see Fallis 2009, 45).

  5.

    A few philosophers (e.g., Carson 2010, 15) claim that, in addition to being believed to be false by the speaker, a lie must actually be false. Thus, they would say that Wilde’s protagonist only tried to lie about his name. But in an earlier article (Fallis 2011, 207), I give an example of a lie that is false as well as intended to mislead, but that is not likely to mislead. A liar can be wrong about whether his claim is misleading even if he is right about its truth value.

  6.

    Several philosophers claim that meaningful data must be true in order to count as information (see Fallis 2011, 202–03). But even if one accepts this claim, this sort of case shows that there can be disinformation that counts as information.

  7.

    Despite being the standard example in the biological literature, Viceroys are not actually Batesian mimics. As David Ritland and Lincoln Brower (1991) discovered, Viceroys are as unpalatable to blue jays as Monarchs. But for the purposes of this paper, I will follow Sober and treat this as an instance of Batesian mimicry, of which there are many genuine examples in nature.

  8.

    In fact, reports of miracles are arguably another case where there is a systematic benefit to disseminating a falsehood. As Hume (1977 [1748], 78) pointed out, people tend to experience an “agreeable emotion,” a sense of “surprise and wonder,” when they hear that a miracle has occurred. As a result, the people describing the miracle can take “delight in exciting the admiration of others.”

  9.

    We can essentially take the intentional stance toward “evolutionary liars,” such as the Viceroy butterflies. That is, attributing beliefs and desires to such “liars” can allow us to predict how often they “lie” (cf. Sober 1994; Skyrms 2010, 72–82).

  10.

    In an earlier article (Fallis 2011, 210), I describe researchers who place false semantic content in Wikipedia in order to see how long it takes to be corrected. I also describe educators who place false semantic content on the Internet in order to teach students how to identify false semantic content. Even though they do not intend to mislead anyone, these researchers and educators seem to have created disinformation. They have certainly created something which has the function of misleading people. But since no one benefits from such disinformation being believed, it is not captured by the model that I construct in the following section.

  11.

    It might be suggested that a big bet is simply an action rather than something that is clearly intended to represent the world as being a certain way. If so, not all bluffs may count as disinformation, strictly speaking. But we can easily imagine a poker game that requires you to say, “I have a winning hand” when you bet and to say, “I have a losing hand” when you fold.

  12.

    The poker games that people actually play, such as Seven-Card Stud or Texas Hold’em, are sufficiently complicated that they can be somewhat difficult to model. The model of stripped-down poker is sufficient for our purpose here of identifying what affects the amount of bluffing and disinformation.

  13.

    Since the sender knows whether or not the information is true, but the receiver does not, this is a game of asymmetric information (see Mansfield 1994, 47–48).

  14.

    The column label is short for “The Receiver does not believe that the Sender has a winning hand when she says that she has a winning hand.”

  15.

    In fact, the benefit of having a true belief might even depend on what the belief is about (see Fallis 2006, 181–82). For instance, the benefit of truly believing that the sender has a winning hand might be greater than the benefit of truly believing that the sender has a losing hand.

  16.

    Some deceivers may simply value our having false beliefs for its own sake. As Augustine (1952 [395], 87) pointed out, some lies are “told solely for the pleasure of lying and deceiving.”

  17.

    But Goldman (2002, 218–220) once tentatively suggested, to the contrary, that social epistemology should encompass attempts to bring about bad epistemic consequences.

  18.

    In fact, the truth not being revealed is quite common when it comes to models of epistemic utilities. For instance, scientists certainly have epistemic goals. But they never find out for sure whether or not their hypotheses are true. As a result, they always have to make do with expected epistemic utilities to guide their decision making (see Fallis 2007, 219).

  19.

    In addition, in a real poker game, a player who has been dealt a winning hand might want her opponent to think that she has been dealt a losing hand. That way, her opponent might continue to make bets that he will lose. But in this simplified model, there is no such motivation. Player B only gets to choose between calling player A’s bet and folding. Unlike with Seven-Card Stud and Texas Hold’em, there are no additional betting rounds.

  20.

    In his model of the butterfly case, Sober (1994, 78) likewise assumes that plain butterflies are always palatable Viceroys.

  21.

    Regardless of whether the jury believes that he is innocent or simply suspends judgment on his guilt, the defendant will be found not guilty. In this case, while the value of truly believing that the defendant is innocent is at least as great as the value of suspending judgment, it is no greater. Also, while the value of suspending judgment is at least as great as the value of falsely believing that the defendant is innocent, it is no greater.

  22.

    Hume (1977 [1748], 75), for instance, recommends that “we entertain a suspicion concerning any matter of fact, when the witnesses contradict each other; when they are but few, or of a doubtful character; when they have an interest in what they affirm; when they deliver their testimony with hesitation, or on the contrary, with too violent asseverations.”

  23.

    In general, a game may have more than one equilibrium point. But in this model of the sending and receiving of disinformation, there is always a unique equilibrium point.

  24.

    If the payoffs are as given in Fig. 2, but the probability that the sender has a winning hand is greater than or equal to 3/4, then the receiver should always believe what the sender says.
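
    Since Fig. 2 itself is not reproduced in this excerpt, here is a minimal sketch of the threshold logic, assuming hypothetical epistemic utilities chosen so that the cutoff happens to fall at 3/4: if believing maximizes the receiver's expected epistemic utility even when every losing-hand sender bluffs, then it does so against any sender strategy.

```python
# Illustrative sketch only: the epistemic utilities below are
# hypothetical stand-ins for Fig. 2's payoffs (which are not
# reproduced here), chosen so that the threshold comes out at 3/4.

U_TRUE = 1.00     # utility of acquiring a true belief
U_FALSE = 0.00    # utility of acquiring a false belief
U_SUSPEND = 0.75  # utility of suspending judgment

def should_always_believe(prior_winning: float) -> bool:
    """In the worst case for the receiver, every losing-hand sender
    bluffs, so the posterior that the claim 'I have a winning hand'
    is true equals the prior. If believing beats suspending judgment
    even then, it beats it against every sender strategy."""
    expected_if_believe = (prior_winning * U_TRUE
                           + (1.0 - prior_winning) * U_FALSE)
    return expected_if_believe >= U_SUSPEND

for prior in (0.5, 0.7, 0.75, 0.9):
    print(f"prior = {prior:.2f}: always believe? {should_always_believe(prior)}")
# Only priors of 3/4 or more clear the threshold with these utilities.
```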

  25.

    This definition can easily be generalized to three or more possible actions.

  26.

    Since she will definitely say that she has a winning hand whenever she has a winning hand, w is also the probability that the sender has a winning hand and says that she has a winning hand. p ∙ (1 − w) is the probability that the sender has a losing hand and says that she has a winning hand. So, w/(w + p ∙ (1 − w)) is the probability that the sender has a winning hand when she says that she has a winning hand. Thus, 1 − q = w/(w + p ∙ (1 − w)). Given q and w, we can solve for p: p = (w/(1 − w)) ∙ (q/(1 − q)).
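
    A minimal numerical check of this algebra (the sample values of w and q are purely illustrative, not taken from the chapter):

```python
# w: prior probability of a winning hand; p: probability that a sender
# with a losing hand bluffs; 1 - q: posterior probability of a winning
# hand given the claim "I have a winning hand".

def posterior_winning(w: float, p: float) -> float:
    """P(winning | says 'winning') by Bayes' rule; a sender with a
    winning hand always says that she has one."""
    return w / (w + p * (1.0 - w))

def bluff_rate(w: float, q: float) -> float:
    """Solve 1 - q = w / (w + p * (1 - w)) for p."""
    return (w / (1.0 - w)) * (q / (1.0 - q))

w, q = 0.5, 0.25
p = bluff_rate(w, q)
assert abs(posterior_winning(w, p) - (1.0 - q)) < 1e-12
print(f"w = {w}, q = {q}  =>  bluff rate p = {p:.4f}")  # p = 1/3
```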

  27.

    The equilibrium point was calculated (and the game tree was drawn) using the open-source Gambit software (see McKelvey et al. 2007). Murray Gell-Mann (2009, ix) has suggested that, in a wide variety of contexts, people and animals decide to send disinformation about 1/7 of the time. But the amount of disinformation can actually vary greatly depending, for example, on the costs and benefits to the receiver of the information.

  28.

    We can use an analogous technique to determine how often the receiver will believe what the sender says (see Sober 1994, 79). However, this arguably takes us beyond the scope of epistemology. The sender’s payoffs are the main determinants of the receiver’s level of credulity and, as noted above, we cannot characterize the sender’s payoffs in terms of epistemic utilities.

  29.

    In other words, we are really interested in p ∙ (1 − w) rather than p itself.

  30.

    It is a mixed bag if these three things change in different directions.

  31.

    Why does this happen? Basically, if more players are dealt a winning hand, players who are dealt a losing hand have more players that they can plausibly mimic. For instance, if there are more Monarchs out there, then blue jays have to be fairly credulous because most gaudy butterflies are telling the truth.
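
    These comparative statics can be made concrete by reusing the algebra from note 26: if, at a mixed equilibrium, losing-hand senders bluff just often enough to hold the receiver's posterior at her indifference point 1 − q*, then p = (w/(1 − w)) ∙ (q*/(1 − q*)), which rises with w. A minimal sketch, assuming a hypothetical indifference value q* = 0.25 (not a value from the chapter):

```python
# Comparative statics from note 31, reusing note 26's algebra. Q_STAR
# is a hypothetical indifference point for the receiver: at equilibrium
# the posterior that a "winning hand" claim is true stays at 1 - Q_STAR.
Q_STAR = 0.25

def equilibrium_bluff_rate(w: float) -> float:
    """Bluffing probability for losing-hand senders that keeps the
    receiver's posterior at her indifference point (capped at 1)."""
    return min(1.0, (w / (1.0 - w)) * (Q_STAR / (1.0 - Q_STAR)))

for w in (0.2, 0.4, 0.6, 0.75):
    p = equilibrium_bluff_rate(w)
    print(f"w = {w:.2f}: bluff rate p = {p:.3f}, "
          f"disinformation rate p*(1-w) = {p * (1.0 - w):.3f}")
# Both rates rise with w: the more senders genuinely hold winning hands
# (the more Monarchs there are), the more bluffing the losing-hand
# senders (the Viceroys) can get away with.
```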

  32.

    For similar reasons, Michael Davis and Michael Ferrantino (1996) argue that we should expect to see more negative lies than positive lies in politics. During campaigns, politicians will be motivated to make false positive claims about themselves (and their policies) and to make false negative claims about their opponents (and their policies). However, if a politician is elected, voters will not have much opportunity to uncover any lies that she told about her opponents and their policies. So, the potential costs (e.g., in terms of gaining a reputation for insincerity) of such negative lies will be lower.

  33.

    It is certainly possible to change the sender’s payoffs in this case. For instance, we might increase the penalty for perjury. But as noted above, changing the sender’s payoffs will not change the amount of disinformation.

References

  • Augustine. 1952 [395]. Lying. In Treatises on various subjects, ed. R.J. Deferrari, 53–120. New York: Catholic University of America.

  • Carson, Thomas L. 2010. Lying and deception. New York: Oxford University Press.

  • Connolly, Terry. 1987. Decision theory, reasonable doubt, and the utility of erroneous acquittals. Law and Human Behavior 11: 101–112.

  • Davis, Michael L., and Michael Ferrantino. 1996. Towards a positive theory of political rhetoric: Why do politicians lie? Public Choice 88: 1–13.

  • Descartes, René. 1996 [1641]. Meditations on first philosophy. Cambridge: Cambridge University Press.

  • Fallis, Don. 2006. Epistemic value theory and social epistemology. Episteme 2: 177–188.

  • Fallis, Don. 2007. Attitudes toward epistemic risk and the value of experiments. Studia Logica 86: 215–246.

  • Fallis, Don. 2009. What is lying? Journal of Philosophy 106: 29–56.

  • Fallis, Don. 2011. Floridi on disinformation. Etica & Politica 13: 201–214.

  • Farid, Hany. 2009. Digital doctoring: Can we trust photographs? In Deception, ed. Brooke Harrington, 95–108. Stanford: Stanford University Press.

  • Farquhar, Michael. 2005. A treasury of deception. New York: Penguin.

  • Fetzer, James H. 2004. Disinformation: The use of false information. Minds and Machines 14: 231–240.

  • Floridi, Luciano. 2011. The philosophy of information. Oxford: Oxford University Press.

  • Gell-Mann, Murray. 2009. Foreword. In Deception, ed. Brooke Harrington, vii–xii. Stanford: Stanford University Press.

  • Goldman, Alvin I. 1999. Knowledge in a social world. New York: Oxford University Press.

  • Goldman, Alvin I. 2002. Reply to commentators. Philosophy and Phenomenological Research 64: 215–227.

  • Good, I.J. 1967. On the principle of total evidence. British Journal for the Philosophy of Science 17: 319–322.

  • Hume, David. 1977 [1748]. An enquiry concerning human understanding. Indianapolis: Hackett.

  • Jackson, Brooks, and Kathleen H. Jamieson. 2007. Unspun: Finding facts in a world of disinformation. New York: Random House.

  • James, William. 1979 [1896]. The will to believe and other essays in popular philosophy. Cambridge, MA: Harvard University Press.

  • Kant, Immanuel. 1959 [1785]. Foundations of the Metaphysics of Morals. Trans. Lewis W. Beck. New York: Macmillan.

  • Levi, Isaac. 1962. On the seriousness of mistakes. Philosophy of Science 29: 47–65.

  • Mansfield, Edwin. 1994. Microeconomics, 8th ed. New York: W. W. Norton & Company.

  • McKelvey, Richard D., Andrew M. McLennan, and Theodore L. Turocy. 2007. Gambit: Software tools for game theory. Version 0.2007.12.04. http://www.gambit-project.org.

  • Newman, Matthew L., James W. Pennebaker, Diane S. Berry, and Jane M. Richards. 2003. Lying words: Predicting deception from linguistic styles. Personality and Social Psychology Bulletin 29: 665–675.

  • Reiley, David H., Michael B. Urbancic, and Mark Walker. 2008. Stripped-down poker: A classroom game with signaling and bluffing. Journal of Economic Education 39: 323–341.

  • Riggs, Wayne D. 2003. Balancing our epistemic ends. Noûs 37: 342–352.

  • Ritland, David B., and Lincoln P. Brower. 1991. The viceroy butterfly is not a Batesian mimic. Nature 350: 497–498.

  • Rubin, Paul H. 1991. The economics of regulating deception. Cato Journal 10: 667–690.

  • Schauer, Frederick, and Richard Zeckhauser. 2009. Paltering. In Deception, ed. Brooke Harrington, 38–54. Stanford: Stanford University Press.

  • Schelling, Thomas C. 1968. Game theory and the study of ethical systems. Journal of Conflict Resolution 12: 34–44.

  • Serra-Garcia, Marta. 2009. Lying or truth-telling: Why does it matter in economics? Aenorm 62: 4–7.

  • Skyrms, Brian. 2010. Signals. New York: Oxford University Press.

  • Sober, Elliott. 1994. The primacy of truth-telling and the evolution of lying. In From a biological point of view, 71–92. Cambridge: Cambridge University Press.

  • Sorensen, Roy. 2007. Bald-faced lies! Lying without the intent to deceive. Pacific Philosophical Quarterly 88: 251–264.

  • Tullock, Gordon. 1967. The economics of lying. In Toward a mathematics of politics, 133–143. Ann Arbor: University of Michigan Press.

  • Williams, Bernard. 2002. Truth and truthfulness. Princeton: Princeton University Press.

Acknowledgements

I would like to thank Erika Au, Derek Ball, Tony Doyle, Abrol Fairweather, James Mahon, Kay Mathiesen, Andrew Peet, and Andreas Stokke for very helpful conversations and suggestions on this topic. I would also like to thank the Epistemology Research Group at the University of Edinburgh for their feedback. Much of this work was completed while I was a Visiting Fellow in the Centre for Ethics, Philosophy and Public Affairs at the University of St. Andrews.

Copyright information

© 2014 Springer International Publishing Switzerland

Cite this chapter

Fallis, D. (2014). Epistemic Values and Disinformation. In: Fairweather, A. (ed.) Virtue Epistemology Naturalized. Synthese Library, vol 366. Springer, Cham. https://doi.org/10.1007/978-3-319-04672-3_10
