Abstract
This paper offers practical advice about how to interact with machines that we have reason to believe could have minds. I argue that we should approach these interactions by assigning credences to judgments about whether the machines in question can think. We should treat the premises of philosophical arguments about whether these machines can think as evidence that may increase or reduce these credences. I describe two cases in which you should refrain from acting as your favored philosophical view about thinking machines suggests. Even if you believe that machines are mindless, you should acknowledge that treating them as if they are mindless risks wronging them. Conversely, suppose your considered philosophical view that a machine has a mind leads you to consider dating it. You may have reason to regret that decision should these dates lead to a life-long relationship with a mindless machine. In the paper’s final section, I suggest that building a machine capable of performing all intelligent human behavior should produce a general increase in confidence that machines can think. Any reasonable judge should count this feat as evidence in favor of machines having minds. This rational nudge could lead to broad acceptance of the idea that machines can think.
Notes
See Agar (2014) for a description of this approach to uncertainties about the directives of utilitarianism.
I have presented a credence of 0 as reflecting the judgment that there is no chance of wronging a computer in a way that requires it to have a mind. We should also accept very low positive credences—say .001—as reflecting the judgment that there is a negligible chance of harming a being in ways specific to beings with minds. Perhaps it is appropriate to assign a credence of .001 to the proposition that cutting down a tree causes it to suffer. A very low credence such as this suggests that those who cut down a tree can generally ignore the possibility that it suffers. It does not have the practical implications of a .3 credence assigned to the proposition that a computer that produces all human intelligent behavior can have thoughts and feelings.
See Pearcey (2015) for an argument that the theory of evolution contains a contradiction.
See Basl (2014) for discussion of what it would take for a machine to have interests that should be taken into account in our moral deliberations.
An anonymous referee makes the point that there is a moral dimension to this rejection. The decision not to date Sam suggests an assessment that Sam may find offensive. I argue that the principal reasons not to date Sam are prudential. These reasons should not be viewed as morally offensive in the same way as a straightforwardly false racist reason to discontinue a relationship.
See the discussion in Sparrow (2004).
See Erica Neely (2014) for the suggestion that it is appropriate to err on the side of caution in such cases. When we apply her reasoning to Alex’s story, we acknowledge that it is better to cause Alex’s creator the inconvenience of having to do without Alex’s recyclable materials than it is to cause suffering to a being with a mind. See David Gunkel (2018, section 3.11) for discussion of Neely’s argument.
References
Agar, N. (2014). How to insure against utilitarian overconfidence. Monash Bioethics Review, 32, 162–171.
Agar, N. (2019). How to be human in the digital economy. Cambridge: MIT Press.
Basl, J. (2014). Machines as moral patients we shouldn’t care about (yet): the interests and welfare of current machines. Philosophy and Technology, 27(1), 79–96.
Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins.
Christensen, D. (2007). Epistemology of disagreement: the good news. Philosophical Review, 116, 187–218.
Christensen, D., & Lackey, J. (Eds.). (2013). The epistemology of disagreement: new essays. New York: Oxford University Press.
Danaher, J. (2018). The symbolic-consequences argument in the sex robot debate. In J. Danaher (Ed.), Robot sex: Social and ethical implications. Cambridge: MIT Press.
Dennett, D. (1991). Consciousness explained. Boston: Little, Brown, and Co.
Gunkel, D. (2018). Robot rights. Cambridge: MIT Press.
Hájek, A. (2018). Pascal's wager. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018 edition). URL = <https://plato.stanford.edu/archives/sum2018/entries/pascal-wager/>. Accessed 2 Feb 2019.
Hauskeller, M. (2018). Automatic sweethearts. In J. Danaher (Ed.), Robot sex: Social and ethical implications. Cambridge: MIT Press.
Kurzweil, R. (2005). The singularity is near: when humans transcend biology. London: Penguin.
Lycan, W. (1987). Consciousness. Cambridge: MIT Press.
Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111.
Pascal, B. (1910). Pensées (W. F. Trotter, Trans.). London: Dent.
Pearcey, N. (2015). Finding truth: 5 principles for unmasking atheism, secularism, and other god substitutes. Colorado Springs: David C Cook.
Pettigrew, R. (2013). Epistemic utility and norms for credences. Philosophy Compass, 8(10), 897–908.
Pettigrew, R. (2016). Epistemic utility arguments for probabilism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016 edition). URL = <https://plato.stanford.edu/archives/spr2016/entries/epistemic-utility/>. Accessed 2 Feb 2019.
Rosenthal, D. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.
Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 89–119.
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–424.
Sparrow, R. (2004). The Turing triage test. Ethics and Information Technology, 6, 203–213.
Acknowledgments
This paper has been improved by the comments of Pablo Barranquero, Stuart Brock, Lucinda Campbell, Juliet Floyd, Michael Hauskeller, Bengt Kayser, Simon Keller, Edwin Mares, Jonathan Pengelly, Russell Powell, Johann Roduit, Rob Sparrow, Nicole Vincent, and Mark Walker. I have also benefited from audiences at the University of Zurich, University of Malaga, Boston University, New Mexico State University, University of Texas at El Paso, Victoria University of Wellington, and Aarhus University, and from two anonymous referees for this journal.
Cite this article
Agar, N. How to Treat Machines that Might Have Minds. Philos. Technol. 33, 269–282 (2020). https://doi.org/10.1007/s13347-019-00357-8