Social choice ethics in artificial intelligence

Abstract

A major approach to the ethics of artificial intelligence (AI) is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behavior. These decisions must be made up front in the initial AI design; designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, and some decision options yield pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics. Attention should focus on these issues, not on social choice.
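
The aggregation point can be made concrete with a small illustration. The sketch below is not from the paper; the options, rankings, and choice of rules are hypothetical. It shows that the same individual rankings can produce different “aggregate” views under two standard social choice rules, plurality and Borda count, which is one reason the aggregation decision shapes AI behavior (cf. Arrow 1951).

```python
# A minimal sketch (hypothetical options and rankings, not from the paper):
# the same individual rankings yield different aggregate choices under
# different aggregation rules.
from collections import Counter

# Seven hypothetical individuals rank three options, best first.
rankings = (
    [["A", "B", "C"]] * 3 +  # three individuals rank A first
    [["B", "C", "A"]] * 2 +  # two rank B first
    [["C", "B", "A"]] * 2    # two rank C first
)

def plurality_winner(rankings):
    """Each individual's top choice gets one vote; the most votes wins."""
    return Counter(r[0] for r in rankings).most_common(1)[0][0]

def borda_winner(rankings):
    """Each rank position earns points (last place = 0); highest total wins."""
    scores = Counter()
    for ranking in rankings:
        for points, option in enumerate(reversed(ranking)):
            scores[option] += points
    return scores.most_common(1)[0][0]

print(plurality_winner(rankings))  # -> A (3 first-place votes vs. 2 and 2)
print(borda_winner(rankings))      # -> B (Borda scores: A=6, B=9, C=6)
```

Under plurality the aggregate view is A; under Borda count it is B. Neither rule is uniquely correct, which is one sense in which the aggregation decision must be settled up front by the designer.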

Notes

  1.

    Note that while consciousness may play a role in ethics learning among human children, it is not essential for AI. The essential feature is that ethics is learned via interaction with the environment, regardless of whether that interaction involves consciousness.

  2.

    One exception, in which social choice is (briefly) discussed in the context of CEV, is Tarleton (2010). Keyword searches in Google Scholar identified no other discussions of social choice in CEV or bottom–up ethics. There is a more extensive study of “computational social choice” relating aspects of social choice theory and computer science (Brandt et al. 2015).

  3.

    This is similar to the “boundary problem” in democracy (Arrhenius 2005).

  4.

    Martin (2017) also considers having AIs set their own ethics or the ethics of other AIs; more on this below.

  5.

    Tay was programmed to learn from (and thus give standing to) Twitter users who interact with it, which quickly devolved into deviance and obscenity as Twitter users taught it to misbehave. Microsoft has since been wrestling with the question of how to give standing to a more appropriate mix of people.

  6.

    There is a certain irony that some proponents of CEV speak in terms of giving standing only to humanity but also favor a transition to posthumanity (e.g., Bostrom 2008).

  7.

    For an argument against Benatar’s views, see Baum (2008).

  8.

    This happened in 2000 and 2016, when Al Gore and Hillary Clinton, respectively, received more votes from individual voters, but George W. Bush and Donald Trump, respectively, received more votes in the electoral college.

  9.

    There is no indication that Tay was designed with bottom–up ethics in mind, but the net result is the same in that Tay acquired its principles for behavior via input from the people it interacted with.

References

  1. Adams FC (2008) Long-term astrophysical processes. In: Bostrom N, Ćirković MM (eds) Global catastrophic risks. Oxford University Press, Oxford, pp 33–47

  2. Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12:251–261

  3. Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf Technol 7(3):149–155

  4. Anomaly J (2015) What’s wrong with factory farming? Public Health Ethics 8(3):246–254

  5. Arrhenius G (2005) The boundary problem in democratic theory. In: Tersman F (ed) Democracy unbound: basic explorations I. Filosofiska Institutionen, Stockholm, pp 14–29

  6. Arrhenius G (2011) The impossibility of a satisfactory population ethics. In: Dzhafarov E, Lacey P (eds) Descriptive and normative approaches to human behavior. World Scientific, Singapore, pp 1–26

  7. Arrhenius G, Rabinowicz W (2015) The value of existence. In: Hirose I, Olson J (eds) The Oxford handbook of value theory. Oxford University Press, Oxford, pp 424–443

  8. Arrow KJ (1951) Social choice and individual values. Wiley, New York

  9. Balliet D, Wu J, De Dreu CKW (2014) Ingroup favoritism in cooperation: a meta-analysis. Psychol Bull 140(6):1556–1581

  10. Baron RS (2005) So right it’s wrong: groupthink and the ubiquitous nature of polarized group decision making. Adv Exp Soc Psychol 37:219–253

  11. Baum SD (2008) Better to exist: a reply to Benatar. J Med Ethics 34(12):875–876

  12. Baum SD (2009) Description, prescription and the choice of discount rates. Ecol Econ 69(1):197–205

  13. Benatar D (2006) Better never to have been: the harm of coming into existence. Oxford University Press, Oxford

  14. Bohannon J (2015) Fears of an AI pioneer. Science 349(6245):252

  15. Borenstein J, Arkin R (2016) Robotic nudges: the ethics of engineering a more socially just human being. Sci Eng Ethics 22(1):31–46

  16. Bostrom N (2008) Why I want to be a posthuman when I grow up. In: Gordijn B, Chadwick R (eds) Medical enhancement and posthumanity. Springer, Berlin, pp 107–136

  17. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  18. Brandt F, Conitzer V, Endriss U, Lang J, Procaccia AD (2015) Handbook of computational social choice. Cambridge University Press, Cambridge

  19. Buchanan A (2009) Moral status and human enhancement. Philos Public Aff 37(4):346–381

  20. Clark J (2016) Artificial intelligence has a ‘sea of dudes’ problem. Bloomberg, New York

  21. Cockell CS (2007) Originism: ethics and extraterrestrial life. J Br Interplanet Soc 60:147–153

  22. de Condorcet M (1785) Essai sur l’Application de l’Analyse à la Probabilité des Décisions Rendues à la Pluralité des Voix. L’imprimerie Royale, Paris

  23. Fossat P, Bacqué-Cazenave J, De Deurwaerdère P, Delbecque JP, Cattaert D (2014) Anxiety-like behavior in crayfish is controlled by serotonin. Science 344(6189):1293–1297

  24. Foucault M (1961) Folie et Déraison: Histoire de la Folie à l’âge Classique. Plon, Paris

  25. Frederick S, Loewenstein G, O’Donoghue T (2002) Time discounting and time preference: a critical review. J Econ Lit 40(2):351–401

  26. Funk C, Kennedy B, Podrebarac Sciupac E (2016) U.S. public wary of biomedical technologies to ‘enhance’ human abilities. Pew Research Center

  27. Gibbs S (2016) Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown. The Guardian

  28. Ginges J, Atran S, Medin D, Shikaki K (2007) Sacred bounds on rational resolution of violent political conflict. Proc Natl Acad Sci 104(18):7357–7360

  29. Goertzel B (2016) Infusing advanced AGIs with human-like value systems: two theses. J Evol Technol 26(1):50–72

  30. Hannon B (1998) How might nature value man? Ecol Econ 25:265–279

  31. Harsanyi JC (1996) Utilities, preferences, and substantive goods. Soc Choice Welf 14(1):129–145

  32. Holbrook D (1997) The consequentialistic side of environmental ethics. Environ Values 6:87–96

  33. Hubbard FP (2011) ‘Do androids dream?’: Personhood and intelligent artifacts. Temple Law Rev 83:405–441

  34. Klein A (2016) Robot ranchers monitor animals on giant Australian farms. New Scientist

  35. Lin P (2016) Why ethics matters for autonomous cars. In: Maurer M, Gerdes JC, Lenz B, Winner H (eds) Autonomous driving: technical, legal and social aspects. Springer, Berlin, pp 69–85

  36. Marglin SA (1963) The social rate of discount and the optimal rate of investment. Q J Econ 77(1):95–111

  37. Martin D (2017) Who should decide how machines make morally laden decisions? Sci Eng Ethics 23(4):951–967

  38. Mersky AC, Samaras C (2016) Fuel economy testing of autonomous vehicles. Transp Res Part C Emerg Technol 65:31–48

  39. Metz R (2014) Startup Knightscope is preparing to roll out human-size robot patrols. MIT Technol Rev

  40. Muehlhauser L, Helm L (2012) Intelligence explosion and machine ethics. In: Eden A, Søraker J, Moor JH, Steinhart E (eds) Singularity hypotheses: a scientific and philosophical assessment. Springer, Berlin, pp 101–126

  41. Ng YK (1990) Welfarism and utilitarianism: a rehabilitation. Utilitas 2(2):171–193

  42. Ng YK (1999) Utility, informed preference, or happiness: following Harsanyi’s argument to its logical conclusion. Soc Choice Welf 16(2):197–216

  43. O’Malley-James JT, Cockell CS, Greaves JS, Raven JA (2014) Swansong biospheres II: the final signs of life on terrestrial planets near the end of their habitable lifetimes. Int J Astrobiol 13:229–243

  44. Openshaw S (1983) The modifiable areal unit problem. Geo Books, Norwich

  45. Pew Research Center (2017) Changing attitudes on gay marriage

  46. Picard R (1997) Affective computing. MIT Press, Cambridge

  47. Ritov I, Baron J (1999) Protected values and omission bias. Organ Behav Hum Decis Process 79(2):79–94

  48. Rolston H III (1986) The preservation of natural value in the solar system. In: Hargrove EC (ed) Beyond spaceship Earth: environmental ethics and the solar system. Sierra Club Books, San Francisco, pp 140–182

  49. Rose JD, Arlinghaus R, Cooke SJ, Diggles BK, Sawynok W, Stevens ED, Wynne CDL (2014) Can fish really feel pain? Fish Fish 15(1):97–133

  50. Schienke EW, Tuana N, Brown DA, Davis KJ, Keller K, Shortle JS, Stickler M, Baum SD (2009) The role of the NSF Broader Impacts Criterion in enhancing research ethics pedagogy. Soc Epistemol 23(3–4):317–336

  51. Schienke EW, Baum SD, Tuana N, Davis KJ, Keller K (2011) Intrinsic ethics regarding integrated assessment models for climate management. Sci Eng Ethics 17(3):503–523

  52. Stone C (1972) Should trees have standing? Toward legal rights for natural objects. South Calif Law Rev 45:450–501

  53. Stone J, Fernandez NC (2008) To practice what we preach: the use of hypocrisy and cognitive dissonance to motivate behavior change. Soc Personal Psychol Compass 2(2):1024–1051

  54. Sunstein CR (2000) Standing for animals. UCLA Law Rev 47(5):1333–1368

  55. Tarleton N (2010) Coherent extrapolated volition: a meta-level approach to machine ethics. The Singularity Institute, Berkeley, CA

  56. Thaler R, Sunstein C (2008) Nudge: improving decisions about health, wealth, and happiness. Yale University Press, New Haven

  57. Tonn B (1996) A design for future-oriented government. Futures 28(5):413–431

  58. Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  59. Wallach W, Allen C, Smit I (2008) Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI & Soc 22(4):565–582

  60. Yampolskiy RV (2013) Artificial intelligence safety engineering: why machine ethics is a wrong approach. In: Müller VC (ed) Philosophy and theory of artificial intelligence. Springer, Berlin, pp 389–396

  61. Yazawa M (2016) Contested conventions: the struggle to establish the constitution and save the union, 1787–1789. Johns Hopkins University Press, Baltimore

  62. Yudkowsky E (2004) Coherent extrapolated volition. The Singularity Institute, San Francisco

Acknowledgements

Anders Sandberg provided helpful discussion for the development of this paper. Tony Barrett and two anonymous reviewers provided helpful feedback on earlier drafts. Any errors or shortcomings in the paper are the author’s alone. Work on this paper was funded in part by Future of Life Institute Grant Number 2015-143911. The views in this paper are the author’s and are not necessarily the views of the Future of Life Institute or the Global Catastrophic Risk Institute.

Author information

Correspondence to Seth D. Baum.

About this article

Cite this article

Baum, S.D. Social choice ethics in artificial intelligence. AI & Soc 35, 165–176 (2020). https://doi.org/10.1007/s00146-017-0760-1

Keywords

  • Artificial intelligence
  • Ethics
  • Social choice
  • Standing
  • Measurement
  • Aggregation