Minds and Machines, Volume 21, Issue 3, pp 465–474

Can we Develop Artificial Agents Capable of Making Good Moral Decisions?

Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9
  • Herman T. Tavani

Can an artificial moral agent (AMA) qualify as a legitimate moral agent or, for that matter, as any kind of “agent” at all? Is an autonomous system (AS) capable of satisfying the conditions required for (genuine) autonomous behaviour? Questions pertaining to agency and autonomy—whether viewed as distinct concepts or as notions that are inextricably linked—have traditionally been the province of philosophers and ethicists. More recently, these questions have also piqued the interest of many computer scientists and engineers, as some issues in the emerging fields of “artificial morality” and “machine ethics” overlap with concerns associated with the field of artificial intelligence (AI). However, a cluster of “practical” issues affecting AMA development also arises. These include questions about which kinds of policies should inform and guide the development of AMAs, and about how much decision-making responsibility should be given to these systems. In Moral Machines: Teaching Robots Right...



Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

1. Department of Philosophy, Rivier College, Nashua, USA
