Artificial Morality. Concepts, Issues and Challenges
Artificial morality is an emerging field in artificial intelligence that explores whether and how artificial systems can be furnished with moral capacities. Since this will have a deep impact on our lives, it is important to discuss the possibility of artificial morality and its implications for individuals and society. Starting with some examples of artificial morality, the article turns to conceptual issues that are important for delineating the possibility and scope of artificial morality: in particular, what an artificial moral agent is; how morality should be understood in the context of artificial morality; and how human and artificial morality compare. The article then outlines how moral capacities can be implemented in artificial systems in general, and in more detail with respect to an elder care system. On the basis of these findings, some of the arguments found in public discourse about artificial morality are reviewed, and the prospects and challenges of artificial morality are discussed.
Keywords: Artificial morality; Artificial moral agents; Moral implementation; Self-driving cars; Care robots