Abstract
In this chapter, I examine the work of Floridi (with Sanders) on the notion of Levels of Abstraction (LoA) and its importance for the morality of artificial agents. I critique their attempt to characterise artificial agents specifically (and systems generally) as moral agents through the use of LoAs, threshold functions, and computer-systems concepts such as state transitions and interactivity. I do this by first examining their notion of morality and then their notion of agency, in particular contrasting agent versus patient and the agent as system. Essentially, they view moral agents as systems viewed through a particular LoA; this moral level of abstraction they specify as LoA2. Their use of interactivity, autonomy, and adaptability is criticised and difficulties are noted. To cash out levels of abstraction, they give several examples; these, I claim, are particularly problematic. I then provide a systematic and comprehensive table of the relationships between interactivity, autonomy, and adaptability to suggest where these relationships might be strengthened. Finally, I take issue with the notion of natural LoAs, claiming that there are no natural LoAs. I conclude that the construction of LoA2 is too artificial and too simple to count as a natural characterisation of morality.
References
Allen, C., G. Varner, and J. Zinser. 2000. Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence 12(3): 251–261.
Chaitin, G. 1998. The limits of mathematics. Singapore: Springer.
Chaitin, G. 1999. The unknowable. Singapore: Springer.
Clarke, A.C. 1972. Report on planet three. New York: Harper & Row.
Floridi, L. (ed.). 2004. The Blackwell guide to the philosophy of computing and information. Malden: Blackwell Publishing, Ltd.
Floridi, L. 2008. The method of levels of abstraction. Minds and Machines 18: 303–329.
Floridi, L., and J.W. Sanders. 2001. Artificial evil and the foundations of computer ethics. Ethics and Information Technology 3(1): 55–66.
Floridi, L., and J.W. Sanders. 2003. The method of abstraction. In The yearbook of the artificial, ed. M. Negrotti. Bern: Peter Lang.
Floridi, L., and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines 14: 349–379.
Gill, A. 1962. Introduction to the theory of finite-state machines. New York: McGraw-Hill Book Company.
Lucas, R. 2009. Machina Ethica. Berlin: Verlag Dr. Müller.
Putnam, H. 1975. The meaning of meaning. In Mind, language and reality, ed. H. Putnam, 215–271. Cambridge: Cambridge University Press.
Weckert, J. 1986. Putnam, reference and essentialism. Dialogue 25: 509–521.
Wittgenstein, L. 1997. Philosophical investigations, 2nd ed. Cambridge, MA: Basil Blackwell.
© 2012 Springer Science+Business Media B.V.
Lucas, R. (2012). Levels of Abstraction and Morality. In: Demir, H. (ed.) Luciano Floridi’s Philosophy of Technology. Philosophy of Engineering and Technology, vol 8. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4292-5_3
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-4291-8
Online ISBN: 978-94-007-4292-5
eBook Packages: Humanities, Social Sciences and Law; Philosophy and Religion (R0)