Abstract
In an influential paper, Sparrow (2007) argues that it is immoral to deploy autonomous weapon systems (AWS) in combat. The general idea is that nobody can be held responsible for wrongful actions committed by an AWS, because nobody can predict or control the AWS. I argue that this view is incorrect: the programmer remains in control of when and how an AWS learns from experience, and can predict the AWS's non-local behaviour. This is sufficient to ensure that the programmer can be held responsible. I then present a consequentialist argument in favour of using AWS: if an AWS misclassifies non-legitimate targets as legitimate less often than human soldiers do, then using the AWS can be expected to save lives. However, there are also a number of reasons, e.g. the risk of hacking, why we should still be cautious about introducing AWS into modern warfare.
Notes
- 1. See Robillard (2017) for this idea.
- 2. Note that neither Matthias nor Sparrow is explicit about what they mean by control. However, Matthias refers to Fischer's and Ravizza's work. For an alternative notion of control, also based on Fischer's and Ravizza's work and applied to AWS, see Santoni de Sio and Van den Hoven (2018).
- 3. The programmer is not limited to such simple variables; the input could also be video or audio data, for example. If we were to make use only of video data, then the ANN would be an implicit ethical system, as there would be no reference to ethical principles.
- 4. Note, however, that this expectation can be vastly wrong. What the AWS has learned is a simplified model of the real world. Excluding relevant variables, or using biased or too few samples, can lower the accuracy of the model, to name only a couple of issues the programmer must be aware of.
References
Anderson, M., Anderson, S.L., Armen, C.: An approach to computing ethics. IEEE Intell. Syst. 21(4), 56–63 (2006)
Beauchamp, T.L., Childress, J.F.: Principles of Biomedical Ethics. Oxford University Press, Oxford (1979)
Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. In: Frankish, K., Ramsey, W. (eds.) Cambridge Handbook of Artificial Intelligence, pp. 316–334. Cambridge University Press, Cambridge (2014)
Dreyfus, H.L., Dreyfus, S.E.: What artificial experts can and cannot do. AI Soc. 6(1), 18–26 (1992)
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: ITCSC, pp. 214–226 (2012)
Fischer, J.M., Ravizza, M.: Responsibility and Control: A Theory of Moral Responsibility. Cambridge University Press, Cambridge (2000)
Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, pp. 3315–3323 (2016)
Hellström, T.: On the moral responsibility of military robots. Ethics Inf. Technol. 15(2), 99–107 (2013)
Larson, J., Mattu, S., Kirchner, L., Angwin, J.: How we analyzed the COMPAS recidivism algorithm. ProPublica (2016)
Matthias, A.: The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf. Technol. 6(3), 175–183 (2004)
Mester, L.J.: What's the point of credit scoring? Bus. Rev. 3(Sep/Oct), 3–16 (1997)
Moor, J.H.: The nature, importance, and difficulty of machine ethics. IEEE Intell. Syst. 21(4), 18–21 (2006)
Müller, V.C.: Autonomous killer robots are probably good news. In: Di Nucci, E., Santoni de Sio, F. (eds.) Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, pp. 67–81. Ashgate, London (2016)
Robillard, M.: No such thing as killer robots. J. Appl. Philos. (2017). https://doi.org/10.1111/japp.12274
Santoni de Sio, F., Van den Hoven, J.: Meaningful human control over autonomous systems: a philosophical account. Front. Robot. AI 5, 15 (2018)
Simpson, T.W., Müller, V.C.: Just war and robots' killings. Philos. Q. 66(263), 302–322 (2015)
Sparrow, R.: Killer robots. J. Appl. Philos. 24(1), 62–77 (2007)
Strawson, P.: Freedom and resentment. Proc. Br. Acad. 48, 187–211 (1962)
Van de Poel, I., Royakkers, L., Zwart, S.D.: Moral Responsibility and the Problem of Many Hands, vol. 29. Routledge, New York (2015)
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Swoboda, T. (2018). Autonomous Weapon Systems - An Alleged Responsibility Gap. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-96448-5_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-96447-8
Online ISBN: 978-3-319-96448-5
eBook Packages: Computer Science (R0)