
Autonomous Weapon Systems - An Alleged Responsibility Gap

  • Conference paper
  • In: Philosophy and Theory of Artificial Intelligence 2017 (PT-AI 2017)
  • Part of the book series: Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE, volume 44)

Abstract

In an influential paper, Sparrow (2007) argues that it is immoral to deploy autonomous weapon systems (AWS) in combat. The general idea is that nobody can be held responsible for wrongful actions committed by an AWS, because nobody can predict or control the AWS. I argue that this view is incorrect. The programmer remains in control of when and how an AWS learns from experience, and the programmer can predict the non-local behaviour of the AWS. This is sufficient to ensure that the programmer can be held responsible. I then present a consequentialist argument in favour of using AWS: if an AWS misclassifies non-legitimate targets as legitimate less often than human soldiers do, then deploying the AWS can be expected to save lives. Nevertheless, there are a number of reasons, e.g. the risk of hacking, why we should still be cautious about introducing AWS to modern warfare.
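The consequentialist claim can be made concrete with a toy expected-value calculation. The sketch below is purely illustrative: the misclassification rates and the number of targeting decisions are invented for the example and do not come from the paper.

```python
# Illustrative expected-harm comparison; all rates are hypothetical.

def expected_wrongful_engagements(p_fp: float, decisions: int) -> float:
    """Expected number of non-legitimate targets wrongly engaged,
    given a false-positive rate p_fp over a number of targeting decisions."""
    return p_fp * decisions

decisions = 10_000     # assumed number of targeting decisions in a campaign
p_fp_human = 0.020     # hypothetical human false-positive rate
p_fp_aws = 0.005       # hypothetical AWS false-positive rate

harm_human = expected_wrongful_engagements(p_fp_human, decisions)
harm_aws = expected_wrongful_engagements(p_fp_aws, decisions)

print(f"expected wrongful engagements, human: {harm_human:.0f}")  # 200
print(f"expected wrongful engagements, AWS:   {harm_aws:.0f}")    # 50
```

On these made-up numbers the AWS would be expected to wrongly engage 150 fewer non-legitimate targets, which is the structure of the consequentialist argument above.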


Notes

  1. See Robillard (2017) for the idea.

  2. Note that neither Matthias nor Sparrow is explicit about what they mean by control. However, Matthias refers to Fischer and Ravizza's work. For an alternative notion of control, which is also based on Fischer and Ravizza's work and applied to AWS, see Santoni de Sio and Van den Hoven (2018).

  3. The programmer is not limited to such simple variables; the input could also be video or audio data, for example. If we were to use only video data, the ANN would be an implicit ethical system, as there would be no explicit reference to ethical principles. (A minimal illustrative sketch of such a classifier follows these notes.)

  4. Note, however, that this expectation can be vastly wrong. What the AWS has learned is a simplified model of the real world. Excluding relevant variables, or training on biased or too few samples, can lower the accuracy of the model, to name only a couple of issues the programmer must be aware of. (The second sketch following these notes illustrates the point.)

  5. For a critical discussion of algorithmic fairness and discrimination, see e.g. Dwork et al. (2012) and Hardt et al. (2016).
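To illustrate note 3, here is a minimal sketch of a feedforward ANN that classifies targets from a hand-chosen vector of input variables. It is not from the paper: the feature names are hypothetical, and the weights are drawn at random rather than learned from labelled examples.

```python
import numpy as np

# Minimal sketch of a feedforward ANN over hand-chosen input variables.
# Features and weights are hypothetical; real weights would be learned
# from labelled examples rather than drawn at random.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(3, 8))   # input layer (3 variables) -> 8 hidden units
W2 = rng.normal(size=(8, 1))   # hidden units -> single output

def classify(features: np.ndarray) -> float:
    """Return P(target is legitimate) for one feature vector."""
    hidden = sigmoid(features @ W1)
    return sigmoid(hidden @ W2).item()

# e.g. [carries_weapon, wears_uniform, normalised_distance]
x = np.array([1.0, 0.0, 0.3])
print(f"P(legitimate) = {classify(x):.2f}")
```

Were the input raw video frames rather than such explicit variables, the same architecture would apply with pixels as features; this is what makes the resulting system an implicit rather than explicit ethical system.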

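To illustrate note 4, a small synthetic experiment (again illustrative only, and assuming scikit-learn is available) shows how excluding a relevant variable lowers held-out accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: excluding a relevant variable lowers
# held-out accuracy. All data here is artificial.
rng = np.random.default_rng(1)

X = rng.normal(size=(5000, 2))            # two relevant variables
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # label depends on both

X_train, y_train = X[:4000], y[:4000]
X_test, y_test = X[4000:], y[4000:]

full = LogisticRegression().fit(X_train, y_train)
partial = LogisticRegression().fit(X_train[:, :1], y_train)  # variable 2 excluded

print(f"accuracy with both variables: {full.score(X_test, y_test):.3f}")
print(f"accuracy with one excluded:   {partial.score(X_test[:, :1], y_test):.3f}")
```

Biased or too few training samples degrade accuracy in an analogous way: the model fits a simplification of the world that no longer matches the distribution it is deployed on.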
References

  • Anderson, M., Anderson, S.L., Armen, C.: An approach to computing ethics. IEEE Intell. Syst. 21(4), 56–63 (2006)
  • Beauchamp, T.L., Childress, J.F.: Principles of Biomedical Ethics. Oxford University Press, Oxford (1979)
  • Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. In: Frankish, K., Ramsey, W. (eds.) Cambridge Handbook of Artificial Intelligence, pp. 316–334. Cambridge University Press, Cambridge (2014)
  • Dreyfus, H.L., Dreyfus, S.E.: What artificial experts can and cannot do. AI Soc. 6(1), 18–26 (1992)
  • Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS), pp. 214–226 (2012)
  • Fischer, J.M., Ravizza, M.: Responsibility and Control: A Theory of Moral Responsibility. Cambridge University Press, Cambridge (2000)
  • Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, pp. 3315–3323 (2016)
  • Hellström, T.: On the moral responsibility of military robots. Ethics Inf. Technol. 15(2), 99–107 (2013)
  • Larson, J., Mattu, S., Kirchner, L., Angwin, J.: How we analyzed the COMPAS recidivism algorithm. ProPublica (2016)
  • Matthias, A.: The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf. Technol. 6(3), 175–183 (2004)
  • Mester, L.J.: What's the point of credit scoring? Bus. Rev. Sep/Oct, 3–16 (1997)
  • Moor, J.H.: The nature, importance, and difficulty of machine ethics. IEEE Intell. Syst. 21(4), 18–21 (2006)
  • Müller, V.C.: Autonomous killer robots are probably good news. In: Di Nucci, E., Santoni de Sio, F. (eds.) Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, pp. 67–81. Ashgate, London (2016)
  • Robillard, M.: No such thing as killer robots. J. Appl. Philos. (2017). https://doi.org/10.1111/japp.12274
  • Santoni de Sio, F., Van den Hoven, J.: Meaningful human control over autonomous systems: a philosophical account. Front. Robot. AI 5, 15 (2018)
  • Simpson, T.W., Müller, V.C.: Just war and robots' killings. Philos. Q. 66(263), 302–322 (2015)
  • Sparrow, R.: Killer robots. J. Appl. Philos. 24(1), 62–77 (2007)
  • Strawson, P.: Freedom and resentment. Proc. Br. Acad. 48, 187–211 (1962)
  • Van de Poel, I., Royakkers, L., Zwart, S.D.: Moral Responsibility and the Problem of Many Hands, vol. 29. Routledge, New York (2015)


Author information

Correspondence to Torben Swoboda.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Swoboda, T. (2018). Autonomous Weapon Systems - An Alleged Responsibility Gap. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-96448-5_32


  • DOI: https://doi.org/10.1007/978-3-319-96448-5_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96447-8

  • Online ISBN: 978-3-319-96448-5

