Feasible Precautions in Attack and Autonomous Weapons

Abstract

The future of warfare will undoubtedly include autonomous systems capable of making complex decisions without human operator involvement. These systems will react blindingly fast, possess exceptional precision, and operate reliably and consistently without human supervision. While the promise of autonomy seems almost boundless, questions remain about the lawfulness of allowing such systems to select and lethally engage targets on their own. One of the issues consistently raised is whether nations wishing to employ autonomous weapon systems on the battlefield could do so in a manner that complies with the fundamental principles of the law of armed conflict. This chapter addresses the crux of that concern by examining the legal impact these future systems might have on targeting during international armed conflicts. In particular, the chapter focuses on the requirement that nations take feasible precautions in attack and seeks to determine whether a nation employing autonomous weapons on a battlefield could fully comply. First, the chapter defines what is meant by autonomous weapon systems. Second, it examines the technological advances and operational benefits that suggest these weapon systems may become a reality. The emphasis then shifts to the unique challenges such systems present to the requirement to take feasible precautions in attack. Lastly, the author concludes that while the precautions in attack requirement raises valid concerns, the use of autonomous weapons will likely be deemed lawful under many circumstances.

J. Thurnher is a Judge Advocate in the United States Army, assigned to the International and Operational Law Department, Office of the Judge Advocate General. The views expressed are those of the author and should not be understood as necessarily representing those of the United States Army, the United States Department of Defense, or any other government entity.

The original version of this chapter was revised. An erratum to this chapter can be found at DOI 10.1007/978-3-319-67266-3_13.

Notes

1. More than forty nongovernmental organisations have formed the Campaign to Stop Killer Robots, an umbrella organisation dedicated to seeking a comprehensive and pre-emptive ban on the development, production, and use of autonomous weapons. Campaign to Stop Killer Robots (2013). http://www.stopkillerrobots.org/. Accessed 13 December 2016.

2. Human Rights Watch is one of the founding organisations of the coalition. For a full description of their reservations and criticism of autonomous weapon systems, see Human Rights Watch (2012), p. 1.

3. United Nations A/HRC/23/47, p. 21.

4. Convention on Conventional Weapons CCW/MSP/2013/CRP.1, p. 4.

5. In fact, nations such as the United States and the United Kingdom have declared they are not pursuing such weapons other than human-supervised ones. House of Lords Debate 26 March 2013 (The UK Ministry of Defence ‘currently has no intention of developing [weapon] systems that operate without human intervention’.); United States Department of Defense (2012a), p. 3 (The United States has no ‘plans to develop lethal autonomous weapon systems other than human-supervised systems for the purposes of local defense of manned vehicles or installations’.).

6. Krishnan (2009), p. 45.

7. United States Department of Defense (2012c), pp. 13–14.

8. United States Department of Defense (2012c), pp. 13–14.

9. Human Rights Watch (2012), p. 2.

10. Krishnan (2009), p. 43.

11. Krishnan (2009), p. 44.

12. It is conceivable that future advances in artificial intelligence technology may allow systems to possess human-like reasoning. However, it is far from certain that the technology will develop in such a manner, and even Dr. Krishnan contends that any such advances would be unlikely to materialise until well beyond the year 2030. Krishnan (2009), p. 44.

13. Schmitt (2013a), p. 4.

14. Singer (2009), p. 128.

15. For example, the former chief scientist for the United States Air Force postulates that technology currently exists to facilitate ‘fully autonomous military strikes’; Dahm (2012), p. 11.

16. Guarino (2013).

17. Poitras (2012).

18. Guarino (2013). For a more general overview of machine learning capabilities and possibilities, see Russell and Norvig (2010), ch. 18. For a discussion of how computer systems learn, using approaches similar to the way humans learn from examples, see Public Broadcasting Service (2011).

19. von Heinegg (2011), p. 184 (asserting that such mines are ‘quite common and legally uncontested’).

20. Myers (2015).

21. United States Defense Advanced Research Projects Agency (2013). Note, however, that at least initially the vessel is designed to require human approval before launching an attack. The United States Navy is developing similar underwater systems to conduct de-mining operations; Ackerman (2013).

22. Guarino (2013).

23. Guarino (2013).

24. Healey (2013).

25. Guarino (2013).

26. United States Air Force (2009), p. 16 (stating that ‘[a]s autonomy and automation merge, [systems] will be able to swarm … creating a focused, relentless, and scaled attack’). The United States Air Force’s Proliferated Autonomous Weapons may represent an early prototype of future swarming systems. See Singer (2009), p. 232; Alston (2011), p. 43.

27. Singer (2009), p. 74; Kellenberger (2011), p. 27. Note that consensus does not exist as to whether and when general artificial intelligence might become available. Artificial intelligence has previously failed to live up to some expectations. Computer scientist Noel Sharkey doubts that artificial intelligence advances will achieve human-like abilities even in the next 15 years; Sharkey (2011), p. 140.

28. Anderson and Waxman (2013), p. 2.

29. United States Department of Defense (2013), p. 25. Under a heading labelled ‘A Look to the Future’, it explains: ‘Currently personnel costs are the greatest single cost in [the Department of Defense], and unmanned systems must strive to reduce the number of personnel required to operate and maintain the systems. Great strides in autonomy, teaming, multi-platform control, tipping, and cueing have reduced the number of personnel required, but much more work needs to occur.’

30. ‘Enable humans to delegate those tasks that are more effectively done by computer … thus freeing humans to focus on more complex decision making’; United States Department of Defense (2012b), p. 1.

31. Sharkey (2012), p. 110.

32. Singer (2009), p. 128.

33. For example, the United States has expressed an interest in expanding the autonomous features, albeit not the lethal targeting capabilities, of its systems in the future; United States Department of Defense (2012b), pp. 1–3; United States Department of Defense (2013), p. 25.

34. Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion), ICJ Reports 1996, p. 226 (hereinafter Nuclear Weapons); Schmitt and Thurnher (2013), p. 243; Schmitt (2013b), p. 8.

35. Additional Protocol I, Article 57.

36. The International Court of Justice has recognised distinction as a ‘cardinal’ principle of the law of armed conflict. Nuclear Weapons, paras. 78–79.

37. Additional Protocol I, Articles 49, 51–52.

38. Henckaerts and Doswald-Beck (2005), r. 1; Nuclear Weapons, paras. 78–79; Cadwalader (2011), p. 157.

39. See, for example, Human Rights Watch (2012), pp. 30–32.

40. HPCR (2009), r. 39.

41. Henckaerts and Doswald-Beck (2005), r. 15; HPCR (2009), r. 1(q).

42. Additional Protocol I, Articles 51(5)(b), 57(2)(a)(iii).

43. Henckaerts and Doswald-Beck (2005), r. 14; Cadwalader (2011), pp. 157–158.

44. Additional Protocol I, Article 51(5)(b).

45. For a discussion of the collateral damage methodology used by the United States military, see Thurnher and Kelly (2012).

46. For example, Human Rights Watch maintains that an autonomous weapon ‘could not be programmed to duplicate the psychological processes in human judgment that are necessary to assess proportionality.’ Human Rights Watch (2012), p. 33.

47. Additional Protocol I, Article 57(2)(b).

48. United States Department of Defense (2015), r. 5.11.

49. HPCR (2009), r. 38.

50. United States Department of Defense (2015), r. 5.11.

51. For example, the United States has issued statements challenging the notion that this provision of Additional Protocol I reflects customary international law. United States Department of Defense (2015), r. 5.11.5.

52. HPCR (2009), r. 33.

53. Additional Protocol I, Article 57.

54. United States Department of Defense (2015), r. 5.11.2.

55. International Committee of the Red Cross (2013).

56. The United States issued a policy directive in 2012 establishing a strict approval process for any AWS acquisitions or development and mandating that various safety measures be incorporated into future AWS designs. United States Department of Defense (2012c).

Author information

Correspondence to Jeffrey S. Thurnher.

Copyright information

© 2018 Springer International Publishing AG

Cite this chapter

Thurnher, J.S. (2018). Feasible Precautions in Attack and Autonomous Weapons. In: Heintschel von Heinegg, W., Frau, R., Singer, T. (eds) Dehumanization of Warfare. Springer, Cham. https://doi.org/10.1007/978-3-319-67266-3_6

  • DOI: https://doi.org/10.1007/978-3-319-67266-3_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67264-9

  • Online ISBN: 978-3-319-67266-3

  • eBook Packages: Law and Criminology; Law and Criminology (R0)
