
Artificial intelligence, ethics and human values: the cases of military drones and companion robots

  • Thibault de Swarte (corresponding author)
  • Omar Boufous
  • Paul Escalle
Original Article

Abstract

Can artificial intelligence (AI) be more ethical than human intelligence? Can it respect human values better than a human can? This article examines some of the issues that AI raises with respect to ethics. A utilitarian approach can provide a solution, in particular one that draws on agent-based theory. We have chosen two extreme cases: combat drones, which are vectors of death, and life-supporting companion robots. The ethics of AI and unmanned aerial vehicles (UAVs) must be studied on the basis of military ethics and of the human values at stake in combat. Although companion robots are not programmed to hurt humans or to harm their dignity, they can potentially endanger the social and moral integrity of the people they accompany, as well as their physical integrity. An important ethical condition is that companion robots help the nursing staff take better care of patients without replacing them.
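
The abstract appeals to a utilitarian, agent-based approach. As a purely illustrative sketch, and not the model developed in this article, the following Python snippet shows how such an agent could rank candidate actions with a hand-written utility function in which expected harm to humans and to human dignity is weighted far more heavily than mission value; the action names, weights and scores are hypothetical assumptions.

    # Minimal, hypothetical sketch of a utilitarian agent; not the model
    # developed in the article. Actions, weights and scores are illustrative only.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Action:
        name: str
        mission_value: float    # assumed expected benefit of the action (0..1)
        harm_to_humans: float   # assumed expected harm to humans (0..1)
        dignity_cost: float     # assumed expected cost to human dignity (0..1)

    def utility(action: Action, harm_weight: float = 10.0, dignity_weight: float = 5.0) -> float:
        """Aggregate expected utility; harms weigh far more than mission gain."""
        return (action.mission_value
                - harm_weight * action.harm_to_humans
                - dignity_weight * action.dignity_cost)

    def choose(actions: List[Action]) -> Action:
        """Pick the candidate action with the highest expected utility."""
        return max(actions, key=utility)

    if __name__ == "__main__":
        options = [
            Action("engage target", mission_value=0.9, harm_to_humans=0.4, dignity_cost=0.1),
            Action("hold and observe", mission_value=0.3, harm_to_humans=0.0, dignity_cost=0.0),
            Action("withdraw", mission_value=0.1, harm_to_humans=0.0, dignity_cost=0.0),
        ]
        best = choose(options)
        print(f"Selected action: {best.name} (utility = {utility(best):.2f})")

Under these assumed weights the agent prefers the harmless "hold and observe" option even though "engage target" has the larger mission value, which is the kind of trade-off a utilitarian framing forces the designer to make explicit.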

Keywords

Ethics · Artificial intelligence · Human values · Companion robots · Military drones · UAV



Copyright information

© International Society of Artificial Life and Robotics (ISAROB) 2019

Authors and Affiliations

  • Thibault de Swarte (corresponding author)¹
  • Omar Boufous¹
  • Paul Escalle¹
  1. IMT Atlantique, LASCO Laboratory, Rennes, France
