Keeping the Driver in the Loop: The ‘Other’ Ethics of Automation

  • Victoria Banks
  • Emily Shaw
  • David R. Large
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 823)

Abstract

Automated vehicles are expected to revolutionise everyday travel, with anticipated benefits including improved road safety, comfort and mobility. However, they also raise complex ethical challenges. Ethical debates have primarily centred on the moral judgements that autonomous vehicles must make in safety-critical situations, with proposed solutions typically grounded in deontological principles or consequentialism. Yet ethics should also be acknowledged in the design, development and deployment of partially-automated systems, which invariably rely upon the human driver to monitor the system and intervene when required, even though he or she may be ill-prepared to do so. In this literature review, we explore the lesser-discussed ethics associated with the role of, and expectations placed upon, the human driver in partially-automated vehicles. We discuss factors such as the marketing and deployment of these vehicles and their impact upon the driver’s development of trust in, and complacency with, automated functionality, concluding that the human driver must be kept ‘in the loop’ at all times.

Keywords

Ethics · Automated driving · Literature review

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Human Factors Engineering, Transportation Research Group, Southampton, UK
  2. Human Factors Research Group, University of Nottingham, Nottingham, UK