Trust in Imperfect Automation

  • Alexandra Kaplan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 824)

Abstract

The type of unreliability an automated system exhibits can affect a user's perception of that automation's overall operational ability. A software program that makes one type of mistake might be judged more harshly than a program that makes a different sort of error, even if both have equal success rates. Here I use a Hidden Object Game to examine how people respond to a program that appears either to miss its target objects or to make false alarms. Across both high and low clutter levels, participants who believed they were working with an automated system that missed targets lowered their trust in that automation and judged its performance more harshly than participants who believed the automation was making false alarms. Participants in the combined low-clutter and miss condition showed the strongest decrease in trust. When asked to estimate how often the program had been correct, this group also gave it the lowest mean score. These results demonstrate that in a target detection task, automation that misses targets is judged more harshly than automation that errs on the side of false alarms.
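As a concrete illustration of the "equal success rates" point, the following minimal Python sketch uses hypothetical trial counts (not data from this study): one detector errs only by missing targets, the other only by raising false alarms, yet both are correct on the same fraction of trials.

def accuracy(hits, misses, false_alarms, correct_rejections):
    # Fraction of all trials on which the detector's report was correct.
    correct = hits + correct_rejections
    total = hits + misses + false_alarms + correct_rejections
    return correct / total

# 100 trials each: 50 target-present, 50 target-absent (hypothetical counts).
miss_prone = dict(hits=40, misses=10, false_alarms=0, correct_rejections=50)
fa_prone = dict(hits=50, misses=0, false_alarms=10, correct_rejections=40)

print(accuracy(**miss_prone))  # 0.9
print(accuracy(**fa_prone))    # 0.9 -- same success rate, opposite error profile

Under these assumed counts both systems score 0.9, so any difference in how users trust them must come from the kind of error, not the amount.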

Keywords

Automation · Trust · Reliability

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Central Florida, Orlando, USA