
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 824)


Abstract

The types of unreliability that an automated system exhibits can affect a user's perception of that automation's overall operational ability. A software program that makes one type of mistake may be judged more harshly than another program that makes a different sort of error, even if both have equal success rates. Here I use a Hidden Object Game to examine people's differing responses to a program that appears either to miss its target objects or to make false alarms. Playing at both high and low clutter levels, participants who believed they were working with an automated system that missed targets decreased their trust in that automation, and judged its performance more harshly, compared to participants who believed the automation was making false alarms. Participants in the combined low clutter and miss condition showed the strongest decrease in trust. When asked to estimate how often the program had been correct, this group also gave it the lowest mean score. These results demonstrate that in a target detection task, automation that misses targets is judged more harshly than automation that errs on the side of false alarms.



Author information


Corresponding author

Correspondence to Alexandra Kaplan.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kaplan, A. (2019). Trust in Imperfect Automation. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds) Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). IEA 2018. Advances in Intelligent Systems and Computing, vol 824. Springer, Cham. https://doi.org/10.1007/978-3-319-96071-5_5
