Abstract
This research examines transparency as a means of enabling trust in automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The aid provided decision support during a complex task in which pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots' trust in the tool: baseline (the existing tool interface), value (the tool provided a numeric value for the likelihood that a particular airport would be suitable for that aircraft), and logic (the tool provided the rationale for its recommendation). Trust was highest in the logic condition, consistent with prior studies in this area. Design implications are discussed in terms of promoting understanding of the rationale behind automated recommendations.
Copyright information
© 2017 Springer International Publishing Switzerland
Cite this paper
Lyons, J.B. et al. (2017). Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines. In: Savage-Knepshield, P., Chen, J. (eds) Advances in Human Factors in Robots and Unmanned Systems. Advances in Intelligent Systems and Computing, vol 499. Springer, Cham. https://doi.org/10.1007/978-3-319-41959-6_11
Print ISBN: 978-3-319-41958-9
Online ISBN: 978-3-319-41959-6