Displaced Interactions in Human-Automation Relationships: Transparency over Time

  • Christopher A. Miller
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10906)

Abstract

Transparency (roughly, the provision of information about what an automated system is doing and why, potentially at multiple levels of abstraction and goal directedness) has repeatedly been shown to improve human-machine interaction and performance, as well as human acceptance and trust. Nevertheless, there is a fundamental problem with providing transparency information at the time of action execution: the excessive human workload that typically motivates the inclusion of automated systems may not permit the absorption of that information. We propose "displacing" the provision of transparency information in time and/or space away from the moment of execution, and we show how this approach is tied to beneficial findings for pre-mission planning and post-mission debriefing and explanation in human-automation interaction.

Keywords

Transparency · Explanation · Debriefing · Mission planning · Team interactions · Trust · Cognitive workload

Notes

Acknowledgments

I am indebted to Dr. Jesse Chen for providing a forum for initial thoughts on this topic, and to Rao Kambhampati for the insight that explanations generally need to focus only on mismatches in mental models.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Smart Information Flow Technologies (SIFT), Minneapolis, USA
