Conducting Polyphonic Human-Robot Communication: Mastering Crescendos and Diminuendos in Transparency
Intelligent agent-based transparency is an important tool for improving trust and reliance calibration in human-agent teaming. Further, flexibility in the amount and type of transparency information provided to human collaborators may allow for the accommodation of known biases (e.g., misuse and disuse). To understand the utility of transparency manipulation, it is important to consider the context in which the manipulation occurs. This report considers two contextual factors that might influence the impact of transparency: hysteresis and face threat. We describe the nature of these factors' influence and provide a short demonstration of their effects. Outcomes show that the order of transparency and face threat affects the impact of transparency information on performance, reliance, and trust. This demonstration makes the case that an adaptive transparency paradigm should consider other aspects of human-agent interaction for successful application.
Keywords: Intelligent agents · Transparency · Human factors · Etiquette · Human-robot interaction · Human-autonomy teaming · Autonomy
This research is supported by the U.S. Department of Defense’s Autonomy Research Pilot Initiative. Our views and conclusions do not represent the official policies or position, either expressed or implied, of the U.S. Army Research Laboratory, or U.S. Government. We thank contributors Olivia Newton, Daniel Barber, Jonathon Harris, Jack Hart, Alexis San Javier, Harrison Spellman, Gloria Calhoun, and Mark Draper.