Abstract
Intelligent agent-based transparency is an important tool for improving trust and reliance calibration in human-agent teaming. Moreover, flexibility in the amount and type of transparency information provided to human collaborators may allow for the accommodation of known biases (e.g., misuse and disuse of automation). To understand the utility of transparency manipulation, it is important to consider the context in which the manipulation occurs. This report considers two contextual factors that may influence the impact of transparency: hysteresis and face threat. We describe the nature of each factor's influence and provide a short demonstration of its effects. Outcomes show that the order of transparency presentation and face threat affect the impact of transparency information on performance, reliance, and trust. This demonstration makes the case that an adaptive transparency paradigm should account for other aspects of human-agent interaction to be applied successfully.
Acknowledgments
This research was supported by the U.S. Department of Defense's Autonomy Research Pilot Initiative. Our views and conclusions do not represent the official policies or positions, either expressed or implied, of the U.S. Army Research Laboratory or the U.S. Government. We thank contributors Olivia Newton, Daniel Barber, Jonathon Harris, Jack Hart, Alexis San Javier, Harrison Spellman, Gloria Calhoun, and Mark Draper.
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wohleber, R.W., Stowers, K., Chen, J.Y.C., Barnes, M. (2021). Conducting Polyphonic Human-Robot Communication: Mastering Crescendos and Diminuendos in Transparency. In: Cassenti, D., Scataglini, S., Rajulu, S., Wright, J. (eds) Advances in Simulation and Digital Human Modeling. AHFE 2020. Advances in Intelligent Systems and Computing, vol 1206. Springer, Cham. https://doi.org/10.1007/978-3-030-51064-0_2
Print ISBN: 978-3-030-51063-3
Online ISBN: 978-3-030-51064-0
eBook Packages: Intelligent Technologies and Robotics (R0)