Conducting Polyphonic Human-Robot Communication: Mastering Crescendos and Diminuendos in Transparency

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1206)

Abstract

Intelligent agent transparency is an important tool for improving trust and reliance calibration in human-agent teaming. Further, flexibility in the amount and type of transparency information provided to human collaborators may allow for the accommodation of known biases (e.g., misuse and disuse). To understand the utility of transparency manipulation, it is important to consider the context in which the manipulation occurs. This report considers two contextual factors that might influence the impact of transparency: hysteresis and face threat. We describe how each factor operates and provide a short demonstration of its effects. Outcomes show that the order of transparency presentation and face threat affect the impact of transparency information on performance, reliance, and trust. This demonstration makes the case that an adaptive transparency paradigm should account for other aspects of human-agent interaction to be applied successfully.

Keywords

Intelligent agents · Transparency · Human factors · Etiquette · Human-robot interaction · Human-autonomy teaming · Autonomy

Acknowledgments

This research is supported by the U.S. Department of Defense’s Autonomy Research Pilot Initiative. Our views and conclusions do not represent the official policies or position, either expressed or implied, of the U.S. Army Research Laboratory, or U.S. Government. We thank contributors Olivia Newton, Daniel Barber, Jonathon Harris, Jack Hart, Alexis San Javier, Harrison Spellman, Gloria Calhoun, and Mark Draper.

Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Institute for Simulation and Training, University of Central Florida, Orlando, USA
  2. Department of Management, University of Alabama, Tuscaloosa, USA
  3. U.S. Army Research Laboratory, Aberdeen Proving Ground, USA
