Conducting Polyphonic Human-Robot Communication: Mastering Crescendos and Diminuendos in Transparency

  • Conference paper
  • In: Advances in Simulation and Digital Human Modeling (AHFE 2020)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1206)

Abstract

Intelligent agent-based transparency is an important tool for improving trust and reliance calibration in human-agent teaming. Further, flexibility in the amount and type of transparency information provided to human collaborators may allow for the accommodation of known biases (e.g., misuse and disuse). To understand the utility of transparency manipulation, it is important to consider the context in which the manipulation occurs. This report considers two contextual factors that might influence the impact of transparency: hysteresis and face threat. We describe the nature of their influence and provide a short demonstration of their effects. Outcomes show that the order of transparency presentation and face threat affect the impact of transparency information on performance, reliance, and trust. This demonstration makes the case that an adaptive transparency paradigm should consider other aspects of human-agent interaction for successful application.



Acknowledgments

This research is supported by the U.S. Department of Defense's Autonomy Research Pilot Initiative. Our views and conclusions do not represent the official policies or positions, either expressed or implied, of the U.S. Army Research Laboratory or the U.S. Government. We thank contributors Olivia Newton, Daniel Barber, Jonathon Harris, Jack Hart, Alexis San Javier, Harrison Spellman, Gloria Calhoun, and Mark Draper.

Author information

Correspondence to Ryan W. Wohleber.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this paper


Wohleber, R.W., Stowers, K., Chen, J.Y.C., Barnes, M. (2021). Conducting Polyphonic Human-Robot Communication: Mastering Crescendos and Diminuendos in Transparency. In: Cassenti, D., Scataglini, S., Rajulu, S., Wright, J. (eds) Advances in Simulation and Digital Human Modeling. AHFE 2020. Advances in Intelligent Systems and Computing, vol 1206. Springer, Cham. https://doi.org/10.1007/978-3-030-51064-0_2
