
Bidirectional Communication for Effective Human-Agent Teaming

  • Amar R. Marathe
  • Kristin E. Schaefer
  • Arthur W. Evans
  • Jason S. Metcalfe
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10909)

Abstract

Artificial intelligence research is reaching a point where machines can learn, adapt, and dynamically make decisions, either independently or in collaboration with human team members. With such technological advancements on the horizon, there will come a mandate to develop techniques for deploying effective human-agent teams. One key challenge to the development of effective teaming has been enabling a shared, dynamic understanding of the mission space and a basic knowledge of the states and intents of other teammates. Bidirectional communication is an approach that fosters communication between human and intelligent-agent team members to improve mutual understanding and enable effective task coordination. This session focuses on current research and scientific gaps in three areas necessary to advance the field of bidirectional communication between human and intelligent-agent team members. First, intelligent agents must be capable of understanding the state and intent of the human team member. Second, human team members must be capable of understanding the capabilities and intent of the intelligent agent. Finally, for the entire system to work, it must effectively integrate information from, and coordinate behaviors across, all team members. Together, these three areas will enable future human-agent teams to develop a shared understanding of the environment as well as a mutual understanding of each other, thereby enabling truly collaborative human-agent teams.

Keywords

Automation · Autonomy · Robot · Mixed-initiative teams · Human automation interaction · Bidirectional communication

Notes

Acknowledgements

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


Copyright information

© 2018. This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection.

Authors and Affiliations

  • Amar R. Marathe¹
  • Kristin E. Schaefer¹
  • Arthur W. Evans¹
  • Jason S. Metcalfe¹

  1. Human Research and Engineering Directorate, United States Army Research Laboratory, Aberdeen Proving Ground, USA
