Manned-Unmanned Teaming: US Army Robotic Wingman Vehicles

  • Ralph W. Brewer II
  • Eduardo Cerame
  • E. Ray Pursel
  • Anthony Zimmermann
  • Kristin E. Schaefer
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 780)

Abstract

Manned-unmanned teaming is the synchronization of Soldiers, manned and unmanned vehicles, and sensors to achieve improved situational understanding, greater lethality, and improved survivability during military operations. However, because unmanned vehicle autonomy capabilities are constantly advancing, it is difficult to integrate the human team and assess team performance during early design. This work provides an overview of the US Army Wingman program and the human factors integration and assessment capabilities that support improved manned-unmanned teaming performance during joint gunnery operations. The discussion culminates with human integration and team assessment capabilities for interaction with both fielded systems and software-in-the-loop simulation.

Keywords

Wingman · Autonomy · Human factors · Manned-unmanned teaming · Warfighter Machine Interface · Software-in-the-loop · Qualification

Acknowledgments

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Copyright information

© Springer International Publishing AG, part of Springer Nature (outside the USA) 2019

Authors and Affiliations

  • Ralph W. Brewer II (1)
  • Eduardo Cerame (2)
  • E. Ray Pursel (3)
  • Anthony Zimmermann (4)
  • Kristin E. Schaefer (1)
  1. United States Army Research Laboratory, Aberdeen Proving Ground, USA
  2. US Army TARDEC, Warren, USA
  3. Naval Surface Warfare Center, Dahlgren, USA
  4. DCS Corp., Alexandria, USA