
Assessing Multimodal Interactions with Mixed-Initiative Teams

  • Daniel Barber
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10904)

Abstract

The state of the art in robotics is advancing to support warfighters’ ability to project force and extend their reach across a variety of future missions. Seamless integration of robots with the warfighter will require advancing interfaces from teleoperation to collaboration. The current approach to meeting this requirement is to give tomorrow’s robots communication capabilities modeled on human-to-human interaction through multimodal communication. Though advanced, today’s robots do not yet come close to supporting teaming in dismounted military operations, so simulation is required for developers to assess multimodal interfaces in complex multi-tasking scenarios. This paper describes existing and future simulations that support assessment of multimodal human-robot interaction in dismounted soldier-robot teams.

Keywords

Multimodal interfaces · Human-robot interaction · Simulation · Tactile displays

Acknowledgement

This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-2-0016. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Institute for Simulation and Training, University of Central Florida, Orlando, USA
