Abstract
We are designing and implementing a multi-modal interface to a team of dynamically autonomous robots. The interface combines natural language with gesture: gestures may be natural gestures perceived by a vision system mounted on the robot, or strokes made with a stylus on a Personal Digital Assistant (PDA). In this paper we describe the integrated input modes and one of the theoretical constructs we use to facilitate cooperation and collaboration among members of a team of robots. An integrated context and dialog processing component that incorporates knowledge of spatial relations enables cooperative activity among the multiple agents, both human and robotic.
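The abstract's central idea, grounding a spoken command in an accompanying gesture from either the vision system or the PDA stylus, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `fuse` function, the `Gesture` and `Utterance` types, and the deictic-word list are all hypothetical names introduced here for exposition.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Gesture:
    """A deictic gesture, from either the vision system or the PDA stylus."""
    source: str                  # assumed: "vision" or "stylus"
    target: Tuple[float, float]  # pointed-at location in map coordinates

@dataclass
class Utterance:
    text: str

def fuse(utterance: Utterance, gesture: Optional[Gesture]) -> dict:
    """Resolve a spoken command against an accompanying gesture.

    Deictic words such as "there" are grounded in the gesture's target
    location; if the deixis cannot be resolved, the dialog component
    would ask a clarification question instead of acting.
    """
    deictic = {"there", "here", "that"}
    needs_gesture = any(w in deictic for w in utterance.text.lower().split())
    if needs_gesture:
        if gesture is None:
            return {"status": "clarify", "prompt": "Where do you mean?"}
        return {"status": "ok", "action": "goto", "target": gesture.target}
    return {"status": "ok", "action": utterance.text}

# Example: "Go over there" paired with a stylus tap on the PDA map.
cmd = fuse(Utterance("Go over there"), Gesture("stylus", (4.0, 2.5)))
```

In a fuller system the clarification branch would feed back into the dialog manager, which is one way the context component described above keeps the human and robot collaborating rather than failing silently.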
© 2002 Springer Science+Business Media Dordrecht
Perzanowski, D. et al. (2002). Communicating with Teams of Cooperative Robots. In: Schultz, A.C., Parker, L.E. (eds) Multi-Robot Systems: From Swarms to Intelligent Automata. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2376-3_20
DOI: https://doi.org/10.1007/978-94-017-2376-3_20
Publisher Name: Springer, Dordrecht
Print ISBN: 978-90-481-6046-4
Online ISBN: 978-94-017-2376-3