Spatial References with Gaze and Pointing in Shared Space of Humans and Robots

  • Patrick Renner
  • Thies Pfeiffer
  • Ipke Wachsmuth
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8684)

Abstract

To solve tasks cooperatively in close interaction with humans, robots need timely updated spatial representations. However, perceptual information about the current position of interaction partners often arrives too late. If robots could anticipate the targets of upcoming manual actions, such as pointing gestures, they would have more time to physically react to human movements and could account for prospective space allocations in their planning.

Many findings support close eye-hand coordination in humans, which could be used to predict gestures by observing eye gaze. However, these effects vary strongly with the context of the interaction. We collect evidence of eye-hand coordination in a natural route-planning scenario in which two agents interact over a map on a table. In particular, we are interested in whether fixations can predict pointing targets and how target distance affects the interlocutor's pointing behavior. We present an automatic method combining marker tracking and 3D modeling that provides eye and gesture measurements in real time.
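The core of such a real-time measurement pipeline is mapping a tracked gaze ray onto the (marker-tracked) table plane to obtain 2D map coordinates for each fixation. The sketch below illustrates only this ray-plane intersection step; the function name, coordinate frame, and the simplified setup (eye position and gaze direction already expressed in the table's frame) are illustrative assumptions, not the method actually used in the paper.

```python
import numpy as np

def gaze_on_plane(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with a plane (e.g. the table surface).

    origin, direction: eye position and gaze direction in the table's
    coordinate frame (as a marker-tracking system might provide them).
    Returns the 3D intersection point, or None if the gaze is parallel
    to the plane or the plane lies behind the viewer.
    """
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # gaze ray is parallel to the table
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # table is behind the viewer
    return origin + t * direction

# Illustrative setup: eye 40 cm above the table (the z = 0 plane),
# gazing forward and downward at 45 degrees.
eye = np.array([0.0, 0.0, 0.4])
gaze = np.array([0.0, 1.0, -1.0])
hit = gaze_on_plane(eye, gaze,
                    plane_point=np.array([0.0, 0.0, 0.0]),
                    plane_normal=np.array([0.0, 0.0, 1.0]))
# hit is [0.0, 0.4, 0.0]: the fixation lands on the map 40 cm
# in front of the eye's vertical projection.
```

With the head pose tracked continuously, running this intersection per gaze sample yields a stream of fixation positions on the map that can be compared against subsequent pointing targets.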

Keywords

shared space, human-human experiment, gaze tracking, gesture prediction, automatic interaction analysis

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Patrick Renner (1)
  • Thies Pfeiffer (2)
  • Ipke Wachsmuth (1)
  1. Artificial Intelligence Group, Bielefeld University, Bielefeld, Germany
  2. Cognitive Interaction Technology Center of Excellence, Bielefeld University, Bielefeld, Germany