
Movement-Based Communication for Humanoid-Human Interaction

Reference work entry in Humanoid Robotics: A Reference

Abstract

Humans are very good at interacting and collaborating with each other. This ability rests on mutual understanding and is supported by a continuous exchange of information that is only minimally mediated by language. Most messages are covertly embedded in the way the two partners move their eyes and bodies. It is this silent, movement-based flow of information that enables seamless coordination: it occurs without the two partners’ awareness and without delays in the interaction, since it provides the possibility to anticipate the partner’s needs and intentions. Humanoid robots, thanks to their shape and motor structure, could greatly benefit from being able to send analogous cues through their own behavior, as well as from understanding similar signals covertly sent by their human partners. In this chapter we describe the main categories of implicit signals for interaction, namely those conveyed by oculomotor actions and by the movement of the body. We first discuss the neural systems supporting the understanding of these signals in humans, and we motivate the centrality of these mechanisms for humanoid robotics. By the end of the chapter, the reader should have a clear picture of what an implicit signal is, where in the human brain it is encoded, and why a humanoid robot should be able both to send such signals and to read them in its human partners.
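To make the idea of exploiting an implicit, movement-based cue concrete, the sketch below shows one hypothetical way a robot could anticipate a partner’s reach target from gaze alone, relying on the well-documented tendency of the eyes to land on an object before the hand does. It is a minimal illustration and not an implementation from the chapter; all names, coordinates, and thresholds (e.g., the 0.3 s dwell time and 5 cm fixation radius) are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Optional, Tuple
import math

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze point on the shared workspace plane (metres)
    y: float

def fixated_object(gaze: GazeSample,
                   objects: Dict[str, Tuple[float, float]],
                   radius: float = 0.05) -> Optional[str]:
    """Return the object the gaze point currently falls on, if any."""
    for name, (ox, oy) in objects.items():
        if math.hypot(gaze.x - ox, gaze.y - oy) <= radius:
            return name
    return None

def anticipate_target(gaze_stream: Iterable[GazeSample],
                      objects: Dict[str, Tuple[float, float]],
                      dwell: float = 0.3) -> Optional[str]:
    """Predict the reach target once gaze has dwelled on one object long enough."""
    current, since = None, 0.0
    for g in gaze_stream:
        obj = fixated_object(g, objects)
        if obj != current:
            # Gaze moved to a new region: restart the dwell timer.
            current, since = obj, g.t
        elif obj is not None and g.t - since >= dwell:
            return obj  # anticipated target, before the hand gets there
    return None

# Hypothetical usage: two objects on a table; gaze settles on the cup early on.
objects = {"cup": (0.20, 0.40), "box": (-0.15, 0.35)}
stream = [GazeSample(t=0.1 * i, x=0.20, y=0.40) for i in range(6)]
print(anticipate_target(stream, objects))  # -> cup
```

A prediction of this kind would let the robot prepare its own complementary action (for example, orienting toward the cup or pre-shaping its hand) before the human movement is complete, which is exactly the anticipatory benefit the chapter attributes to implicit signals.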



Acknowledgments

This work was supported by the European CODEFROR project (FP7-PIRSES-2013-612555). The authors thank Oskar Palinko, Alessia Vignolo, Nicoletta Noceti, Francesca Odone, Laura Patanè, and all the other collaborators for their support.

Author information


Corresponding author

Correspondence to Giulio Sandini.


Copyright information

© 2019 Springer Nature B.V.

About this entry


Cite this entry

Sandini, G., Sciutti, A., Rea, F. (2019). Movement-Based Communication for Humanoid-Human Interaction. In: Goswami, A., Vadakkepat, P. (eds) Humanoid Robotics: A Reference. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-6046-2_138

