Gesture, Posture, Facial Interfaces


Synonyms

Bodily expression; Body language; Kinesics; Nonverbal communication interfaces

Definition

A gesture is a form of nonvocal communication in which visible bodily actions communicate particular messages. Gestures include movements made with body parts such as the hands, arms, fingers, head, and legs. One type comprises gestures that convey meaning by themselves and are assumed to be performed deliberately by the speaker; these are conventionalized symbols and are strongly culture-dependent. Another type comprises conversational gestures, which accompany speech but do not carry the semantic content of the accompanying utterance; an example is simple, repetitive, rhythmic movements known as beats.
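To make the distinction concrete, here is a minimal sketch of how the two gesture categories might be represented in software. The class names and the emblem lexicon are hypothetical illustrations of the taxonomy described above, not part of the entry.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class GestureType(Enum):
    EMBLEM = "emblem"                  # meaningful on its own, deliberate
    CONVERSATIONAL = "conversational"  # accompanies speech (e.g., beats)

@dataclass
class Gesture:
    meaning: str
    gesture_type: GestureType
    culture: Optional[str] = None      # emblems are strongly culture-dependent

# Hypothetical emblem lexicon for one culture: the same bodily action can
# carry a different (or no) meaning in another culture.
EMBLEMS_US = {
    "thumbs_up": Gesture("approval", GestureType.EMBLEM, culture="US"),
    "head_nod": Gesture("agreement", GestureType.EMBLEM, culture="US"),
}

# A beat has no lexical meaning of its own, hence no culture entry here.
beat = Gesture("(none; rhythmic emphasis)", GestureType.CONVERSATIONAL)
```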

A posture interface conveys information about interpersonal relations, personality traits (e.g., confidence, submissiveness, and openness), social standing, and current emotional states through the position and orientation of specific body parts, which can be expressed either in a fixed coordinate on...
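The preview cuts off here, but its premise, affect carried by the position and orientation of specific body parts, can be illustrated with a small sketch. Everything below (the class, the function name, the flat-vector layout) is a hypothetical illustration and not the entry's method.

```python
import numpy as np
from dataclasses import dataclass
from typing import Dict

@dataclass
class BodyPartPose:
    """Pose of one body part (e.g., head, torso, a hand)."""
    position: np.ndarray      # (3,) position in a reference frame
    orientation: np.ndarray   # (3, 3) rotation matrix in the same frame

def posture_vector(parts: Dict[str, BodyPartPose]) -> np.ndarray:
    """Stack per-part positions and orientations into a single
    descriptor that a posture/affect classifier could consume."""
    chunks = []
    for name in sorted(parts):                   # fixed part ordering
        pose = parts[name]
        chunks.append(pose.position)
        chunks.append(pose.orientation.ravel())  # 9 entries per rotation
    return np.concatenate(chunks)

# Usage with dummy data: two parts yield a 24-dimensional descriptor.
parts = {
    "head": BodyPartPose(np.zeros(3), np.eye(3)),
    "torso": BodyPartPose(np.array([0.0, 0.0, -0.3]), np.eye(3)),
}
print(posture_vector(parts).shape)  # (24,)
```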


Notes

  1. Velocity, acceleration, duration of the movement, and finger motion range are reported as important features in the perception of affect.
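As a concrete reading of this note, the sketch below computes those cues from a sampled trajectory. It assumes positions from a motion-capture or depth sensor plus optional finger joint angles; the function name and exact feature set are illustrative, not taken from the entry.

```python
import numpy as np

def affect_features(positions, timestamps, finger_angles=None):
    """Kinematic cues from the note: velocity, acceleration, movement
    duration, and (optionally) finger motion range."""
    positions = np.asarray(positions, dtype=float)  # (T, 3) samples
    t = np.asarray(timestamps, dtype=float)         # (T,) seconds

    # Finite-difference velocity and acceleration along the trajectory.
    vel = np.gradient(positions, t, axis=0)
    acc = np.gradient(vel, t, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)

    features = {
        "mean_speed": float(speed.mean()),
        "peak_speed": float(speed.max()),
        "mean_acceleration": float(acc_mag.mean()),
        "duration": float(t[-1] - t[0]),
    }
    if finger_angles is not None:                   # (T, J) joint angles
        finger_angles = np.asarray(finger_angles, dtype=float)
        motion_range = finger_angles.max(axis=0) - finger_angles.min(axis=0)
        features["finger_motion_range"] = float(motion_range.mean())
    return features
```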


Author information


Correspondence to Dongheui Lee.



Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

About this entry


Cite this entry

Lee, D. (2020). Gesture, Posture, Facial Interfaces. In: Ang, M., Khatib, O., Siciliano, B. (eds) Encyclopedia of Robotics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41610-1_25-1


  • DOI: https://doi.org/10.1007/978-3-642-41610-1_25-1


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41610-1

  • Online ISBN: 978-3-642-41610-1

  • eBook Packages: Springer Reference Engineering, Reference Module Computer Science and Engineering
