Definition
A gesture is a form of nonvocal communication in which visible bodily actions communicate particular messages. Gestures include movements of body parts such as the hands, arms, fingers, head, and legs. One type comprises gestures that convey meaning by themselves and are assumed to be performed deliberately by the speaker; these are conventionalized symbols and are strongly culture-dependent. Another type comprises conversational gestures, which accompany speech but do not carry its semantic content; an example is simple, repetitive, rhythmic movements such as beats.
A posture interface conveys information about interpersonal relations, personality traits (e.g., confidence, submissiveness, and openness), social standing, and current emotional states through the position and orientation of specific body parts, which can be expressed either in a fixed coordinate on...
Notes
1. Velocity, acceleration, duration of the movement, and finger motion range are reported as important features in the perception of affect.
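The note above names velocity, acceleration, movement duration, and motion range as features relevant to affect perception. As an illustration only (not from the entry), such kinematic features could be extracted from tracked motion data roughly as follows; the function name and the exact feature set are assumptions for this sketch:

```python
import numpy as np

def kinematic_affect_features(positions, dt):
    """Compute kinematic features often reported as cues for perceived
    affect: peak speed, peak acceleration, movement duration, and motion
    range. `positions` is an (N, D) array of joint or fingertip
    positions sampled at a fixed interval of `dt` seconds."""
    positions = np.asarray(positions, dtype=float)
    vel = np.gradient(positions, dt, axis=0)   # finite-difference velocity
    acc = np.gradient(vel, dt, axis=0)         # finite-difference acceleration
    speed = np.linalg.norm(vel, axis=1)
    return {
        "peak_speed": float(speed.max()),
        "peak_accel": float(np.linalg.norm(acc, axis=1).max()),
        "duration": (len(positions) - 1) * dt,              # total time in seconds
        "motion_range": float(np.ptp(positions, axis=0).max()),  # largest excursion
    }

# Example: a smooth 1-D reaching motion (0 -> 0.3 m -> 0) sampled at 100 Hz
t = np.linspace(0.0, 1.0, 101)
traj = np.column_stack([0.3 * np.sin(np.pi * t)])
feats = kinematic_affect_features(traj, dt=0.01)
```

In a recognition pipeline, such features would typically be computed per movement segment and fed to a classifier or regressor over an affect space.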
© 2020 Springer-Verlag GmbH Germany, part of Springer Nature
Lee, D. (2020). Gesture, Posture, Facial Interfaces. In: Ang, M., Khatib, O., Siciliano, B. (eds) Encyclopedia of Robotics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41610-1_25-1