Body Movement Analysis and Recognition

  • Yang Xiao
  • Hui Liang
  • Junsong Yuan
  • Daniel Thalmann
Chapter
Part of the Human–Computer Interaction Series book series (HCIS)

Abstract

This chapter addresses a nonverbal mode of communication for human–robot interaction based on understanding human upper body gestures. A human–robot interaction system built on a novel combination of sensors is proposed. It allows a person to interact with a humanoid social robot through natural body language, and the robot can understand the meaning of human upper body gestures and express itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures, including gestures that involve human–object interactions, is used for communication. These gestures are characterized by head, arm, and hand posture information: a CyberGlove II captures the hand posture, which is combined with the head and arm posture information captured by a Microsoft Kinect, forming a new sensor solution for human gesture capture. Based on the body posture data, an effective, real-time human gesture recognition method is proposed. For the experiments, a human body gesture dataset was built; the results demonstrate the effectiveness and efficiency of the proposed approach.
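The core idea, fusing skeleton-level head/arm cues from the Kinect with glove-level hand posture, can be pictured as a simple feature concatenation followed by matching against labelled training gestures. The sketch below is a minimal illustration under stated assumptions, not the chapter's actual method: it assumes the 22-sensor CyberGlove II model, a hypothetical skeleton dictionary of Kinect joint positions, and a plain 1-nearest-neighbour classifier standing in for the recognition method reported in the chapter.

```python
# A minimal, hypothetical sketch of the sensor-fusion idea: Kinect upper-body
# joint positions and CyberGlove joint angles are concatenated into a single
# feature vector, which is then matched against labelled training gestures.
# All names, dimensions, and the 1-nearest-neighbour rule are illustrative
# assumptions, not the method proposed in the chapter.
import numpy as np

UPPER_BODY_JOINTS = ["head", "shoulder_left", "elbow_left", "hand_left",
                     "shoulder_right", "elbow_right", "hand_right"]  # assumed joint subset
NUM_GLOVE_SENSORS = 22  # assumes the 22-sensor CyberGlove II model


def fuse_posture(skeleton, glove_angles):
    """Build one feature vector from a Kinect skeleton frame and glove readings.

    skeleton: dict mapping joint names to 3-D positions, must contain "torso".
    glove_angles: array of NUM_GLOVE_SENSORS hand joint angles.
    """
    torso = np.asarray(skeleton["torso"], dtype=float)
    # Express head/arm joints relative to the torso so the feature does not
    # depend on where the person stands in front of the sensor.
    arm_head = np.concatenate(
        [np.asarray(skeleton[j], dtype=float) - torso for j in UPPER_BODY_JOINTS]
    )  # 7 joints x 3 coordinates = 21 values
    glove = np.asarray(glove_angles, dtype=float)
    return np.concatenate([arm_head, glove])  # 21 + 22 = 43-dimensional feature


def recognize(feature, train_features, train_labels):
    """Return the label of the nearest training gesture (Euclidean distance)."""
    dists = np.linalg.norm(train_features - feature, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In such a setup, a per-gesture training set would be collected in advance; each incoming frame (or a temporal summary of frames) is then fused and matched in real time.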

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Yang Xiao (1)
  • Hui Liang (2)
  • Junsong Yuan (2)
  • Daniel Thalmann (2)
  1. School of Automation, Huazhong University of Science and Technology, Wuhan, China
  2. BeingThere Centre, Nanyang Technological University, Singapore, Singapore
