
A Relational Approach to Content-based Analysis of Motion Capture Data

Chapter in: Human Motion

Part of the book series: Computational Imaging and Vision (CIVI, volume 36)

Motion capture (mocap) systems allow for the tracking and recording of human motions at high spatial and temporal resolutions. The resulting 3D mocap data is used for motion analysis in fields such as sports science, biomechanics, and computer vision, and in particular for motion synthesis in data-driven computer animation. In view of a rapidly growing corpus of motion data, the automatic retrieval, annotation, and classification of such data have become important research topics. Since logically similar motions may exhibit significant spatio-temporal variations, the notion of similarity is of crucial importance in comparing motion data streams. After reviewing various aspects of motion similarity, we discuss, as the main contribution of this chapter, a relational approach to content-based motion analysis, which exploits the explicitly given kinematic model underlying the 3D mocap data. Considering suitable combinations of Boolean relations between specified body points captures the content of a motion while disregarding motion details. Finally, we sketch how such relational features can be used for automatic and efficient segmentation, indexing, retrieval, classification, and annotation of mocap data.
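To make the idea of relational features more concrete, the sketch below maps a single mocap frame to a small vector of Boolean relations, for example "right hand in front of the body's frontal plane". The joint names, the chosen plane, and the two example relations are illustrative assumptions for a hypothetical skeleton, not the authors' exact feature set.

```python
import numpy as np

def plane_feature(p1, p2, p3, q):
    """Boolean relational feature: does joint q lie in front of the oriented
    plane spanned by the 3D joint positions p1, p2, p3? The sign convention
    depends on the ordering of the three points."""
    normal = np.cross(p2 - p1, p3 - p1)        # plane normal from two edge vectors
    normal = normal / np.linalg.norm(normal)   # normalize to unit length
    return float(np.dot(q - p1, normal)) > 0.0 # sign of the distance decides the relation

def relational_features(pose):
    """Map one mocap frame (dict: joint name -> np.array of shape (3,)) to a
    Boolean feature vector. Joint names and relations are hypothetical."""
    # Right wrist in front of the frontal body plane through both hips and the neck?
    f1 = plane_feature(pose["lhip"], pose["rhip"], pose["neck"], pose["rwrist"])
    # Right ankle raised above the left knee (assuming z is the vertical axis)?
    f2 = pose["rankle"][2] > pose["lknee"][2]
    return np.array([f1, f2], dtype=bool)
```

Applied frame by frame, a motion clip becomes a sequence of such Boolean vectors; merging runs of identical vectors into segments is, roughly, the idea behind the adaptive segmentation and indexing sketched at the end of the chapter.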




Copyright information

© 2008 Springer

About this chapter

Cite this chapter

Müller, M., Röder, T. (2008). A Relational Approach to Content-based Analysis of Motion Capture Data. In: Rosenhahn, B., Klette, R., Metaxas, D. (eds) Human Motion. Computational Imaging and Vision, vol 36. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6693-1_20


  • DOI: https://doi.org/10.1007/978-1-4020-6693-1_20

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-6692-4

  • Online ISBN: 978-1-4020-6693-1

