
Motion Capture Automated Customized Presets

  • Conference paper
  • In: Intelligent Human Systems Integration 2019 (IHSI 2019)

Abstract

Motion capture technologies transfer coordinate data from human body movement and locomotion to a digital structure in order to move an avatar according to the actions performed by a person. The capture produces digital animation function curves, marking each captured frame as a keyframe on the timeline. These keyframes create an excess of short movements as they attempt to fit the coordinates to a virtual character distinct from the real person who originated them, making the avatar quiver each time it performs an action. For animation purposes, producing visually harmonic movements requires removing the excess keyframes manually. The present study proposes an automated, scripted method that reduces the number of keyframes while preserving the shapes of the function curves, in order to customize the aesthetic gestural properties of characters animated by MoCap. A graphic comparison from before and after applying the proposed automated customization is presented.
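The paper's own script is not reproduced on this page, but the core idea it describes (thinning keyframes while keeping the shape of a function curve) can be illustrated with a standard curve-simplification pass. The sketch below is an assumption, not the authors' method: it applies a Ramer-Douglas-Peucker-style tolerance test to a hypothetical list of `(frame, value)` keyframes, dropping interior keyframes whose removal would not change the curve by more than `tolerance`.

```python
# Minimal sketch of keyframe reduction that preserves curve shape.
# `curve` is a list of (frame, value) keyframes from one animation
# function curve; `tolerance` bounds how far the thinned curve may
# deviate from the original. Names and tolerance value are illustrative.

def reduce_keyframes(curve, tolerance=0.1):
    if len(curve) < 3:
        return list(curve)
    (x0, y0), (x1, y1) = curve[0], curve[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    # Find the interior keyframe farthest from the chord between endpoints.
    best_i, best_d = 0, 0.0
    for i in range(1, len(curve) - 1):
        x, y = curve[i]
        d = abs(dy * (x - x0) - dx * (y - y0)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tolerance:
        # All interior keyframes are redundant within the tolerance.
        return [curve[0], curve[-1]]
    # Otherwise keep that keyframe and recurse on both halves.
    left = reduce_keyframes(curve[:best_i + 1], tolerance)
    right = reduce_keyframes(curve[best_i:], tolerance)
    return left[:-1] + right

# Example: a noisy, near-linear run of captured keyframes collapses
# to its two endpoints, removing the quiver between them.
curve = [(0, 0.0), (1, 1.02), (2, 1.98), (3, 3.01), (4, 4.0)]
print(reduce_keyframes(curve, tolerance=0.1))  # [(0, 0.0), (4, 4.0)]
```

In a production pipeline this pass would run per channel (e.g. per joint rotation axis), with the tolerance acting as the customizable aesthetic parameter: a larger tolerance yields sparser, smoother gestures, a smaller one stays closer to the raw capture.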



Author information

Correspondence to Wiliam Machado de Andrade.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

de Andrade, W.M., Nishida, J.K., Vieira, M.L.H., Prim, G.S., Boehs, G.E. (2019). Motion Capture Automated Customized Presets. In: Karwowski, W., Ahram, T. (eds) Intelligent Human Systems Integration 2019. IHSI 2019. Advances in Intelligent Systems and Computing, vol 903. Springer, Cham. https://doi.org/10.1007/978-3-030-11051-2_16
