Abstract
Motion capture (MoCap) technologies transfer coordinate data from human body movement and locomotion to a digital structure, moving an avatar according to the actions performed by a person. The capture creates digital animation function curves, marking each captured frame as a keyframe on the timeline. These keyframes produce an excess of small movements as the system corrects the coordinates for a virtual character whose proportions differ from those of the real person who originated them, making the avatar quiver whenever it performs an action. For animation purposes, producing visually harmonious movement requires manually removing the excess keyframes. The present study proposes an automated, scripted method to reduce the number of keyframes while preserving the shapes of the function curves, allowing animators to customize the aesthetic gestural properties of characters animated by MoCap. A graphical comparison of the curves before and after applying the proposed automated customization is presented.
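The core operation the abstract describes, removing keyframes while keeping the shape of a function curve, can be sketched with a standard curve-simplification technique. The snippet below is a minimal illustration using the Ramer-Douglas-Peucker algorithm on plain `(frame, value)` pairs; it is an assumption for illustration only, not the authors' script, which targets a MoCap package's keyframe API rather than raw tuples.

```python
def rdp(points, epsilon):
    """Drop keyframes whose removal deviates the curve by less than
    `epsilon`, preserving the curve's overall shape.

    `points` is a list of (frame, value) pairs in frame order.
    """
    if len(points) < 3:
        return list(points)
    (t0, v0), (t1, v1) = points[0], points[-1]
    dt, dv = t1 - t0, v1 - v0
    norm = (dt * dt + dv * dv) ** 0.5
    # Find the interior keyframe farthest from the chord between endpoints.
    max_d, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        t, v = points[i]
        d = abs(dv * t - dt * v + t1 * v0 - v1 * t0) / norm
        if d > max_d:
            max_d, idx = d, i
    if max_d <= epsilon:
        # All interior keyframes lie close to the chord: remove them.
        return [points[0], points[-1]]
    # Otherwise keep the farthest keyframe and recurse on both halves.
    left = rdp(points[: idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right

# Example: a densely keyed, perfectly linear curve collapses to its endpoints,
# while a genuine spike in the motion survives the reduction.
dense = [(f, 0.5 * f) for f in range(10)]
print(rdp(dense, 0.01))  # -> [(0, 0.0), (9, 4.5)]
```

The tolerance `epsilon` plays the role of the customizable preset: a small value keeps the motion close to the raw capture, while a larger value yields sparser, smoother curves.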
© 2019 Springer Nature Switzerland AG
Cite this paper
de Andrade, W.M., Nishida, J.K., Vieira, M.L.H., Prim, G.S., Boehs, G.E. (2019). Motion Capture Automated Customized Presets. In: Karwowski, W., Ahram, T. (eds) Intelligent Human Systems Integration 2019. IHSI 2019. Advances in Intelligent Systems and Computing, vol 903. Springer, Cham. https://doi.org/10.1007/978-3-030-11051-2_16
DOI: https://doi.org/10.1007/978-3-030-11051-2_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-11050-5
Online ISBN: 978-3-030-11051-2
eBook Packages: Intelligent Technologies and Robotics (R0)