Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework

Abstract

This paper presents Master of Puppets (MOP), an animation-by-demonstration framework that allows users to control the motion of virtual characters (puppets) in real time. In a first step, the user performs the actions that correspond to the character’s motions. These actions are recorded, and a hidden Markov model is used to learn the temporal profile of each action. At runtime, the user controls the motion of the virtual character by performing the specified actions. The advantage of the MOP framework is that it recognizes and follows the progress of the user’s actions in real time: using the forward algorithm, the method predicts how the user’s action evolves, and this evolution drives the corresponding evolution of the character’s motion. The method treats characters as puppets that can perform only one motion at a time, so combining motion segments (motion synthesis) and interpolating between individual motion sequences are not provided as functionalities. The efficiency and flexibility of the framework in animating virtual characters are demonstrated through an implementation and several computer puppetry scenarios.
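
To make the runtime idea concrete, the following is a minimal sketch, in Python, of forward-algorithm progress tracking for a single action: the forward probabilities of a left-to-right hidden Markov model are updated from the live observations, and the expected hidden-state index, normalized to [0, 1], is read off as the action’s progress, which can then set the playback time of the motion clip assigned to that action. All names here (ActionProgressTracker, emission_fn, clip_duration) and the specific progress mapping are illustrative assumptions rather than the authors’ implementation.

    import numpy as np

    # Minimal sketch (not the authors' implementation) of the runtime idea:
    # each recorded action is modeled by a left-to-right HMM, and the forward
    # algorithm maintains a belief over the hidden states from the live input.
    # The expected state index, normalized to [0, 1], estimates how far the
    # action has progressed; this progress value drives the playback time of
    # the single motion clip assigned to that action.

    class ActionProgressTracker:
        def __init__(self, trans, prior, emission_fn):
            self.trans = trans              # (N, N) left-to-right transition matrix
            self.alpha = prior.copy()       # (N,) current (scaled) forward probabilities
            self.emission_fn = emission_fn  # observation -> (N,) likelihood vector

        def update(self, observation):
            """One scaled forward-algorithm step; returns progress in [0, 1]."""
            likelihood = self.emission_fn(observation)      # P(o_t | state)
            self.alpha = likelihood * (self.trans.T @ self.alpha)
            self.alpha /= self.alpha.sum() + 1e-12          # rescale to avoid underflow
            n = len(self.alpha)
            expected_state = float(np.dot(np.arange(n), self.alpha))
            return expected_state / (n - 1)

    # Hypothetical usage: the estimated progress sets the clip's playback time,
    # so the puppet's motion follows the user's demonstration in real time.
    #   progress = tracker.update(current_pose_features)
    #   clip_time = progress * clip_duration

In a full setup, one such tracker could plausibly be run per demonstrated action, with the most likely model identifying which motion the puppet should perform; the paper itself describes the actual recognition and control pipeline.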

Author information

Correspondence to Christos Mousas.

Electronic supplementary material

Supplementary material 1 (mp4 18131 KB)

About this article

Cite this article

Cui, Y., & Mousas, C. (2018). Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework. 3D Research, 9, 5. https://doi.org/10.1007/s13319-018-0158-y

Keywords

  • Computer puppetry
  • Performance animation
  • Character animation
  • Motion control
  • HMM