3D Research, 8:25 (2017)

Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

  • Christos Mousas
  • Christos-Nikolaos Anagnostopoulos

3DR Express


Abstract

This paper presents a methodology for estimating a character’s finger motion from motion features of the character’s hand. First, the motion data is segmented into discrete phases. A number of motion features are then computed for each motion segment of the character’s hand. These features are pre-processed using restricted Boltzmann machines, and the optimal weight assigned to each feature in a distance metric is computed by feeding different variations of semantically similar finger gestures into a support vector machine learning mechanism. The presented methodology offers two advantages over previous solutions. First, it automates the computation of the optimal weight assigned to each motion feature in the metric. Second, it increases the rate of correctly estimated finger gestures by about 17% compared to a previous method.


Keywords: Finger motion · Motion estimation · Character animation · Motion features · Feature pre-processing · Metric learning
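The weight-learning step described in the abstract can be sketched as relative-comparison metric learning in the style of Schultz and Joachims: given triplets stating that gesture a is semantically closer to b than to c, per-feature weights of a squared distance are fit so the closer pair scores a smaller distance by a margin. The sketch below is a minimal illustration under stated assumptions, not the paper’s implementation: the function name, the plain hinge-loss subgradient solver, and the non-negativity projection are all hypothetical choices.

```python
import numpy as np

def learn_metric_weights(triplets, dim, lr=0.01, epochs=200, reg=0.01):
    """Learn per-feature weights w for a weighted squared distance
    d_w(x, y) = sum_k w_k * (x_k - y_k)**2
    from relative comparisons (a, b, c) meaning "a is closer to b than to c".
    Hypothetical helper illustrating the idea, not the paper's exact method."""
    w = np.ones(dim)
    for _ in range(epochs):
        for a, b, c in triplets:
            # Margin constraint: d_w(a, c) - d_w(a, b) >= 1.
            # This difference is linear in w with coefficient vector x:
            x = (a - c) ** 2 - (a - b) ** 2
            if w @ x < 1.0:
                # Hinge violation: step along the constraint's subgradient.
                w += lr * (x - reg * w)
            else:
                # Constraint satisfied: apply only the regularizer.
                w -= lr * reg * w
            # Distance weights must stay non-negative.
            w = np.maximum(w, 0.0)
    return w
```

For example, if the triplets only ever distinguish gestures along the first feature dimension, the learned weight for that dimension ends up larger than the others, which is the behavior the metric-learning step relies on.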

Supplementary material

Supplementary material 1 (mp4 18439 KB)



Copyright information

© 3D Research Center, Kwangwoon University and Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  • Christos Mousas
    1. Graphics and Entertainment Technology Lab, Department of Computer Science, Southern Illinois University, Carbondale, USA
  • Christos-Nikolaos Anagnostopoulos
    2. Department of Cultural Technology and Communication, University of the Aegean, Mytilene, Greece
