
Joint entropy-based motion segmentation for 3D animations


Abstract

With the recent advancement of data acquisition techniques, 3D animation is becoming a new and challenging subject for data processing. In this paper, we present a joint entropy-based key-frame extraction method, from which we further derive a motion segmentation method for 3D animations. We start by applying an existing deformation-based feature descriptor to measure the degree of deformation of each triangle within each frame. From this, we compute the statistical joint probability distribution of the triangles' deformations between two consecutive fixed-length subsequences of frames, and then the joint entropy between the two subsequences. This allows us to extract key-frames by taking the local maxima of the joint entropy curve (or energy curve) of a given 3D animation. Furthermore, we cluster the extracted key-frames by grouping those with similar motions. Finally, we compute a boundary frame between each pair of neighboring key-frames belonging to different motions, which is achieved by minimizing the variance of energy between the two motions. The experimental results show that our method successfully extracts representative key-frames of different motions, and comparisons with existing methods demonstrate the effectiveness and efficiency of our method.
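The pipeline summarized above — estimate a joint probability distribution of per-triangle deformation values over two consecutive fixed-length windows, take its joint entropy as an energy curve, and mark local maxima as key-frames — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the window length, the 2D-histogram binning, and the generic `deform` descriptor array are all assumptions made for the sketch.

```python
import numpy as np

def joint_entropy(win_a, win_b, bins=16):
    """Joint entropy H(A, B) of two equal-size windows of per-triangle
    deformation values, estimated via a 2D histogram (the bin count and
    binning scheme are assumptions; the paper may differ)."""
    hist, _, _ = np.histogram2d(win_a.ravel(), win_b.ravel(), bins=bins)
    p = hist / hist.sum()          # joint probability distribution
    p = p[p > 0]                   # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def key_frames(deform, window=5, bins=16):
    """Slide two consecutive windows over the animation, build the
    joint-entropy (energy) curve, and return its interior local maxima
    as key-frame indices.

    deform: array of shape (num_frames, num_triangles) holding a
    deformation descriptor value per triangle per frame (hypothetical
    stand-in for the paper's deformation-based feature)."""
    n = deform.shape[0]
    energy = np.array([
        joint_entropy(deform[t - window:t], deform[t:t + window], bins)
        for t in range(window, n - window + 1)
    ])
    # strict interior local maxima of the energy curve
    keys = [t for t in range(1, len(energy) - 1)
            if energy[t] > energy[t - 1] and energy[t] > energy[t + 1]]
    # shift back to absolute frame indices
    return [t + window for t in keys], energy
```

A segmentation step would then cluster the returned key-frames by motion similarity and search for a boundary frame between neighboring clusters, e.g. by minimizing the energy variance on each side; that step is omitted here.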




Acknowledgments

This work has been jointly supported by the National Natural Science Foundation of China (Nos. 61602222, 61562044), the Science and Technology Research Program funded by the Education Department of Jiangxi Province (No. GJJ150359), the Doctoral Research Project of JXNU (6754), the Science and Technology Research Project of the Jiangxi Provincial Department of Education (No. GJJ14246), and the French national project SHARED (Shape Analysis and Registration of People Using Dynamic Data, No. 10-CHEX-014-01). We are also thankful for the assistance of our colleagues Lei Haopeng, Yi Yugen and Li Zhangyu.

Author information

Correspondence to Guoliang Luo.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mov 23331 KB)



About this article


Cite this article

Luo, G., Lei, G., Cao, Y. et al. Joint entropy-based motion segmentation for 3D animations. Vis Comput 33, 1279–1289 (2017). https://doi.org/10.1007/s00371-016-1313-1


Keywords

  • Joint entropy
  • Key-frame extraction
  • Motion segmentation
  • 3D animation