
Generative Estimation of 3D Human Pose Using Shape Contexts Matching

  • Xu Zhao
  • Yuncai Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)

Abstract

We present a method for 3D pose estimation of human motion in a generative framework. To generalize across application scenarios, the observation information we use comes from monocular silhouettes. We distill prior information about human motion by performing conventional PCA on a single motion capture data sequence; in doing so, dimensionality reduction and extraction of motion priors are achieved simultaneously. We adopt the shape contexts descriptor to construct the matching function, which ensures valid and robust matching between image features and synthesized model features. To explore the solution space efficiently, we design an Annealed Genetic Algorithm (AGA) and a Hierarchical Annealed Genetic Algorithm (HAGA), which search for optimal solutions effectively by exploiting the characteristics of the state space. Results of pose estimation on different motion sequences demonstrate that this generative method achieves viewpoint-invariant 3D pose estimation.
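The matching function described above is built on shape context descriptors computed from silhouette contours. The sketch below is a minimal illustration of that ingredient only, not the authors' implementation: it computes log-polar shape context histograms for sampled contour points with NumPy and compares them with a chi-squared cost. The bin counts, radial limits, and normalization are assumptions chosen for illustration.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape context histograms for 2D contour points.

    points : (N, 2) array of silhouette contour samples.
    Returns an (N, n_r * n_theta) array, one histogram per point.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]           # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])     # angles in [-pi, pi)

    # Normalize distances by the mean pairwise distance for scale invariance.
    mean_d = dist[dist > 0].mean()
    r = dist / mean_d

    # Log-spaced radial bin edges and uniform angular bin edges
    # (points beyond the outer radius are simply ignored).
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)

    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i                       # exclude the point itself
        h, _, _ = np.histogram2d(r[i, mask], angle[i, mask],
                                 bins=[r_edges, t_edges])
        hists[i] = h.ravel() / max(mask.sum(), 1)      # normalized counts
    return hists

def chi2_cost(h1, h2, eps=1e-10):
    """Chi-squared distance between two shape-context histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a generative pipeline of this kind, descriptors computed from the observed silhouette contour and from the contour of the synthesized body model would be compared with a cost such as `chi2_cost`, and the aggregate matching cost would drive the search over pose parameters.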

Keywords

Human Motion · Motion Capture · Shape Context · Motion Capture Data · Image Silhouette



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Xu Zhao¹
  • Yuncai Liu¹
  1. Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, 200240, Shanghai, China
