
Vision-Based Motion Capture of Interacting Multiple People

  • Hiroaki Egashira
  • Atsushi Shimada
  • Daisaku Arita
  • Rin-ichiro Taniguchi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5716)

Abstract

Vision-based motion capture is becoming popular for acquiring human motion information in various interactive applications. To enlarge its applicability, we have been developing a vision-based motion capture system which can estimate the postures of multiple people simultaneously by multiview image analysis. Our approach is divided into two phases: first, extraction, or segmentation, of each person in the input multiview images; then, single-person posture analysis applied to the segmented region of each person. The segmentation is performed in a voxel space reconstructed by visual cone intersection of the multiview silhouettes, where a graph cut algorithm is employed to achieve optimal segmentation. Posture analysis follows a model-based approach in which a skeleton model of the human figure is matched with the multiview silhouettes using a particle filter and physical constraints on human body movement. Several experimental studies show that the proposed method acquires the postures of multiple people correctly and efficiently even when they touch each other.
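
As a concrete illustration of the reconstruction step, the sketch below implements visual cone intersection (shape-from-silhouette) in NumPy: a voxel is kept only if it projects inside the foreground silhouette in every calibrated view. This is a minimal sketch under stated assumptions, not the authors' implementation; the function name, the 3x4 projection matrices, and the grid parameters are illustrative.

    import numpy as np

    def carve_voxels(silhouettes, projections, grid_min, grid_max, resolution):
        """Visual cone intersection over a regular voxel grid (illustrative sketch).

        silhouettes: list of HxW boolean arrays (True = foreground), one per view.
        projections: list of 3x4 camera projection matrices (world -> pixel).
        grid_min, grid_max: 3-element bounds of the capture volume.
        resolution: number of voxels along each axis.
        """
        axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        # Homogeneous voxel centres, shape (4, N).
        pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)])

        occupied = np.ones(X.size, dtype=bool)
        for sil, P in zip(silhouettes, projections):
            uvw = P @ pts                                   # project every voxel into this view
            u = np.round(uvw[0] / uvw[2]).astype(int)       # pixel column
            v = np.round(uvw[1] / uvw[2]).astype(int)       # pixel row
            h, w = sil.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(X.size, dtype=bool)
            hit[inside] = sil[v[inside], u[inside]]
            occupied &= hit                                 # keep only voxels inside every visual cone
        return occupied.reshape(X.shape)

In the pipeline described above, the resulting occupancy grid would then be partitioned into per-person regions (the paper employs a graph cut for this) before single-person model fitting with the particle filter.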

Keywords

Segmentation Result · Motion Capture · Motion Capture System · Human Posture · Human Region

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Hiroaki Egashira (1)
  • Atsushi Shimada (1)
  • Daisaku Arita (1, 2)
  • Rin-ichiro Taniguchi (1)

  1. Department of Intelligent Systems, Kyushu University, Fukuoka, Japan
  2. Institute of Systems, Information Technologies and Nanotechnologies, Fukuoka, Japan