
Inserting virtual pedestrians into pedestrian groups video with behavior consistency


Abstract

In this paper, we propose a novel approach to integrating virtual pedestrians into video scenes of real pedestrian groups while maintaining behavior consistency, achieved through dynamic path planning for the virtual pedestrians. Rather than accounting only for local collision avoidance, our approach finds an optimized path for each virtual pedestrian based on the current global distribution of the real groups in the scene. The main challenge is that the positions and velocities of the real pedestrians in the video are not available in advance, and that the distribution of the groups may change dynamically. We therefore detect and track the real pedestrians in each frame of the video to acquire their distribution and motion information, and store this information in an efficient data structure called the environment grid. As a virtual pedestrian walks, its agent periodically casts detection rays through the environment cells to sense the real pedestrians ahead of it and adjusts its original path if necessary. Finally, the virtual pedestrians are merged into the video, with occlusions between the virtual characters and the real pedestrians correctly rendered. Experimental results on several scenarios demonstrate the effectiveness of the proposed approach.
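The abstract describes two key mechanisms: an environment grid that caches per-frame position and velocity data of tracked real pedestrians, and detection rays that a virtual agent casts through the grid cells to sense pedestrians ahead. The paper does not give code, so the sketch below is a hypothetical minimal illustration of those two ideas; the class name `EnvironmentGrid`, its methods, and the sampling step are all assumptions, not the authors' implementation.

```python
import math

class EnvironmentGrid:
    """Uniform grid over the ground plane; each cell stores the
    (position, velocity) pairs of real pedestrians tracked in the
    current video frame. Rebuilt every frame from the tracker output."""

    def __init__(self, width, height, cell_size):
        self.cell_size = cell_size
        self.cols = int(math.ceil(width / cell_size))
        self.rows = int(math.ceil(height / cell_size))
        self.cells = {}  # (col, row) -> list of ((x, y), (vx, vy))

    def clear(self):
        """Drop last frame's data before inserting new detections."""
        self.cells.clear()

    def insert(self, pos, vel):
        """Register one tracked real pedestrian in its grid cell."""
        key = (int(pos[0] // self.cell_size), int(pos[1] // self.cell_size))
        self.cells.setdefault(key, []).append((pos, vel))

    def cast_ray(self, origin, direction, max_dist):
        """Step a detection ray through the cells ahead of a virtual
        agent; return the pedestrians in the first occupied cell hit,
        or None if the ray reaches max_dist unobstructed."""
        step = self.cell_size * 0.5  # sample at half-cell resolution
        for i in range(1, int(max_dist / step) + 1):
            x = origin[0] + direction[0] * i * step
            y = origin[1] + direction[1] * i * step
            key = (int(x // self.cell_size), int(y // self.cell_size))
            if self.cells.get(key):
                return self.cells[key]
        return None
```

A virtual agent would call `cast_ray` along several candidate headings each planning step; headings whose rays return occupied cells are penalized, steering the agent's path around the real groups before a local collision ever arises.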


[Figures 1–12 omitted]


Acknowledgements

This paper was supported in part by the National Basic Research Program of China under Grant No. 2009CB320802 and the National Natural Science Foundation of China under Grant No. 61272302.

Author information

Correspondence to Zhiguo Ren.

Electronic Supplementary Material

Supplementary video:

(WMV 4.3 MB)



About this article

Cite this article

Ren, Z., Gai, W., Zhong, F. et al. Inserting virtual pedestrians into pedestrian groups video with behavior consistency. Vis Comput 29, 927–936 (2013). https://doi.org/10.1007/s00371-013-0853-x


Keywords

  • Mixed reality
  • Agent-based simulation
  • Steering methods
  • Path planning