
Creating Immersive Virtual Reality Scenes Using a Single RGB-D Camera

  • Conference paper
Image Analysis and Recognition (ICIAR 2017)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10317)


Abstract

We examine the problem of creating immersive virtual reality (VR) scenes using a single moving RGB-D camera. Our approach takes as input an RGB-D video containing one or more actors and constructs a complete 3D background within which the human actors are properly embedded. A user can then view the captured video from any viewpoint and interact with the scene. We also provide a manually labeled database of RGB-D video sequences, together with evaluation metrics.
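The preview does not detail the reconstruction pipeline, but a step implied by capturing a scene with a single RGB-D camera is back-projecting each depth frame into a 3D point cloud in camera coordinates using the pinhole intrinsics. A minimal sketch, assuming a standard pinhole model; the function name and intrinsic values below are illustrative, not taken from the paper:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to 3D points in camera coordinates,
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W, 3) array and drop invalid (zero-depth) pixels,
    # which RGB-D sensors commonly report where no depth reading exists.
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a 2x2 depth map with one missing (zero) reading.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (3, 3): three valid pixels, xyz each
```

Registering such per-frame clouds across camera poses (e.g. via a SLAM system such as the RTAB-Map used elsewhere in the RGB-D literature) is what yields a complete static background model.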



Author information

Corresponding author

Correspondence to Po Kong Lai.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Lai, P.K., Laganière, R. (2017). Creating Immersive Virtual Reality Scenes Using a Single RGB-D Camera. In: Karray, F., Campilho, A., Cheriet, F. (eds) Image Analysis and Recognition. ICIAR 2017. Lecture Notes in Computer Science, vol. 10317. Springer, Cham. https://doi.org/10.1007/978-3-319-59876-5_25


  • DOI: https://doi.org/10.1007/978-3-319-59876-5_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59875-8

  • Online ISBN: 978-3-319-59876-5

  • eBook Packages: Computer Science, Computer Science (R0)
