
Automatic Camera Path Generation from 360° Video

  • Conference paper
  • First Online:
Advances in Visual Computing (ISVC 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11844)

Abstract

Omnidirectional (360°) video is a novel media format that is rapidly being adopted in media production and consumption as part of today’s ongoing virtual reality revolution. The goal of automatic camera path generation is to automatically calculate a visually interesting camera path from a 360° video in order to provide a traditional, TV-like consumption experience. In this work, we describe our algorithm for automatic camera path generation, which is based on extracting information about the scene objects with deep-learning-based methods.
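The paper itself is not reproduced on this page, so the approach is only described at the level of the abstract: extract scene objects with deep-learning-based methods and derive a camera path from them. Purely as an illustrative sketch of that general idea, and not the author’s actual method, the following Python snippet converts per-frame object detections in an equirectangular frame into viewing angles (yaw, pitch) and smooths them into a camera path. The detector interface, the choice of the highest-scoring detection as the object to follow, and the exponential smoothing are all assumptions made for this example.

# Minimal sketch (not the paper's algorithm): derive a virtual camera path from
# per-frame object detections in an equirectangular 360° video.
# Assumed input: for each frame, a list of (x, y, score) detection centres in
# pixels, e.g. from an off-the-shelf object detector.

def pixel_to_angles(x, y, width, height):
    """Map equirectangular pixel coordinates to (yaw, pitch) in degrees."""
    yaw = (x / width) * 360.0 - 180.0      # -180 (left edge) .. +180 (right edge)
    pitch = 90.0 - (y / height) * 180.0    # +90 (zenith) .. -90 (nadir)
    return yaw, pitch

def camera_path(detections_per_frame, width, height, alpha=0.1):
    """Follow the highest-scoring detection in each frame and smooth the
    resulting viewing direction with an exponential moving average."""
    path = []
    yaw_s = pitch_s = None
    for detections in detections_per_frame:
        if detections:
            x, y, _ = max(detections, key=lambda d: d[2])   # most salient object
            yaw, pitch = pixel_to_angles(x, y, width, height)
        elif path:
            yaw, pitch = path[-1]                           # no detection: keep last direction
        else:
            yaw, pitch = 0.0, 0.0                           # default: look straight ahead
        if yaw_s is None:
            yaw_s, pitch_s = yaw, pitch
        else:
            # Naive smoothing; ignores the -180/+180 yaw wrap-around on purpose.
            yaw_s = (1.0 - alpha) * yaw_s + alpha * yaw
            pitch_s = (1.0 - alpha) * pitch_s + alpha * pitch
        path.append((yaw_s, pitch_s))
    return path

# Toy usage: three frames of fake detections in a 3840x1920 equirectangular video.
frames = [[(1000, 900, 0.9)], [(1100, 880, 0.8)], []]
print(camera_path(frames, 3840, 1920))

A real system would additionally need to track objects across frames, handle the yaw wrap-around at ±180° and decide when to cut to a different object; this toy example deliberately ignores all of that.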

Acknowledgment

This work has received funding from the European Union’s Horizon 2020 research and innovation programme, grant no. 761934, Hyper360 (“Enriching 360 media with 3D storytelling and personalisation elements”). Thanks to Rundfunk Berlin-Brandenburg (RBB) for providing the 360° video content.

Author information

Corresponding author

Correspondence to Hannes Fassold.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Fassold, H. (2019). Automatic Camera Path Generation from 360° Video. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2019. Lecture Notes in Computer Science, vol 11844. Springer, Cham. https://doi.org/10.1007/978-3-030-33720-9_39

  • DOI: https://doi.org/10.1007/978-3-030-33720-9_39

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33719-3

  • Online ISBN: 978-3-030-33720-9

  • eBook Packages: Computer Science, Computer Science (R0)
