
Multiview Panorama Alignment and Optical Flow Refinement

Chapter in: Real VR – Immersive Digital Reality

Abstract

Current techniques for “RealVR” experiences are usually limited to a small area around the capture setup. Simple linear blending between several viewpoints disrupts the virtual reality (VR) experience and causes a loss of immersion for the user. To obtain smoother transitions, optical-flow-based warping between viewpoints can be used. This requires that the panorama images of these viewpoints are not only upright adjusted but also aligned to a common viewing direction. Since panoramas for VR are typically of high resolution to ensure quality in every viewing direction, computing the optical flow at high resolution is indispensable as well.
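The transition idea above can be sketched in a few lines: given a dense optical flow field between two viewpoints, an intermediate view is obtained by sampling one panorama at positions displaced by a fraction of the flow. This is a hypothetical minimal sketch, assuming a precomputed flow field; it uses nearest-neighbour sampling for clarity, whereas a real pipeline would use bilinear interpolation, bidirectional blending, and wrap-around handling at the panorama seam.

```python
import numpy as np

def warp_with_flow(image, flow, t=0.5):
    """Backward-warp `image` by a fraction t of a dense flow field.

    image: (H, W) or (H, W, C) array.
    flow:  (H, W, 2) array of per-pixel (dx, dy) displacements.
    Sketch only: nearest-neighbour sampling, borders clamped.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample the source image at positions shifted by t * flow.
    src_x = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

Varying `t` from 0 to 1 sweeps smoothly from one viewpoint toward the other, which is what replaces the hard cut of linear blending.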

This chapter gives an overview of how to align several viewpoints to a common viewing direction and how to obtain high-resolution optical flow between them.
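One building block of viewing-direction alignment is easy to illustrate: for an equirectangular panorama, a pure rotation about the vertical axis (yaw) corresponds exactly to a horizontal pixel shift with wrap-around. A minimal sketch, with the sign convention chosen arbitrarily here; pitch and roll corrections (upright adjustment) require a full spherical resampling instead:

```python
import numpy as np

def rotate_yaw(pano, yaw_deg):
    """Rotate an equirectangular panorama about the vertical axis.

    pano: (H, W[, C]) array covering 360 degrees horizontally.
    A yaw rotation maps to a circular horizontal shift of the image.
    """
    w = pano.shape[1]
    shift = int(round(yaw_deg / 360.0 * w))
    return np.roll(pano, -shift, axis=1)
```

Once each panorama has been upright adjusted, aligning viewpoints reduces to finding and applying one such yaw offset per panorama.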



Acknowledgements

The authors gratefully acknowledge funding by the German Science Foundation (DFG MA2555/15-1 “Immersive Digital Reality”).

Author information

Correspondence to Moritz Mühlhausen.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Mühlhausen, M., Magnor, M. (2020). Multiview Panorama Alignment and Optical Flow Refinement. In: Magnor, M., Sorkine-Hornung, A. (eds.) Real VR – Immersive Digital Reality. Lecture Notes in Computer Science, vol. 11900. Springer, Cham. https://doi.org/10.1007/978-3-030-41816-8_4


  • Print ISBN: 978-3-030-41815-1

  • Online ISBN: 978-3-030-41816-8

