Abstract
Current techniques for “RealVR” experiences are usually limited to a small area around the capture setup. Simple linear blending between several viewpoints disrupts the virtual reality (VR) experience and causes a loss of immersion for the user. Smoother transitions can be obtained with optical-flow-based warping between viewpoints. To this end, the panorama images of these viewpoints must not only be upright-adjusted; their viewing directions must also be aligned first. Since VR panoramas are usually captured at high resolution to deliver high quality in every direction, optical flow at a correspondingly high resolution is indispensable as well.
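As a generic illustration of the flow-based warping the abstract refers to (not the authors' implementation), the sketch below backward-warps one image a fraction of the way toward another, given a precomputed dense flow field from any optical flow method. The function name `backward_warp` and the bilinear sampling with border clamping are illustrative assumptions.

```python
import numpy as np

def backward_warp(image, flow, alpha=1.0):
    """Backward-warp `image` by a fraction `alpha` of a dense flow field.

    image: (H, W) or (H, W, C) array; flow: (H, W, 2) array of per-pixel
    (dx, dy) displacements, e.g. from any dense optical flow estimator.
    Sampling uses bilinear interpolation, clamped at the image border.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Each output pixel samples the source at its flow-displaced position.
    sx = np.clip(xs + alpha * flow[..., 0], 0, w - 1)
    sy = np.clip(ys + alpha * flow[..., 1], 0, h - 1)
    x0 = np.floor(sx).astype(int); x1 = np.clip(x0 + 1, 0, w - 1)
    y0 = np.floor(sy).astype(int); y1 = np.clip(y0 + 1, 0, h - 1)
    wx, wy = sx - x0, sy - y0
    if image.ndim == 3:  # broadcast weights over color channels
        wx, wy = wx[..., None], wy[..., None]
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Intermediate warps with `0 < alpha < 1` (applied symmetrically from both viewpoints) are what replace the naive linear blend during a transition.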
This chapter gives an overview of how to align several viewpoints to a common viewing direction and how to obtain high-resolution optical flow between them.
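For upright-adjusted equirectangular panoramas, the yaw component of aligning viewing directions reduces to a circular horizontal pixel shift: a rotation about the vertical axis by θ degrees corresponds to shifting the image by θ/360 of its width. A minimal sketch of this step (the helper name `rotate_yaw` and the sign convention are assumptions, not taken from the chapter):

```python
import numpy as np

def rotate_yaw(pano, yaw_degrees):
    """Rotate an upright equirectangular panorama about the vertical axis.

    Because longitude maps linearly to the x-axis, a yaw rotation is a
    circular horizontal shift of yaw/360 of the image width. The sign
    convention (positive yaw shifts content leftwards) is one choice.
    """
    h, w = pano.shape[:2]
    shift = int(round(yaw_degrees / 360.0 * w)) % w
    return np.roll(pano, -shift, axis=1)
```

Pitch and roll corrections (the upright adjustment itself) require a full spherical rotation and resampling rather than a simple shift; this sketch covers only the yaw alignment between already-upright panoramas.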
Acknowledgements
The authors gratefully acknowledge funding by the German Science Foundation (DFG MA2555/15-1 “Immersive Digital Reality”).
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Mühlhausen, M., Magnor, M. (2020). Multiview Panorama Alignment and Optical Flow Refinement. In: Magnor, M., Sorkine-Hornung, A. (eds) Real VR – Immersive Digital Reality. Lecture Notes in Computer Science(), vol 11900. Springer, Cham. https://doi.org/10.1007/978-3-030-41816-8_4
Print ISBN: 978-3-030-41815-1
Online ISBN: 978-3-030-41816-8
eBook Packages: Computer Science, Computer Science (R0)