FishEyeRecNet: A Multi-context Collaborative Deep Network for Fisheye Image Rectification

  • Xiaoqing Yin
  • Xinchao Wang
  • Jun Yu
  • Maojun Zhang
  • Pascal Fua
  • Dacheng Tao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11214)

Abstract

Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state-of-the-art methods. Our code and synthesized dataset will be made publicly available.
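
To make the geometry concrete, below is a minimal sketch of a radial fisheye model of the kind that underlies both synthesized-dataset construction and the final rectification step. It assumes a polynomial model r_d = f(θ + k1·θ³ + k2·θ⁵), where θ is the incidence angle of a viewing ray, together with OpenCV remapping; the paper's exact parameterization, number of coefficients, and synthesis pipeline may differ, and all function names and constants here are illustrative only.

    import numpy as np
    import cv2

    def rectify_fisheye(fisheye, f, k1, k2):
        """Resample a fisheye image onto an ideal pinhole grid of the same size."""
        h, w = fisheye.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        dx, dy = xs - cx, ys - cy
        r_u = np.sqrt(dx**2 + dy**2)          # pixel radius in the rectified image
        theta = np.arctan(r_u / f)            # incidence angle of the viewing ray
        r_d = f * (theta + k1 * theta**3 + k2 * theta**5)  # radius in the fisheye image
        scale = r_d / np.maximum(r_u, 1e-6)
        map_x = (cx + dx * scale).astype(np.float32)
        map_y = (cy + dy * scale).astype(np.float32)
        return cv2.remap(fisheye, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    def synthesize_fisheye(rectilinear, f, k1, k2):
        """Inverse warp: distort a rectilinear image to create a training pair."""
        h, w = rectilinear.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        # r_d(theta) is monotone increasing, so invert it with a dense lookup table.
        thetas = np.linspace(0.0, 1.4, 2048)  # rays up to ~80 degrees off-axis
        rd_tab = f * (thetas + k1 * thetas**3 + k2 * thetas**5)
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        dx, dy = xs - cx, ys - cy
        r_d = np.sqrt(dx**2 + dy**2)          # pixel radius in the fisheye output
        theta = np.interp(r_d, rd_tab, thetas)
        r_u = f * np.tan(theta)               # radius in the source rectilinear image
        scale = r_u / np.maximum(r_d, 1e-6)
        map_x = (cx + dx * scale).astype(np.float32)
        map_y = (cy + dy * scale).astype(np.float32)
        return cv2.remap(rectilinear, map_x, map_y, interpolation=cv2.INTER_LINEAR)

With ground-truth coefficients sampled at random, synthesize_fisheye yields labeled training pairs (distorted image, distortion parameters); at test time, parameters regressed by a network such as the one proposed here can be passed to rectify_fisheye to produce the undistorted result.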

Keywords

Fisheye image rectification · Distortion parameter estimation · Collaborative deep network

Acknowledgment

This work is partially supported by Australian Research Council Projects (FL-170100117, DP-180103424, and LP-150100671), the National Natural Science Foundation of China (Grant No. 61405252), and the State Scholarship Fund of China (Grant No. 201503170310).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Xiaoqing Yin (1, 2)
  • Xinchao Wang (3)
  • Jun Yu (4)
  • Maojun Zhang (2)
  • Pascal Fua (5)
  • Dacheng Tao (1)

  1. UBTECH Sydney AI Center, SIT, FEIT, University of Sydney, Sydney, Australia
  2. National University of Defense Technology, Changsha, China
  3. Stevens Institute of Technology, Hoboken, USA
  4. Hangzhou Dianzi University, Hangzhou, China
  5. École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
