Illumination estimation for augmented reality based on a global illumination model

  • Aijia Zhang
  • Yan Zhao
  • Shigang Wang


With the rapid development of 3D technology, illumination consistency plays an important role in the realistic rendering of virtual objects superimposed on a real scene. In this paper we propose a new approach to estimating illumination from images of a scene. The algorithm is derived from principles of optics and is based on a global illumination model. First, we capture images of the scene and reconstruct the scene from them. Then, a hemispherical photon-emission model is built to emit photons into the scene. The photons are traced, and the results are stored in multiple photon maps. Finally, the illumination is estimated from the photon radiance values on the hemispherical emission model. Experiments on our newly collected real-scene and virtual-scene databases demonstrate the accuracy of our method both subjectively and objectively. Comparison with related work shows that applying the global illumination model inversely yields more accurate results.
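The radiance-estimation step above follows the standard photon-mapping idea: once photons have been traced and stored, the radiance near a point is approximated by the density of the k nearest photons. The sketch below illustrates that density estimate only; the names (`estimate_radiance`, the brute-force nearest-neighbor search) are illustrative assumptions, not the paper's implementation, which would also involve the hemispherical emission model and real BRDFs.

```python
import math

# A photon is a (position, power) pair; position is a 3-tuple.
def dist2(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def estimate_radiance(photons, x, k=3):
    """Density estimate at point x: total power of the k nearest
    photons divided by the area of the disc that contains them."""
    nearest = sorted(photons, key=lambda p: dist2(p[0], x))[:k]
    r2 = dist2(nearest[-1][0], x)        # squared radius of bounding disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2)  # flux per unit area
```

A production implementation would store the photons in a kd-tree so the k-nearest query is sublinear rather than a full sort, as is usual in photon mapping.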


Keywords: Illumination estimation · Photon mapping · Global illumination · Relighting



This work was supported by the National Key R&D Plan (2017YFB1002900) and the National Natural Science Foundation of China (Nos. 61771220 and 61631009).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Communication Engineering, Jilin University, Changchun, China
