
International Journal of Computer Vision, Volume 127, Issue 1, pp 22–37

An Approximate Shading Model with Detail Decomposition for Object Relighting

  • Zicheng Liao
  • Kevin Karsch
  • Hongyi Zhang
  • David Forsyth

Abstract

We present an object relighting system that allows an artist to select an object from an image and insert it into a target scene. Through simple interactions, the system adjusts the illumination on the inserted object so that it appears natural in the scene. To support image-based relighting, we build an object model from the image and propose a perceptually inspired approximate shading model for relighting. The model decomposes the shading field into (a) a rough shape term that can be reshaded, (b) a parametric shading detail term that encodes features missing from the first term, and (c) a geometric detail term that captures fine-scale material properties. With this decomposition, the shading model combines 3D rendering with image-based composition and allows more flexible compositing than purely image-based methods. Quantitative evaluation and a set of user studies suggest our method is a promising alternative to existing methods of object insertion.
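As a reading aid, here is a minimal sketch of how such a three-term decomposition might be recombined under a target scene's light. It is not the authors' implementation: the Lambertian base shading, the multiplicative recombination, and every name in it (lambertian_shade, relight, the detail maps) are illustrative assumptions.

```python
import numpy as np

def lambertian_shade(normals, light_dir, ambient=0.2):
    """Diffuse shading of an (H, W, 3) unit normal map under one distant light."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Clamp back-facing responses to zero, then add a flat ambient term.
    return np.clip(normals @ light_dir, 0.0, None) + ambient

def relight(albedo, coarse_normals, shading_detail, geometric_detail, light_dir):
    """Recombine the three decomposition terms under a new light.

    albedo:           (H, W, 3) reflectance map
    coarse_normals:   (H, W, 3) normals of the rough shape -- term (a)
    shading_detail:   (H, W) residual shading layer        -- term (b)
    geometric_detail: (H, W) fine-scale material layer     -- term (c)
    """
    # (a) reshade the rough shape under the target scene's illumination
    base = lambertian_shade(coarse_normals, light_dir)
    # (b), (c) reattach the detail layers extracted from the source image
    shading = base * shading_detail * geometric_detail
    # Modulate reflectance by the recomposed shading field.
    return albedo * shading[..., None]
```

In this sketch only the rough-shape term responds to the new illumination; the two detail layers are transferred unchanged from the source image, mirroring the division of labor the abstract describes.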

Keywords

Image-based modeling · Shading model · Object insertion · Image relighting


Acknowledgements

DAF is supported in part by the NSF Division of Information and Intelligent Systems (IIS 09-16014 and IIS-1421521) and the Office of Naval Research (N00014-10-10934). ZL is supported in part by NSFC Grant No. 61602406, ZJNSF Grant No. Q15F020006, and a special fund from the Alibaba–Zhejiang University Joint Institute of Frontier Technologies.

Supplementary material

Supplementary material 1: 11263_2018_1090_MOESM1_ESM.pdf (PDF, 3.7 MB)


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Zicheng Liao (1, 2), corresponding author
  • Kevin Karsch (2)
  • Hongyi Zhang (1)
  • David Forsyth (2)
  1. College of Computer Science, Zhejiang University, Hangzhou, China
  2. Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, USA
