Computational Visual Media, Volume 3, Issue 1, pp 21–31

Estimating reflectance and shape of objects from a single cartoon-shaded image

Open Access
Research Article

Abstract

Although many photorealistic relighting methods provide a way to change the illumination of objects in a digital photograph, it is currently difficult to relight digital illustrations that have a cartoon shading style. The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by soft color quantization and nonlinear color variations, which cause noticeable reconstruction errors under a physical reflectance assumption such as Lambertian reflection. To handle this non-photorealistic shading property, we focus on shading analysis of the most fundamental cartoon shading technique. Based on the color map shading representation, we propose a simple method that interprets the input shading as that of a smooth shape with a nonlinear reflectance property. We have conducted simple ground-truth evaluations comparing our results with those obtained by other approaches.
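
For readers unfamiliar with the color map shading representation mentioned above, the following sketch illustrates the basic cartoon shading model that the analysis builds on: a 1D color map indexed by the Lambertian term max(n·l, 0), with hard quantization into a few color bands. This is only a minimal illustration under standard toon-shading assumptions; the toon_shade function and the three band colors are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of color-map cartoon (toon) shading: the output color is a
# 1D color map looked up by the Lambertian intensity max(n . l, 0).
# The band colors below are hypothetical illustration values.

def toon_shade(normal, light_dir, color_map):
    """Shade a unit normal with a quantized 1D color map."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    intensity = max(float(np.dot(n, l)), 0.0)            # Lambertian term in [0, 1]
    index = min(int(intensity * len(color_map)), len(color_map) - 1)
    return color_map[index]                               # hard quantization into bands

# Hypothetical 3-band color map: shadow, mid tone, highlight (RGB in [0, 1]).
bands = np.array([[0.25, 0.10, 0.10],
                  [0.60, 0.25, 0.25],
                  [0.95, 0.55, 0.55]])

print(toon_shade(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.3, 1.0]), bands))
```

Replacing the hard lookup with a smoothly varying color map gives the soft quantization and nonlinear color variation described in the abstract, which is why inverting the shading under a plain Lambertian reflectance assumption produces noticeable reconstruction errors.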

Keywords

non-photorealistic rendering; cartoon shading; relighting; quantization

Notes

Acknowledgements

We would like to thank the anonymous reviewers for their constructive comments. We are also grateful to Tatsuya Yatagawa, Hiromu Ozaki, Tomohiro Tachi, and Takashi Kanai for their valuable discussions and suggestions. Additional thanks go to the AIM@SHAPE Shape Repository and Keenan's 3D Model Repository for the 3D models, and to Makoto Nakajima (www.piapro.net) for the 2D illustrations used in this work. This work was supported in part by the Japan Science and Technology Agency CREST project and the Japan Society for the Promotion of Science KAKENHI Grant No. JP15H05924.

Supplementary material

41095_2016_66_MOESM1_ESM.wmv (approximately 672 KB)
41095_2016_66_MOESM2_ESM.wmv (approximately 744 KB)
41095_2016_66_MOESM3_ESM.wmv (approximately 1.02 MB)
41095_2016_66_MOESM4_ESM.wmv (approximately 1.14 MB)
41095_2016_66_MOESM5_ESM.wmv (approximately 1.65 MB)
41095_2016_66_MOESM6_ESM.wmv (approximately 2.00 MB)
41095_2016_66_MOESM7_ESM.wmv (approximately 700 KB)
41095_2016_66_MOESM8_ESM.wmv (approximately 660 KB)
41095_2016_66_MOESM9_ESM.wmv (approximately 812 KB)
41095_2016_66_MOESM10_ESM.wmv (approximately 732 KB)
41095_2016_66_MOESM11_ESM.wmv (approximately 532 KB)
41095_2016_66_MOESM12_ESM.wmv (approximately 916 KB)
41095_2016_66_MOESM13_ESM.wmv (approximately 528 KB)


Copyright information

© The Author(s) 2016

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Authors and Affiliations

  1. Tokyo University of Technology, Tokyo, Japan
  2. The University of Tokyo, Tokyo, Japan
