Deep Normal Estimation for Automatic Shading of Hand-Drawn Characters

  • Matis Hudon
  • Mairéad Grogan
  • Rafael Pagés
  • Aljoša Smolić
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11131)


We present a new, fully automatic pipeline for generating shading effects on hand-drawn characters. Our method takes as input a single digitized sketch of any resolution and outputs a dense normal map suitable for rendering, without requiring any human input. At the heart of our method lies a deep residual encoder-decoder convolutional network. The input sketch is first sampled with several equally sized 3-channel windows, each capturing a local area of interest at 3 different scales. Each window is then passed through the trained network for normal estimation, and the network outputs are finally stitched together to form a full-size normal map of the input sketch. We also present an efficient and effective way to generate a rich set of training data. The resulting renders are of high quality and require no effort from the 2D artist. We show both quantitative and qualitative results demonstrating the effectiveness and quality of our network and method.


Cartoons · Non-photorealistic rendering · Normal estimation · Deep learning
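Once a dense normal map is estimated, producing a shaded render is straightforward. As a minimal illustration (not the paper's renderer; the function name and parameters are hypothetical), a single directional light with Lambertian reflectance can be evaluated per pixel:

```python
import numpy as np

def lambert_shade(normals, light_dir, albedo=1.0, ambient=0.1):
    """Shade a per-pixel normal map with one directional light.

    normals:   (H, W, 3) array of unit surface normals
    light_dir: 3-vector pointing toward the light source
    Returns an (H, W) grayscale shading image in [0, 1]."""
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)                 # normalize light direction
    ndotl = np.clip(normals @ l, 0.0, None)   # per-pixel n.l, clamped at 0
    return np.clip(ambient + albedo * ndotl, 0.0, 1.0)
```

Stylized (e.g. toon) shading would quantize or remap `ndotl` instead of using it directly.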



The authors would like to thank David Revoy and Ester Huete, for sharing their original creations. This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under the Grant Number 15/RP/2776. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. V-SENSE, Trinity College Dublin, Dublin, Ireland
