Retexturing Single Views Using Texture and Shading

  • Ryan White
  • David Forsyth
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3954)


We present a method for retexturing non-rigid objects from a single viewpoint. Without reconstructing 3D geometry, we create realistic video with shape cues at two scales. At a coarse scale, a 2D track of the deforming surface allows us to erase the old texture and overwrite it with a new one. At a fine scale, estimates of the local irradiance provide strong cues to fine-scale structure in the actual lighting environment. Computing irradiance from explicit correspondence is difficult and unreliable, so we limit our reconstructions to screen prints, a common printing technique with a finite number of colors. Our irradiance estimates are computed locally: pixels are classified according to color, then irradiance is computed given that color. We demonstrate results in two situations: on a special shirt designed for easy retexturing, and on natural clothing with screen prints. Given the quality of the results, we believe this technique has wide applications in special effects and advertising.
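The local irradiance estimate described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Lambertian model (observed color = scalar irradiance × ink albedo) and a hypothetical two-color palette; the palette values and function names are invented for the example.

```python
import numpy as np

# Assumed albedos for a two-color screen print (illustrative values only).
PALETTE = np.array([
    [0.9, 0.1, 0.1],   # red ink
    [0.1, 0.1, 0.8],   # blue ink
])

def estimate_irradiance(image):
    """Classify each pixel to its nearest palette color, then recover a
    scalar irradiance as the least-squares scale between the observed
    RGB value and the albedo of that color.

    image: H x W x 3 float array. Returns (labels, irradiance), both H x W.
    """
    pixels = image.reshape(-1, 3)
    # Classification is done on chromaticity (normalized color) so that it
    # is invariant to the unknown irradiance scale at each pixel.
    norm_px = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    norm_pal = PALETTE / np.linalg.norm(PALETTE, axis=1, keepdims=True)
    labels = np.argmax(norm_px @ norm_pal.T, axis=1)
    albedo = PALETTE[labels]
    # Per-pixel least-squares irradiance: s = <pixel, albedo> / <albedo, albedo>.
    irr = np.sum(pixels * albedo, axis=1) / np.sum(albedo * albedo, axis=1)
    return labels.reshape(image.shape[:2]), irr.reshape(image.shape[:2])
```

Because the estimate is per-pixel, it needs no explicit correspondence across frames, which matches the paper's motivation for restricting the method to screen prints.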


Keywords: Screen Print, Texture Synthesis, Single View, Nonrigid Motion, Explicit Correspondence



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ryan White, University of California, Berkeley, USA
  • David Forsyth, University of Illinois, Urbana-Champaign, USA
