Shape from Texture without Boundaries
We describe a shape-from-texture method that constructs a maximum a posteriori estimate of surface coefficients using only the deformation of individual texture elements. The method requires neither the boundary of the observed surface nor any assumption about the overall distribution of elements; it assumes only that texture elements belong to a limited number of types of fixed shape. We show that, under this assumption and assuming generic view and texture, each texture element determines the surface gradient up to a two-fold ambiguity. Furthermore, texture elements that do not belong to one of the types can be identified and ignored. An EM-like procedure then yields a surface reconstruction from the data. The method is defined for orthographic views; an extension to perspective views appears to be complex, but possible. We provide examples of reconstructions for synthetic images of surfaces, compared with ground truth, as well as reconstructions for images of real scenes. Finally, we show that our method for recovering local texture imaging transformations can be used to retexture objects in images of real scenes.
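To illustrate the geometry behind the abstract's central claim, the sketch below recovers a surface gradient from the affine deformation of a single texture element under orthographic projection: the element is compressed by cos(slant) along the tilt direction, so an SVD of the element-to-image map yields the gradient up to the stated two-fold sign ambiguity. This is a minimal illustration, not the paper's implementation; the function name `gradient_from_affine` and the assumption that the local affine map has already been estimated (with the frontal element known up to rotation and scale) are ours.

```python
import numpy as np

def gradient_from_affine(A):
    """Recover the surface gradient (p, q) of a locally planar patch,
    up to sign, from the 2x2 affine map A that carries the frontal
    texture element into the image (orthographic projection assumed)."""
    U, s, Vt = np.linalg.svd(A)
    # Foreshortening: minor/major axis ratio of the imaged element
    # gives cos(slant); in-plane element rotation is absorbed by V.
    cos_sigma = s[1] / s[0]
    tan_sigma = np.sqrt(max(1.0 / cos_sigma**2 - 1.0, 0.0))
    # Tilt: image direction of maximal compression (minor axis).
    tilt = U[:, 1]
    g = tan_sigma * tilt
    return g, -g  # two-fold ambiguity: (p, q) vs (-p, -q)

# Example: a plane slanted 60 degrees with tilt along x compresses
# the element by cos(60 deg) = 0.5 along x, so |grad| = tan(60 deg).
g1, g2 = gradient_from_affine(np.diag([0.5, 1.0]))
```

An EM-like procedure of the kind the abstract describes would then choose, per element, between the two returned gradients while fitting a smooth surface to all elements jointly.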
Keywords: Shape from texture · Texture · Computer vision · Surface fitting