Estimating coloured 3D face models from single images: An example based approach
In this paper we present a method to derive the 3D shape and surface texture of a human face from a single image. The method draws on a general flexible 3D face model which is “learned” from examples of individual 3D face data (Cyberware scans). In an analysis-by-synthesis loop, the flexible model is matched to the novel face image.
From the coloured 3D model obtained by this procedure, we can generate new images of the face across changes in viewpoint and illumination. Moreover, non-rigid transformations represented within the flexible model, such as changes in facial expression, can be applied.
The key problem in generating a flexible face model is the computation of dense correspondence between all given 3D example faces. A new correspondence algorithm is described which generalizes common algorithms for optical flow computation to 3D face data.
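The flexible model described above represents a face as a linear combination of example shape and texture vectors. The following is a minimal illustration of that idea; the function name, data layout, and toy data are our assumptions, not the paper's code:

```python
import numpy as np

def flexible_face(shapes, textures, coeffs):
    """Linear combination of example faces (a sketch of the flexible
    model idea; names and layout are illustrative, not from the paper).

    shapes:   (m, 3n) array, one flattened (x, y, z) vertex list per example
    textures: (m, 3n) array, one flattened (r, g, b) list per example
    coeffs:   (m,) weights; convex weights keep the result face-like
    """
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs.sum()           # normalise weights to sum to 1
    shape = coeffs @ np.asarray(shapes)      # weighted sum of shape vectors
    texture = coeffs @ np.asarray(textures)  # weighted sum of texture vectors
    return shape, texture

# Morph halfway between two (toy) example faces of two vertices each
s = np.array([[0., 0., 0., 1., 1., 1.],
              [2., 2., 2., 3., 3., 3.]])
t = np.array([[0.1] * 6,
              [0.9] * 6])
shape, texture = flexible_face(s, t, [0.5, 0.5])
```

Because the examples are in dense correspondence, such convex combinations interpolate between plausible faces; varying the weights also supports non-rigid edits such as expression changes, when the examples span them.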
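The generalization of optical flow to 3D face data can be pictured as a gradient-based flow estimate in which every channel of a scan (e.g. radius plus R, G, B in a cylindrical parameterization) constrains the local displacement. Below is a simplified, single-resolution, Lucas-Kanade-style sketch of that idea, not the paper's algorithm:

```python
import numpy as np

def multichannel_flow(I1, I2, win=2):
    """Dense least-squares flow between two multi-channel images.

    Sketch of generalising optic flow to 3D scans: each pixel carries
    several channels (e.g. radius and colour of a cylindrical head
    scan), and all channels feed the local least-squares estimate.
    I1, I2: (h, w, c) arrays. Returns an (h, w, 2) flow field (dy, dx).
    """
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Iy, Ix = np.gradient(I1, axis=(0, 1))  # spatial gradients, all channels
    It = I2 - I1                           # temporal (inter-scan) difference
    h, w, _ = I1.shape
    flow = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            # Gather gradient constraints over a local window, all channels
            y0, y1 = max(0, y - win), min(h, y + win + 1)
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            A = np.stack([Iy[y0:y1, x0:x1].ravel(),
                          Ix[y0:y1, x0:x1].ravel()], axis=1)
            b = -It[y0:y1, x0:x1].ravel()
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:  # skip ill-conditioned pixels
                flow[y, x] = np.linalg.solve(ATA, A.T @ b)
    return flow
```

A full system would run such an estimator coarse-to-fine and regularize the field; this sketch only shows how multiple channels enter one joint displacement estimate per surface point.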
Keywords: Optical Flow, Face Image, Flexible Model, Face Model, Shape Vector