Abstract
This paper describes an expression-space generation technique that enables animators to control the facial expressions of three-dimensional avatars in real time by selecting a sequence of expressions from a facial expression space. The space is built from approximately 2,400 captured facial expression frames. The state of each expression is represented by a distance matrix that records the distances between facial feature points, and the set of these distance matrices is defined as the facial expression space. This space is not one in which an expression can be moved to another along a straight line; instead, the path from one expression to another is approximated from the captured expression data. First, two expressions are considered adjacent when the distance between the distance matrices representing their states falls below a threshold. When two arbitrary expression states can be connected by a sequence of adjacent expressions, a path is assumed to exist between them, and the shortest such path is taken as the transition path from one expression to the other. Dynamic programming is used to compute the shortest paths between expressions. Because the expression space, as a set of distance matrices, is high-dimensional, multidimensional scaling is used to project it into two dimensions for visualization, and animators control the avatar's facial expressions in real time by navigating this projected space. The paper evaluates the results of a user experiment with this system.
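To make the pipeline concrete, the following is a minimal sketch of the steps the abstract describes: per-frame distance matrices as expression states, an adjacency graph thresholded at a distance value, Floyd-Warshall (a standard dynamic-programming shortest-path algorithm) for geodesic distances, and classical (Torgerson) multidimensional scaling for the 2D layout. The Frobenius-norm dissimilarity, the threshold choice, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def frame_distance_matrix(points):
    """Pairwise Euclidean distances between the facial feature points of one
    frame: points is (K, 3); the (K, K) result is the frame's expression state."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def expression_dissimilarity(d_i, d_j):
    """Dissimilarity between two expression states (assumed here to be the
    Frobenius norm of the difference of their distance matrices)."""
    return np.linalg.norm(d_i - d_j)

def geodesic_distances(states, eps):
    """Connect expressions whose dissimilarity is below eps, then run
    Floyd-Warshall to obtain shortest-path distances between all pairs."""
    n = len(states)
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for i in range(n):
        for j in range(i + 1, n):
            w = expression_dissimilarity(states[i], states[j])
            if w < eps:                          # adjacent expressions
                d[i, j] = d[j, i] = w
    for k in range(n):                           # dynamic-programming relaxation
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def classical_mds(d, dims=2):
    """Classical (Torgerson) MDS: embed the geodesic distance matrix in 2D so
    an animator can navigate the expression space on screen."""
    n = d.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    gram = -0.5 * centering @ (d ** 2) @ centering
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy usage: a smooth random walk of feature points stands in for captured frames.
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 3))                  # 30 facial feature points
frames = np.cumsum(rng.normal(scale=0.02, size=(200, 30, 3)), axis=0) + base
states = [frame_distance_matrix(f) for f in frames]

steps = [expression_dissimilarity(states[i], states[i + 1]) for i in range(len(states) - 1)]
eps = 2.0 * np.median(steps)                     # data-driven threshold (assumption)
geo = geodesic_distances(states, eps)
layout = classical_mds(geo)                      # (200, 2) coordinates for the 2D map
print(layout.shape)
```

In this sketch the 2D coordinates returned by `classical_mds` play the role of the navigable expression map, and a selected sequence of points in that map would be expanded back into frames along the corresponding shortest paths.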