Abstract
This paper describes a computer-character system designed to support natural interaction between the computer and the user. Using predefined control rules, the system generates movements of the character's head, body, hands, and gaze-lines according to changes in the user's position and gaze-lines. It acquires the user's position, facial region, and gaze-lines through a vision subsystem and an eye-tracker unit. The vision subsystem detects the presence of a person, estimates the person's three-dimensional position from images captured by a stationary camera, and locates the face and hands. The character's reactive motions are generated by a set of predefined if-then rules, and a motion-description file defines both simple and complex gestures.
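As a rough illustration of the rule-based reaction scheme the abstract describes, the if-then mapping from observed user state to character motion might be sketched as follows. All names here (the state fields, the rule conditions, the motion labels) are hypothetical; the paper's actual rule set and motion-description format are not reproduced in this excerpt.

```python
# Minimal sketch of an if-then reactive rule set, assuming a user state
# of distance, gaze, and hand posture. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class UserState:
    distance: float          # estimated distance from the camera, in metres
    gaze_on_character: bool  # whether the user's gaze-line hits the character
    hand_raised: bool        # whether a hand is detected above the face region


# Each rule pairs a condition on the user state with a named motion.
# Rules are tested in order; the first match determines the reaction.
RULES = [
    (lambda u: u.distance < 1.0 and u.gaze_on_character, "lean_forward_and_nod"),
    (lambda u: u.hand_raised, "wave_hand"),
    (lambda u: not u.gaze_on_character, "turn_gaze_toward_user"),
]


def select_motion(user: UserState) -> str:
    """Return the motion name triggered by the first matching rule."""
    for condition, motion in RULES:
        if condition(user):
            return motion
    return "idle"  # default posture when no rule fires
```

In such a scheme, the selected motion name would index into the motion-description file, which expands it into the actual joint trajectories for head, body, and hands.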
© 1999 Springer-Verlag Berlin Heidelberg
Lu, S., Igi, S. (1999). Active Character: Dynamic Reaction to the User. In: Braffort, A., Gherbi, R., Gibet, S., Teil, D., Richardson, J. (eds) Gesture-Based Communication in Human-Computer Interaction. GW 1999. Lecture Notes in Computer Science(), vol 1739. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46616-9_22
Print ISBN: 978-3-540-66935-7
Online ISBN: 978-3-540-46616-1