Abstract
Annotated corpora have played a critical role in speech and natural-language research, and there is increasing interest in corpus-based research on sign language and gesture as well. We present FORM, a non-semantic, geometrically based annotation scheme that allows an annotator to capture the kinematic information in a gesture directly from video of speakers. In addition, FORM stores this gestural information in Annotation Graph format, allowing easy integration of gesture information with other types of communication information, e.g., discourse structure, parts of speech, and intonation.
Much of this work was done at the University of Pennsylvania and at The RAND Corporation.
This presentation is a modified version of [Martell, 2002].
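The Annotation Graph format mentioned above (Bird and Liberman, 1999) represents annotations as labeled arcs between nodes anchored to points on a shared timeline, which is what lets gesture tiers line up with speech tiers. The following is a minimal sketch of that idea, not FORM's actual schema; the tier names, attribute keys, and timestamps are invented for illustration.

```python
# Minimal sketch of the annotation-graph idea (Bird & Liberman, 1999):
# nodes anchor to a shared timeline, and each annotation is a labeled
# arc between two nodes. Tier names and attributes here are illustrative,
# not FORM's real inventory.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Node:
    id: int
    time: Optional[float] = None  # seconds on the shared timeline

@dataclass
class Arc:
    src: Node
    dst: Node
    tier: str                     # e.g. "gesture" or "word"
    label: dict = field(default_factory=dict)

@dataclass
class AnnotationGraph:
    arcs: list = field(default_factory=list)

    def add(self, arc: Arc) -> None:
        self.arcs.append(arc)

    def overlapping(self, tier_a: str, tier_b: str):
        """Pairs of arcs from two tiers whose time spans overlap."""
        a = [x for x in self.arcs if x.tier == tier_a]
        b = [x for x in self.arcs if x.tier == tier_b]
        return [(x, y) for x in a for y in b
                if x.src.time < y.dst.time and y.src.time < x.dst.time]

# A gesture stroke and a spoken word sharing one timeline:
n0, n1 = Node(0, 1.20), Node(1, 1.85)
n2, n3 = Node(2, 1.40), Node(3, 1.70)
g = AnnotationGraph()
g.add(Arc(n0, n1, "gesture", {"hand": "right", "elbow": "extended"}))
g.add(Arc(n2, n3, "word", {"text": "there"}))
print(len(g.overlapping("gesture", "word")))  # prints 1: the stroke spans the word
```

Because every tier hangs off the same timeline, adding discourse-structure or intonation arcs is just more `Arc` instances on new tiers; no gesture-specific machinery is needed for alignment queries.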
References
Bird, S. and Liberman, M. (1999). A Formal Framework for Linguistic Annotation. Technical Report MS-CIS-99-01, Department of Computer and Information Sciences, University of Pennsylvania, Philadelphia, Pennsylvania. http://citeseer.nj.nec.com/article/bird99formal.html.
Cassell, J., Vilhjálmsson, H. H., and Bickmore, T. (2001). BEAT: The Behavior Expression Animation Toolkit. In Fiume, E., editor, Proceedings of SIGGRAPH, pages 477–486. ACM Press / ACM SIGGRAPH. http://citeseer.ist.psu.edu/cassell01beat.html.
Kendon, A. (1996). An Agenda for Gesture Studies. Semiotic Review of Books, 7(3):8–12.
Kendon, A. (2000). Suggestions for a Descriptive Notation for Manual Gestures. Unpublished.
Kipp, M. (2001). Anvil: A Generic Annotation Tool for Multimodal Dialogue. In Proceedings of Eurospeech 2001, pages 1367–1370, Aalborg, Denmark.
MacWhinney, B. (1996). The CHILDES System. American Journal of Speech-Language Pathology, 5:5–14.
Martell, C. (2002). FORM: An Extensible, Kinematically-based Gesture Annotation Scheme. In Proceedings of International Language Resources and Evaluation Conference (LREC), pages 183–187. European Language Resources Association. http://www.ldc.upenn.edu/Projects/FORM.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago, USA.
Neidle, C., Sclaroff, S., and Athitsos, V. (2001). SignStream: A Tool for Linguistic and Computer Vision Research on Visual-Gestural Language Data. Behavior Research Methods, Instruments, and Computers, 33(3):311–320. Psychonomic Society Publications. http://www.bu.edu/asllrp/SignStream/.
Quek, F., Bryll, R., McNeill, D., and Harper, M. (2001). Gestural Origo and Loci-Transitions in Natural Discourse Segmentation. Technical Report VISLab-01-12, Department of Computer Science and Engineering, Wright State University. http://vislab.cs.vt.edu/Publications/2001/QueBMH01.html.
© 2005 Springer
Cite this chapter
Martell, C.H. (2005). Form. In: van Kuppevelt, J.C.J., Dybkjær, L., Bernsen, N.O. (eds) Advances in Natural Multimodal Dialogue Systems. Text, Speech and Language Technology, vol 30. Springer, Dordrecht. https://doi.org/10.1007/1-4020-3933-6_4
Print ISBN: 978-1-4020-3932-4
Online ISBN: 978-1-4020-3933-1