Abstract
The study of sign languages seeks a coherent model that describes, within a linguistic framework, the expressive nature of signs conveyed as gestures. 3D gesture modelling offers precise annotation and representation of linguistic constructs and can become an entry mechanism for sign languages. This paper presents the requirements for building an input method editor for sign languages and initial experiments with signing-avatar input interfaces. The system currently saves and annotates 3D gestures on humanoid models with linguistic labels. Results show that the annotation prototype can in turn be used to ease and guide the task of 3D gesture modelling.
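To make the abstract's core idea concrete, the following is a minimal sketch of what "saving and annotating a 3D gesture on a humanoid model with a linguistic label" could look like as a data structure. This is a hypothetical illustration, not the paper's actual system: all names (`Keyframe`, `AnnotatedGesture`, `gloss`, the joint names) are invented here for exposition.

```python
# Hypothetical sketch of pairing a recorded 3D gesture with a linguistic
# annotation, in the spirit of the input method editor described above.
from dataclasses import dataclass, field


@dataclass
class Keyframe:
    time: float  # seconds from gesture start
    # joint name -> Euler rotation (x, y, z) in degrees on the humanoid model
    joint_rotations: dict[str, tuple[float, float, float]]


@dataclass
class AnnotatedGesture:
    gloss: str  # linguistic label, e.g. a sign gloss
    phonological_tags: list[str] = field(default_factory=list)
    keyframes: list[Keyframe] = field(default_factory=list)

    def duration(self) -> float:
        """Length of the gesture, taken from the last keyframe."""
        return self.keyframes[-1].time if self.keyframes else 0.0


# Example: a two-keyframe right-arm movement labelled with a gloss
g = AnnotatedGesture(gloss="HELLO", phonological_tags=["one-handed"])
g.keyframes.append(Keyframe(0.0, {"r_shoulder": (0.0, 0.0, 0.0)}))
g.keyframes.append(Keyframe(0.5, {"r_shoulder": (0.0, 45.0, 10.0)}))
print(g.gloss, g.duration())  # HELLO 0.5
```

A structure of this shape is what lets the annotation guide modelling in the other direction: linguistic labels attached to keyframe tracks can be queried to retrieve or constrain the 3D motion they describe.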
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Olivrin, G.J.L. (2009). Gesture Modelling for Linguistic Purposes. In: Sales Dias, M., Gibet, S., Wanderley, M.M., Bastos, R. (eds) Gesture-Based Human-Computer Interaction and Simulation. GW 2007. Lecture Notes in Computer Science(), vol 5085. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92865-2_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-92864-5
Online ISBN: 978-3-540-92865-2
eBook Packages: Computer Science (R0)