Abstract
Facial expressions and head movements communicate essential information during ASL sentences. We aim to improve the facial expressions in ASL animations and make them more understandable, ultimately leading to better accessibility of online information for deaf people with low English literacy. This paper presents how we engineer stimuli and questions to measure whether the viewer has seen and understood the linguistic facial expressions correctly. In two studies, we investigate how changing several parameters (the variety of facial expressions, the language in which the stimuli were invented, and the degree of involvement of a native ASL signer in the stimuli design) affects the results of a user evaluation study of facial expressions in ASL animation.
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Kacorri, H., Lu, P., Huenerfauth, M. (2013). Evaluating Facial Expressions in American Sign Language Animations for Accessible Online Information. In: Stephanidis, C., Antona, M. (eds) Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. UAHCI 2013. Lecture Notes in Computer Science, vol 8009. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39188-0_55
DOI: https://doi.org/10.1007/978-3-642-39188-0_55
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39187-3
Online ISBN: 978-3-642-39188-0