ASL-Pro: American Sign Language Animation with Prosodic Elements

  • Nicoletta Adamo-Villani
  • Ronnie B. Wilbur
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9176)

Introduction

As a general notion, prosody includes relative prominence, rhythm, and timing of articulations. Prosodic markers indicate which units are grouped together and serve as cues for parsing the signal for comprehension. Speech has a Prosodic Hierarchy [1]: (smallest to largest) Syllable < …

Keywords

American Sign Language (ASL) · ASL animation · ASL prosody · Deaf education · Facial articulation

References

  1. Nespor, M., Vogel, I.: Prosodic Phonology. Foris, Dordrecht (1986)
  2. Pfau, R., Steinbach, M., Woll, B. (eds.): Sign Language: An International Handbook (HSK – Handbooks of Linguistics and Communication Science). Mouton de Gruyter, Berlin (2012)
  3. Sandler, W., Lillo-Martin, D.: Sign Language and Linguistic Universals. Cambridge University Press, Cambridge (2006)
  4. Brentari, D.: A Prosodic Model of Sign Language Phonology. MIT Press, Cambridge (1998)
  5. Wilbur, R.B.: Effects of varying rate of signing on ASL manual signs and nonmanual markers. Lang. Speech 52(2/3), 245–285 (2009)
  6. Wilbur, R.B.: Eyeblinks and ASL phrase structure. Sign Lang. Stud. 84, 221–240 (1994)
  7. Brentari, D., Crossley, L.: Prosody on the hands and face: evidence from American Sign Language. Sign Lang. Linguist. 5(2), 105–130 (2002)
  8. Sandler, W.: Prosody in Israeli Sign Language. Lang. Speech 42(2/3), 127–142 (1999)
  9. Wilbur, R.B.: Sign syllables. In: van Oostendorp, M., Ewen, C.J., Hume, E., Rice, K. (eds.) The Blackwell Companion to Phonology. Blackwell Publishing, Oxford (2011)
  10. Wilbur, R.B., Malaia, E.: A new technique for assessing narrative prosodic effects in sign languages. In: Linguistic Foundations of Narration in Spoken and Sign Languages, Potsdam, Germany (2013)
  11. Watson, K.: Wh-Questions in American Sign Language: Contributions of Non-manual Marking to Structure and Meaning. MA thesis, Purdue University, IN (2010)
  12. Weast, T.: Questions in American Sign Language: A Quantitative Analysis of Raised and Lowered Eyebrows. University of Texas, Arlington (2008)
  13.
  14.
  15. TERC (2006). http://www.terc.edu/
  16. Signing Science (2007). http://signsci.terc.edu/
  17. Adamo-Villani, N., Wilbur, R.: Two novel technologies for accessible math and science education. IEEE Multimed. Spec. Issue Accessibility 15(4), 38–46 (2008)
  18. Adamo-Villani, N., Wright, K.: SMILE: an immersive learning game for deaf and hearing children. In: ACM SIGGRAPH 2007 Educators Program, Article 17 (2006)
  19. Zhao, L., Kipper, K., Schuler, W., Vogler, C., Badler, N.I., Palmer, M.: A machine translation system from English to American Sign Language. In: White, J.S. (ed.) AMTA 2000. LNCS (LNAI), vol. 1934, pp. 54–67. Springer, Heidelberg (2000)
  20. Huenerfauth, M.: A multi-path architecture for machine translation of English text into American Sign Language animation. In: Student Workshop at the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics (HLT-NAACL) (2004)
  21. Grieve-Smith, A.: SignSynth: a sign language synthesis application using Web3D and Perl. In: Wachsmuth, I., Sowa, T. (eds.) Gesture and Sign Language in Human-Computer Interaction. LNCS, vol. 2298, pp. 134–145. Springer, Berlin (2002)
  22. Huenerfauth, M.: Evaluation of a psycholinguistically motivated timing model for animations of American Sign Language. In: 10th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2008), pp. 129–136 (2008)
  23. Huenerfauth, M.: A linguistically motivated model for speed and pausing in animations of American Sign Language. ACM Trans. Access. Comput. 2(2), 9 (2009)
  24. Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: ACM Computer Graphics Annual Conference, New Orleans, LA (2000)
  25. Toro, J., Furst, J., Alkoby, K., Carter, R., Christopher, J., Craft, B., Davidson, M., Hinkle, D., Lancaster, D., Morris, A., McDonald, J., Sedgwick, E., Wolfe, R.: An improved graphical environment for transcription and display of American Sign Language. Information 4(4), 533–539 (2001)
  26. Hayward, K., Adamo-Villani, N., Lestina, J.: A computer animation system for creating deaf-accessible math and science curriculum materials. In: Eurographics 2010 – Education Papers, Norrköping, Sweden (2010)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Purdue University, West Lafayette, USA