Synthesizing Sign Language by Connecting Linguistically Structured Descriptions to a Multi-track Animation System

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10278)

Abstract

Animating sign language requires both a model of the structure of the target language and a computer animation system capable of producing the resulting avatar motion. On the language modelling side, AZee proposes a methodology and formal description mechanism for building grammars of Sign languages. To date it has mostly assumed the existence of an avatar capable of rendering its low-level articulation specifications. On the computer animation side, the Paula animator system offers a multi-track sign language generation platform designed for realistic movement and built from the outset to be driven by linguistic input.

This paper presents a system architecture that draws on the strengths of these two research efforts, both of which have matured in recent years to the point where a connection is now possible. It describes the essence of each system and lays the foundations of a connected one, yielding a full pipeline from abstract linguistic input straight to animated video. The main contribution lies in addressing the trade-off between coarser natural-looking motion segments and the composition of linguistically relevant atoms.
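The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the kind of mapping such an architecture implies: a tree of linguistically motivated timing blocks (loosely in the spirit of an AZee description) flattened onto per-articulator animation tracks of the sort a multi-track animator like Paula consumes. All names here (`Block`, `Keyframe`, `schedule`, the posture labels) are invented for illustration and do not reflect either system's actual API.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Block:
    """A linguistically motivated timing unit (hypothetical stand-in for an AZee score)."""
    name: str
    duration: float                                         # seconds
    poses: Dict[str, str] = field(default_factory=dict)     # articulator -> target posture
    children: List["Block"] = field(default_factory=list)   # sub-blocks, played in sequence


@dataclass
class Keyframe:
    time: float       # seconds from utterance start
    posture: str      # symbolic posture label for one articulator


def schedule(block: Block, start: float, tracks: Dict[str, List[Keyframe]]) -> float:
    """Flatten a block tree onto per-articulator keyframe tracks.

    A coarse block (e.g. a whole natural-looking segment) contributes a single
    keyframe per articulator; finer linguistic composition is expressed by
    nesting smaller blocks, whose keyframes are laid out one after another.
    Returns the end time of the block.
    """
    for articulator, posture in block.poses.items():
        tracks.setdefault(articulator, []).append(Keyframe(start, posture))
    t = start
    for child in block.children:
        t = schedule(child, t, tracks)
    return max(t, start + block.duration)


if __name__ == "__main__":
    # A toy utterance: a parent block carrying a non-manual posture,
    # with two manual signs composed in sequence inside it.
    utterance = Block(
        name="info-about(weather, rain)",
        duration=1.2,
        poses={"gaze": "addressee"},
        children=[
            Block("sign:WEATHER", 0.5, poses={"right_hand": "flat@neutral-space"}),
            Block("sign:RAIN", 0.7, poses={"right_hand": "spread@downward",
                                           "eyebrows": "lowered"}),
        ],
    )
    tracks: Dict[str, List[Keyframe]] = {}
    schedule(utterance, 0.0, tracks)
    for articulator, keys in tracks.items():
        print(articulator, [(round(k.time, 2), k.posture) for k in keys])
```

The sketch only illustrates the trade-off named in the abstract: one coarse block could equally be replaced by a single prerecorded, natural-looking segment, at the cost of losing the fine-grained linguistic composition shown in the children.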

Keywords

Sign language · Computational linguistics · Computer animation · Avatar


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. DePaul University, Chicago, USA
