ARGo: An Architecture for Sign Language Recognition and Interpretation

  • A. Braffort
Conference paper


This paper presents a recognition and interpretation architecture dedicated to Sign Language.

A sign is composed of several co-occurring parameters, so that several heterogeneous pieces of information can be conveyed simultaneously, depending on how each parameter varies. Sign languages vary from one country to another, and each has a specific vocabulary. These signs, called conventional signs, can be listed in dictionaries. A second kind of sign is extremely frequent in sign language communication: non-conventional signs. They are created during discourse, depending on need and context, and cannot be listed in dictionaries. Moreover, some conventional signs may have one or more variable parameters, depending on context. These signs are called variable signs.
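
The distinction above between fully specified conventional signs and signs with context-dependent parameters can be sketched as a small data structure. This is only an illustration: the parameter names (handshape, location, movement, orientation) are the standard manual parameters of sign phonology, but the specific field values and the `Sign` class itself are hypothetical, not taken from the paper.

```python
# Illustrative sketch: a sign as a bundle of co-occurring manual
# parameters, some of which may be left variable (context-dependent).
from dataclasses import dataclass
from typing import Optional

VARIABLE = None  # marks a parameter left unspecified until context fixes it


@dataclass
class Sign:
    gloss: str
    handshape: Optional[str]
    location: Optional[str]
    movement: Optional[str]
    orientation: Optional[str]

    @property
    def is_variable(self) -> bool:
        # A variable sign has at least one parameter whose value
        # can only be resolved from the discourse context.
        return any(
            p is VARIABLE
            for p in (self.handshape, self.location, self.movement, self.orientation)
        )


# A conventional sign: every parameter is fixed, so it can be listed
# in a dictionary (gloss and values are illustrative).
house = Sign("HOUSE", "flat", "neutral-space", "contact", "palms-in")

# A variable sign: its location depends on where the referent was
# placed in the signing space during the discourse.
give = Sign("GIVE", "flat-O", VARIABLE, "arc", "palm-up")
```

A dictionary of conventional signs can then hold fully specified entries, while variable and non-conventional signs require the missing parameters to be filled in at interpretation time.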

Sign language functioning is based on the simultaneity of information and on spatial rules governing the relationships between signs, and the vocabulary is not completely known a priori. For these reasons, classical sequential processing is not sufficient for sign recognition. Our architecture is designed to take this functioning into account. It is composed of a recognition module and an interpretation module. The recognition module is based on Hidden Markov Models and classifies conventional, non-conventional and variable signs. In the interpretation module, a virtual scene stores the context and completes the meaning of variable and non-conventional signs.
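
To make the HMM-based classification concrete, the following is a minimal sketch of the generic approach: one HMM per sign, an observation sequence scored against each model with the forward algorithm, and the best-scoring sign selected. The toy two-state models, the quantized observation symbols, and the sign names are all invented for illustration; the paper's actual models and feature extraction are not described here.

```python
# Illustrative sketch: score an observation sequence against per-sign
# HMMs with the forward algorithm and pick the best-scoring sign.
# All parameters below are toy values, not the paper's models.
import math


def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of obs under a discrete HMM.

    pi: initial state probabilities, A: state transition matrix,
    B: per-state emission probabilities over observation symbols.
    """
    n = len(pi)
    # Initialization: probability of starting in each state and
    # emitting the first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Induction: sum over predecessor states at each time step.
    for t in range(1, len(obs)):
        alpha = [
            sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][obs[t]]
            for s in range(n)
        ]
    return math.log(sum(alpha))


# Two toy 2-state HMMs standing in for two hypothetical signs.
models = {
    "SIGN_A": ([0.9, 0.1],
               [[0.8, 0.2], [0.3, 0.7]],
               [[0.9, 0.1], [0.2, 0.8]]),
    "SIGN_B": ([0.1, 0.9],
               [[0.7, 0.3], [0.4, 0.6]],
               [[0.1, 0.9], [0.8, 0.2]]),
}

obs = [0, 0, 1, 1]  # a quantized feature sequence (illustrative)
best = max(models, key=lambda name: forward_log_prob(obs, *models[name]))
```

In a full system of this kind, the observation symbols would come from quantized hand-posture and movement features, and the classifier's output would be passed on to the interpretation stage, where context (here, the virtual scene) resolves the parameters of variable and non-conventional signs.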

Because of its ability to deal with non-conventional signs, this system can be used both for sign language and for co-verbal gestures.





Copyright information

© Springer-Verlag London 1997

Authors and Affiliations

  • A. Braffort
  1. LIMSI - CNRS, Orsay Cedex, France
