Semantic Representations of Situations that Fit their Description in a Sign Language

  • F. Forest
Conference paper


My work addresses the problem of knowledge representation and its links to language. It is based on two hypotheses:
  1. We learn by experience: while acquiring language, we memorize a great number of situations in which we have been involved.

  2. Experience is memorized through spatio-temporal means that are closer to the linguistic expression conveyed by sign languages (American Sign Language or French Sign Language) than to the linguistic expression conveyed by traditional linear vocal languages (English or French).

In collaboration with A. Braffort, who works on Sign Language recognition, and with a group of colleagues working on pragmatic knowledge representation based on individual experience, I propose a representation of the situations that compose individual experience. In this approach, a situation is a set of segments in a five-dimensional space. Each segment represents an entity involved in the situation. The five dimensions are three for space, one for the salience of each entity in the situation, and one for time. Because sign languages are visual, and “draw” the situation in the space in front of the signer, they offer an easier way of building such a representation.
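As a minimal sketch of the idea above (not the authors' implementation; all class and field names are hypothetical), each entity in a situation can be modeled as a segment between two points in the five-dimensional space of spatial position, salience, and time:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point5D:
    """A point in the five-dimensional situation space."""
    x: float        # spatial position
    y: float
    z: float
    salience: float # prominence of the entity at this moment
    t: float        # time

@dataclass
class Segment:
    """One entity's presence in a situation: a segment between two 5-D points."""
    entity: str
    start: Point5D
    end: Point5D

    def duration(self) -> float:
        return self.end.t - self.start.t

@dataclass
class Situation:
    """A situation is a set of segments, one per entity involved."""
    segments: list[Segment]

    def entities(self) -> set[str]:
        return {s.entity for s in self.segments}

# Hypothetical example: a signer places a moving "cat" and a static
# "table" in the signing space over the same time interval.
cat = Segment("cat",
              Point5D(0.2, 0.0, 0.1, salience=0.9, t=0.0),
              Point5D(0.5, 0.0, 0.1, salience=0.9, t=1.0))
table = Segment("table",
                Point5D(0.5, 0.0, 0.0, salience=0.4, t=0.0),
                Point5D(0.5, 0.0, 0.0, salience=0.4, t=1.0))
scene = Situation([cat, table])
print(sorted(scene.entities()))  # ['cat', 'table']
print(cat.duration())            # 1.0
```

Representing salience as a coordinate alongside space and time makes it possible to compare how prominent different entities are at the same moment of a situation, which is what the paper's five-dimensional space is meant to capture.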

Our final purpose is to be able to “understand” a sentence expressed in French Sign Language, i.e., to build the representation of the situation described by the sentence, using both the information carried by the sentence and all information already memorized in the individual's previous experience.




References

  [1] F. Bordeaux, F. Forest, and B. Grau. MoHA, a hybrid learning model: a model based on the perception of the environment by an individual. In B. Bouchon-Meunier, L. Valverde, and R. R. Yager, editors, IPMU '92 — Advanced Methods in Artificial Intelligence, number 682 in Lecture Notes in Computer Science. Springer-Verlag, 1993.
  [2] A. Braffort. ARGo: An architecture for sign language recognition and interpretation. In this book, 1996.
  [3] G. Calbris and J. Montredon. Des gestes et des mots pour le dire. CLE International, 1986.
  [4] J. Cosnier and Brossard. Communication non-verbale, co-texte et contexte. In Textes de base en psychologie, pages 10–26. Delachaux et Niestlé, Paris, 1984.
  [5] C. Cuxac. Le langage des sourds. Payot, Paris, 1983.
  [6] C. Cuxac. L'icônicité dans la langue des signes. In VENACO, 1993.
  [7] M. Denis. Image et cognition. PUF, Paris, 1989.
  [8] Lambert and Rondal. Le mongolisme. Pierre Mardaga, Brussels, 1979.
  [9] B. Moody. La langue des signes. Ellipses, Paris, 1983.
  [10] J.-A. Rondal. L'interaction adulte-enfant et la construction du langage. Pierre Mardaga, Brussels, 1983.
  [11] R. Thom. Stabilité structurelle et morphogénèse. Interéditions, Paris, 2nd edition, 1977.
  [12] R. Thom. Esquisse d'une sémiophysique. Interéditions, Paris, 1991.
  [13] A. van der Straten. Premiers gestes, premiers mots. Bayard Presse, Paris, 1991.
  [14] L. S. Vygotsky. Thought and Language. MIT Press, Cambridge, 1962.

Copyright information

© Springer-Verlag London 1997

Authors and Affiliations

  • F. Forest
    Language and Cognition Group, LIMSI, Orsay, France
