Gesture in Human-Machine Communication: capture, analysis-synthesis, recognition, semantics

Conference paper in: Progress in Gestural Interaction

Abstract

This paper presents an overview of the various lines of work on gesture conducted at the LIMSI laboratory. Several kinds of capture devices are available, allowing us to handle both 2D and 3D gestures. Depending on the project, gestures are studied alone (monomodal interaction) or combined with other modalities such as speech. The approaches employed range from analysis and synthesis to recognition and interpretation, together with knowledge representation.



Copyright information

© 1997 Springer-Verlag London

About this paper

Cite this paper

Gibet, S., Braffort, A., Collet, C., Forest, F., Gherbi, R., Lebourque, T. (1997). Gesture in Human-Machine Communication: capture, analysis-synthesis, recognition, semantics. In: Harling, P.A., Edwards, A.D.N. (eds) Progress in Gestural Interaction. Springer, London. https://doi.org/10.1007/978-1-4471-0943-3_8

  • DOI: https://doi.org/10.1007/978-1-4471-0943-3_8

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76094-8

  • Online ISBN: 978-1-4471-0943-3

  • eBook Packages: Springer Book Archive
