
How Do Users Manipulate Graphical Icons? An Empirical Study

  • S. Robbe
  • N. Carbonell
  • P. Dauchy
Conference paper

Abstract

According to [1], pointing one’s finger at a graphical object and then at an empty location on the screen while saying “Put this here” is a semiotic gesture, since it contributes to the meaning of the concomitant utterance. On the other hand, dragging one’s fingertip across the surface of the screen may be termed an ‘ergotic’ gesture, inasmuch as it accomplishes an action, namely the drawing of a 2D graphic or the moving of an icon, depending on the current context.
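
To make this taxonomy concrete, here is a minimal sketch (ours, not the authors’; the labels “point” and “drag” and the helper classify are hypothetical) of how Cadoz’s distinction could be encoded for observed 2D gestures:

    from dataclasses import dataclass
    from enum import Enum, auto

    class GestureRole(Enum):
        SEMIOTIC = auto()   # contributes to the meaning of a concomitant utterance
        ERGOTIC = auto()    # accomplishes an action on the 2D representation

    @dataclass
    class ObservedGesture:
        kind: str           # e.g. "point" or "drag" (hypothetical labels)
        with_speech: bool   # was a concomitant utterance produced?

    def classify(g: ObservedGesture) -> GestureRole:
        # Pointing while saying "Put this here" is semiotic;
        # dragging a fingertip to move an icon or draw is ergotic.
        if g.kind == "point" and g.with_speech:
            return GestureRole.SEMIOTIC
        return GestureRole.ERGOTIC

    # The "Put this here" scenario from the abstract:
    print(classify(ObservedGesture("point", True)))   # GestureRole.SEMIOTIC
    print(classify(ObservedGesture("drag", False)))   # GestureRole.ERGOTIC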

We have conducted a Wizard of Oz experiment on the spontaneous use of speech and 2D gestures for interacting with standard graphical software. Overall results [9, 6, 2] indicate that, in such contexts, hand gestures are used either for pointing at objects and locations on the screen or for acting on a 2D representation of the application.

Having completed our study of the subjects’ multimodal expression, we have now focused our analysis on their use of gestures. Our aim is to define useful criteria for the design of gestural human-computer interaction. In this paper, we present user profiles derived from a thorough analysis of the subjects’ gestures.


References

  [1] C. Cadoz. Le geste, canal de communication homme/machine. Technique et Science Informatiques, 13(1):31–61, 1994.
  [2] N. Carbonell and C. Mignot. Natural multimodal HCI: Experimental results on the use of spontaneous speech and hand gestures. In Multimodal Human-Computer Interaction, ERCIM Workshop Report, pages 97–112. Rocquencourt (F): INRIA, 1994.
  [3] N. Carbonell, C. Valot, C. Mignot, and P. Dauchy. Étude empirique : usage du geste et de la parole en situation de communication homme-machine. In ErgoIA’94, Ergonomie et Informatique Avancée, Biarritz, 1994. Bayonne (F): IDLS.
  [4] J. Cosnier. Communications et langages gestuels. In J. Cosnier, J. Coulon, A. Berrendonner, and C. Orecchioni, editors, Les voies du langage : communications verbales, gestuelles et animales, chapter 4, pages 255–304. Paris: Dunod, 1982.
  [5] J. Coutaz and J. Caelen. A taxonomy for multimedia and multimodal user interfaces. In Proceedings of the ERCIM Workshop, pages 143–148, Lisbon, Portugal, November 1991.
  [6] P. Dauchy, C. Mignot, and C. Valot. Joint speech and gesture analysis: some experimental results on multimodal interface. In EUROSPEECH’93, pages 1315–1318, Berlin, September 1993.
  [7] P. Ekman and W. V. Friesen. The repertoire of nonverbal behavior: categories, origins, usage, and coding. Semiotica, 1(1):49–98, 1969.
  [8] C. Mignot. Usage de la parole et du geste dans les interfaces multimodales : étude expérimentale et modélisation. Doctorat de l’Université Henri Poincaré, Nancy, 1995.
  [9] C. Mignot, C. Valot, and N. Carbonell. An experimental study of future ‘natural’ multimodal human-computer interaction. In INTERCHI’93, pages 67–69, Amsterdam, April 1993. New York: ACM Press, Addison-Wesley.
  [10] B. Rimé and L. Schiaratura. Gesture and speech. In R. S. Feldman and B. Rimé, editors, Fundamentals of nonverbal behavior, chapter 7, pages 229–238. Cambridge University Press, 1991.
  [11] B. Shneiderman. The future of interactive systems and the emergence of direct manipulation. Behaviour and Information Technology, 1(3):237–256, 1982.

Copyright information

© Springer-Verlag London 1997

Authors and Affiliations

  • S. Robbe (1)
  • N. Carbonell (1)
  • P. Dauchy (2)

  1. CRIN-CNRS & INRIA-Lorraine, Vandoeuvre-lès-Nancy Cedex, France
  2. Dépt. Sciences Cognitives et Ergonomie, IMASSA, Brétigny-sur-Orge Cedex, France
