Data Handling

Chapter in Multimodal Usability

Part of the book series: Human-Computer Interaction Series (HCIS)

Abstract

In this chapter we come, finally, to the third CoMeDa cycle phase (Section 1.3.1), in which we process the data collected with any of the 24 usability methods described in Chapters 8 through 12 and backed up by Chapter 13 on working with users in the lab. In multimodal conceptual diagram terms, so to speak (Section 4.2.4), we now ask what happens to the data you stuffed into your box for analysis in Fig. 1.1. What happens is data handling: the processing of raw usability data through to presentation of the final results of its analysis. Results may serve many different purposes, including improvement of the system model and, in particular, its AMITUDE part (Box 1.1), component training, theory development, and the decision whether to launch a project.
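To make the phase concrete, here is a minimal, purely illustrative Python sketch of such a data-handling flow, from raw usability data through analysis to presented results. It is a hedged assumption rather than the chapter's own method: the CSV log format, the field names (task, outcome, seconds) and the file name are all hypothetical.

# Illustrative sketch only: a toy data-handling pipeline from raw
# usability data through analysis to presentation of results.
# The log format, field names and file path are hypothetical.
import csv
from collections import defaultdict
from statistics import mean

def load_raw_data(path):
    # One row per task attempt, e.g. task,outcome,seconds
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def analyse(rows):
    # Aggregate per-task success rates and completion times.
    per_task = defaultdict(list)
    for row in rows:
        per_task[row["task"]].append(
            (row["outcome"] == "success", float(row["seconds"]))
        )
    return {
        task: {
            "attempts": len(results),
            "success_rate": mean(ok for ok, _ in results),
            "mean_seconds": mean(t for _, t in results),
        }
        for task, results in per_task.items()
    }

def present(results):
    # Final step: present the analysis results as a plain table.
    print(f"{'task':<12}{'n':>4}{'success':>10}{'mean s':>10}")
    for task, r in sorted(results.items()):
        print(f"{task:<12}{r['attempts']:>4}"
              f"{r['success_rate']:>10.0%}{r['mean_seconds']:>10.1f}")

if __name__ == "__main__":
    present(analyse(load_raw_data("usability_log.csv")))  # hypothetical file

In practice, raw multimodal data (audio, video, interaction logs) typically needs transcription, annotation and coding before summary figures like these can be computed at all; the numbers are only the last step of data handling.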




Copyright information

© 2010 Springer-Verlag London Limited

Cite this chapter

Bernsen, N.O., Dybkjær, L. (2010). Data Handling. In: Multimodal Usability. Human-Computer Interaction Series. Springer, London. https://doi.org/10.1007/978-1-84882-553-6_15

  • DOI: https://doi.org/10.1007/978-1-84882-553-6_15

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84882-552-9

  • Online ISBN: 978-1-84882-553-6
