
Patterns of Synchronization of Non-verbal Cues and Speech in ECAs: Towards a More “Natural” Conversational Agent

  • Nicla Rossini
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6456)

Abstract

This paper presents an analysis of the verbal and non-verbal cues of Conversational Agents, with a special focus on REA and GRETA, in order to support further research aimed at correcting those traits of their performance that end users still perceive as unnatural. Despite the striking performance of new-generation ECAs, some important features still make these conversational agents appear unreliable to users, who usually prefer interacting with a classical computer interface for information retrieval. This preference can be due to several factors, such as the quality of speech synthesis or the inevitable unnaturalness of the graphics animating the avatar. Beyond such unavoidable traits, instances of poor synchronization between verbal and non-verbal behaviour may also contribute to unfavourable results. An instance of synchronization patterns between non-verbal cues and speech is here analysed and re-applied to the basic architecture of an ECA in order to improve the agent's verbal and non-verbal synchronization. A proposal for future inquiry aimed at creating an alternative model for the final MP4 output is also put forward, as a direction for further development in this field.
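
The abstract does not spell out the synchronization rule that is re-applied to the ECA architecture. Purely as an illustration of the kind of alignment at stake, the Python sketch below times a gesture stroke so that it begins slightly before the pitch-accented syllable of the affiliated word, following the general finding in gesture research that strokes precede or coincide with the prosodic peak. The Syllable, GestureSpec and schedule_stroke names, the timing values and the 150 ms anticipation offset are illustrative assumptions, not details taken from the paper.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Syllable:
        text: str
        start: float      # seconds, e.g. from a TTS engine's timing output
        end: float
        stressed: bool    # carries the pitch accent of the co-expressive word

    @dataclass
    class GestureSpec:
        label: str            # hypothetical gesture id, e.g. a deictic point
        stroke_duration: float

    def schedule_stroke(gesture: GestureSpec, syllables: List[Syllable],
                        anticipation: float = 0.15) -> float:
        """Return a stroke onset time (seconds) so that the stroke starts
        slightly before the first pitch-accented syllable of the affiliated
        speech. The 0.15 s anticipation is a placeholder value, not a figure
        taken from the paper."""
        accented = next((s for s in syllables if s.stressed), syllables[0])
        return max(0.0, accented.start - anticipation)

    # Usage with invented timing data for the utterance "the second floor".
    syllables = [
        Syllable("the", 0.00, 0.12, False),
        Syllable("SE", 0.12, 0.30, True),    # accented syllable of "second"
        Syllable("cond", 0.30, 0.45, False),
        Syllable("floor", 0.45, 0.70, False),
    ]
    gesture = GestureSpec(label="deictic_point", stroke_duration=0.25)
    print(f"stroke onset: {schedule_stroke(gesture, syllables):.2f} s")
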

Keywords

Embodied Conversational Agents · Prosody · Non-verbal Communication · Expressions · Gesture-Speech Synchronization



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Nicla Rossini
  1. Dipartimento di Studi Umanistici, Università del Piemonte Orientale, Li.Co.T.T., Palazzo Tartara, Vercelli, Italy
