
Estimation of Dialogue Moods Using the Utterance Intervals Features

  • Conference paper
Intelligent Interactive Multimedia: Systems and Services

Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 14)

Abstract

Many recent studies have focused on dialogue communication. In this paper, our goal is to build a robot that supports communication between humans. We believe such support requires two key functions: estimating the mood of a dialogue and behaving appropriately for that mood. Here, we propose a dialogue mood estimation model based on utterance intervals. The model is constructed by relating subjective ratings of several adjectives to utterance interval features. Estimation experiments confirmed that the proposed system can estimate dialogue moods with a high degree of accuracy, especially for “excitement,” “seriousness,” and “closeness,” suggesting that utterance interval features have high potential for dialogue mood estimation.
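The abstract does not detail the model itself, but the described approach — relating subjective adjective ratings to utterance interval features — can be sketched as a per-adjective regression. The following is a minimal illustrative sketch, not the authors' implementation: the specific features (mean, standard deviation, minimum, and maximum of inter-utterance gaps), the least-squares fit, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def interval_features(utterance_times):
    """Summarize the gaps between consecutive utterance start times
    (assumed feature set: mean, std, min, max of the gaps)."""
    gaps = np.diff(np.asarray(utterance_times, dtype=float))
    return np.array([gaps.mean(), gaps.std(), gaps.min(), gaps.max()])

def fit_mood_model(feature_matrix, ratings):
    """Least-squares fit of one adjective's subjective ratings:
    ratings ~ feature_matrix @ w + bias (bias folded into w)."""
    X = np.hstack([feature_matrix, np.ones((feature_matrix.shape[0], 1))])
    w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return w

def predict_mood(w, features):
    """Predicted rating for one dialogue's interval features."""
    return np.append(features, 1.0) @ w

# Toy usage: dialogues with constant gap g, rating = 2*g + 1.
gaps = (0.5, 1.0, 1.5, 2.0, 2.5)
X = np.vstack([interval_features([0, g, 2 * g, 3 * g]) for g in gaps])
y = np.array([2 * g + 1 for g in gaps])
w = fit_mood_model(X, y)
pred = predict_mood(w, interval_features([0, 3, 6, 9]))  # close to 7.0
```

In practice one such model would be fitted independently for each rated adjective (e.g. “excitement,” “seriousness,” “closeness”), and a feature-selection step could choose which interval features enter each model.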




Author information

Correspondence to Kaoru Toyoda.


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Toyoda, K., Miyakoshi, Y., Yamanishi, R., Kato, S. (2012). Estimation of Dialogue Moods Using the Utterance Intervals Features. In: Watanabe, T., Watada, J., Takahashi, N., Howlett, R., Jain, L. (eds) Intelligent Interactive Multimedia: Systems and Services. Smart Innovation, Systems and Technologies, vol 14. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29934-6_24


  • DOI: https://doi.org/10.1007/978-3-642-29934-6_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29933-9

  • Online ISBN: 978-3-642-29934-6

  • eBook Packages: Engineering (R0)
