
Issues in Predicting User Satisfaction Transitions in Dialogues: Individual Differences, Evaluation Criteria, and Prediction Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6392)

Abstract

This paper addresses three important issues in the automatic prediction of user satisfaction transitions in dialogues. The first issue concerns individual differences in user satisfaction ratings and how they affect the feasibility of creating a user-independent prediction model. The second issue concerns how to determine appropriate evaluation criteria for predicting user satisfaction transitions. The third issue concerns how to train suitable prediction models. We present our findings on these issues on the basis of experimental results using dialogue data from two domains.
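The abstract does not spell out the authors' prediction models; prior work on this task commonly frames per-exchange satisfaction prediction as sequence labeling with models such as HMMs or CRFs. As a purely illustrative, hypothetical sketch (not the authors' method), the snippet below shows how a first-order HMM with Viterbi decoding could map a sequence of observed dialogue-act symbols to a sequence of discrete satisfaction ratings; the state set, observation alphabet, and all probabilities are toy assumptions.

```python
import numpy as np

# Hypothetical illustration only: a first-order HMM whose hidden states are
# discrete per-exchange satisfaction ratings and whose observations are
# coarse dialogue-act symbols. All numbers are toy values, not learned ones.

STATES = [1, 2, 3]                      # assumed satisfaction ratings (low/mid/high)
OBS = ["greet", "inform", "reject"]     # assumed observation alphabet

start_p = np.array([0.2, 0.5, 0.3])                  # P(first rating)
trans_p = np.array([[0.6, 0.3, 0.1],                 # P(rating_t | rating_{t-1})
                    [0.2, 0.6, 0.2],
                    [0.1, 0.3, 0.6]])
emit_p = np.array([[0.2, 0.3, 0.5],                  # P(observation | rating)
                   [0.3, 0.5, 0.2],
                   [0.5, 0.4, 0.1]])

def viterbi(observations):
    """Return the most likely rating sequence for a list of observation symbols."""
    obs_idx = [OBS.index(o) for o in observations]
    T, N = len(obs_idx), len(STATES)
    delta = np.zeros((T, N))            # best log-prob of a path ending in state j at time t
    back = np.zeros((T, N), dtype=int)  # backpointers for path recovery

    delta[0] = np.log(start_p) + np.log(emit_p[:, obs_idx[0]])
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + np.log(trans_p[:, j])
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs_idx[t]])

    # Trace back the highest-scoring path of ratings.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

if __name__ == "__main__":
    # Toy dialogue: predicted per-exchange satisfaction transition.
    print(viterbi(["greet", "inform", "reject", "inform"]))
```

In practice such a model would be trained on rated dialogue data rather than hand-set probabilities; the sketch only illustrates the sequence-labeling view of satisfaction transitions.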





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Higashinaka, R., Minami, Y., Dohsaka, K., Meguro, T. (2010). Issues in Predicting User Satisfaction Transitions in Dialogues: Individual Differences, Evaluation Criteria, and Prediction Models. In: Lee, G.G., Mariani, J., Minker, W., Nakamura, S. (eds) Spoken Dialogue Systems for Ambient Environments. IWSDS 2010. Lecture Notes in Computer Science, vol 6392. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16202-2_5


  • DOI: https://doi.org/10.1007/978-3-642-16202-2_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16201-5

  • Online ISBN: 978-3-642-16202-2

  • eBook Packages: Computer Science (R0)
