Empirically Evaluating an Adaptable Spoken Dialogue System

  • Diane J. Litman
  • Shimei Pan
Part of the CISM International Centre for Mechanical Sciences book series (CISM, volume 407)


Recent technological advances have made it possible to build real-time, interactive spoken dialogue systems for a wide variety of applications. However, when users do not respect the limitations of such systems, performance typically degrades. Although users differ with respect to their knowledge of system limitations, and although different dialogue strategies make system limitations more apparent to users, most current systems do not try to improve performance by adapting dialogue behavior to individual users. This paper presents an empirical evaluation of TOOT, an adaptable spoken dialogue system for retrieving train schedules on the web. We conduct an experiment in which 20 users carry out 4 tasks with both adaptable and non-adaptable versions of TOOT, resulting in a corpus of 80 dialogues. The values for a wide range of evaluation measures are then extracted from this corpus. Our results show that adaptable TOOT generally outperforms non-adaptable TOOT, and that the utility of adaptation depends on TOOT’s initial dialogue strategies.
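The design described above, in which each of the 20 users carries out tasks with both the adaptable and non-adaptable versions, is a within-subjects comparison, so paired statistics are the natural way to test whether one version outperforms the other. The sketch below is not the authors' actual analysis; the measure and per-user scores are illustrative, and it uses only the Python standard library to compute a paired t statistic over per-user differences.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(adaptable, baseline):
    """Paired t statistic over per-user score differences (illustrative).

    t = mean(d) / (s_d / sqrt(n)), where d are the per-user differences
    and s_d is their sample standard deviation.
    """
    diffs = [a - b for a, b in zip(adaptable, baseline)]
    n = len(diffs)
    return mean(diffs), mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical per-user scores on one evaluation measure (e.g. task
# success), one score per system version; not data from the paper.
adaptable = [0.90, 0.80, 0.85, 0.95, 0.70]
baseline  = [0.70, 0.75, 0.80, 0.85, 0.65]

mean_diff, t = paired_t(adaptable, baseline)
print(f"mean difference = {mean_diff:.2f}, paired t = {t:.2f}")
```

A large positive t over the 20 users would support the abstract's claim that the adaptable version generally outperforms the non-adaptable one on that measure.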


Keywords: User Satisfaction, Automatic Speech Recognition, System Initiative, Dialogue Strategy, Novice User




Copyright information

© Springer Science+Business Media New York 1999

Authors and Affiliations

  • Diane J. Litman, AT&T Labs - Research, Florham Park, USA
  • Shimei Pan, Computer Science Department, Columbia University, New York, USA
