A Semi-automated Evaluation Metric for Dialogue Model Coherence

Part of the book series: Signals and Communication Technology (SCT)

Abstract

We propose a new metric, Voted Appropriateness, which can be used to automatically evaluate dialogue policy decisions once some wizard data has been collected. We show that this metric outperforms a previously proposed metric, Weak Agreement. We also present a taxonomy of dialogue model evaluation schemas and situate our new metric within this taxonomy.
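The abstract does not define the metric formally (the full definition is in the chapter body). As a rough, hypothetical illustration of the idea only: assuming Voted Appropriateness scores each dialogue context by the fraction of wizard votes that the system's chosen utterance received, and averages over contexts, a minimal sketch might look like this. The function name, data layout, and scoring rule are all assumptions, not the authors' actual formulation.

```python
from collections import Counter

def voted_appropriateness(system_choices, wizard_votes):
    """Hypothetical sketch: average, over contexts, of the fraction of
    wizard votes received by the system's chosen utterance.

    system_choices: list of utterance ids, one per dialogue context
    wizard_votes:   list of lists; wizard-selected utterance ids per context
    """
    scores = []
    for chosen, votes in zip(system_choices, wizard_votes):
        counts = Counter(votes)            # tally wizard votes per utterance
        scores.append(counts[chosen] / len(votes))
    return sum(scores) / len(scores)

# Toy example: 3 contexts, 4 wizard votes each
choices = ["u1", "u2", "u3"]
votes = [["u1", "u1", "u2", "u1"],   # chosen utterance got 3/4 votes
         ["u2", "u2", "u3", "u2"],   # 3/4
         ["u1", "u3", "u2", "u1"]]   # 1/4
print(voted_appropriateness(choices, votes))  # averages to about 0.583
```

A system whose choices always match the majority wizard vote would score close to 1.0 under this sketch; how the chapter actually weights votes or ties may differ.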


Notes

  1.

    Two of the judges also performed the role of the wizards, but the wizard data collection and the evaluation tasks were separated by a period of over 3 months.


Acknowledgments

The effort described here has been sponsored by the U.S. Army. Any opinions, content, or information presented do not necessarily reflect the position or policy of the United States Government, and no official endorsement should be inferred.

Author information

Correspondence to Sudeep Gandhe.

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Gandhe, S., Traum, D. (2016). A Semi-automated Evaluation Metric for Dialogue Model Coherence. In: Rudnicky, A., Raux, A., Lane, I., Misu, T. (eds) Situated Dialog in Speech-Based Human-Computer Interaction. Signals and Communication Technology. Springer, Cham. https://doi.org/10.1007/978-3-319-21834-2_19

  • Print ISBN: 978-3-319-21833-5

  • Online ISBN: 978-3-319-21834-2
