
Word, Syllable and Phoneme Based Metrics Do Not Correlate with Human Performance in ASR-Mediated Tasks

  • Conference paper
Advances in Natural Language Processing (NLP 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8686)


Abstract

Automatic evaluation metrics should correlate with human judgement. We collected sixteen ASR-mediated dialogues using a map task scenario. The material was assessed extrinsically (i.e. in context) through measures such as time to task completion, and intrinsically (i.e. out of context) using word error rate and several variants thereof based on smaller units. Extrinsic and intrinsic results did not correlate, neither for word error rate nor for metrics based on characters, syllables or phonemes.
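The intrinsic metrics compared in the paper share one recipe: an edit distance between the reference and the recognised transcript, normalised by reference length, computed over different units (words for WER, characters for CER, and analogously syllables or phonemes). The following is a minimal sketch of that recipe, not the authors' implementation; the function names and the tokenisers passed as `unit` are illustrative:

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over unit sequences:
    # minimum number of insertions, deletions and substitutions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]

def error_rate(reference, hypothesis, unit=str.split):
    # `unit` maps a string to a sequence of units; swapping it in a
    # syllabifier or grapheme-to-phoneme converter yields the
    # syllable- and phoneme-based variants.
    ref, hyp = unit(reference), unit(hypothesis)
    return edit_distance(ref, hyp) / len(ref)

# WER: units are words; CER: units are characters.
wer = error_rate("the cat sat", "the cat sat down")            # 1 insertion / 3 words
cer = error_rate("the cat sat", "the cat sat down", unit=list)  # 5 insertions / 11 chars
```

Because each variant only changes the tokeniser, the metrics differ solely in the granularity at which recognition errors are counted.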





Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Schneider, A.H., Hellrich, J., Luz, S. (2014). Word, Syllable and Phoneme Based Metrics Do Not Correlate with Human Performance in ASR-Mediated Tasks. In: Przepiórkowski, A., Ogrodniczuk, M. (eds) Advances in Natural Language Processing. NLP 2014. Lecture Notes in Computer Science (LNAI), vol 8686. Springer, Cham. https://doi.org/10.1007/978-3-319-10888-9_39


  • DOI: https://doi.org/10.1007/978-3-319-10888-9_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-10887-2

  • Online ISBN: 978-3-319-10888-9

  • eBook Packages: Computer Science (R0)
