Joint Spoken Language Understanding and Domain Adaptive Language Modeling

  • Conference paper
Intelligence Science and Big Data Engineering (IScIDE 2018)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11266)

Abstract

Spoken Language Understanding (SLU) aims to extract structured information from speech-recognized text, but its performance suffers from inaccurate automatic speech recognition (ASR), especially in a specific dialogue domain. Language modeling (LM) is important for ASR to produce natural sentences. To improve SLU performance in a specific domain, we explore two directions: (1) domain-adaptive language modeling, which biases recognition towards in-domain utterances; and (2) joint modeling of SLU and LM, which distills semantic information to build a semantic-aware LM and also helps SLU through semi-supervised learning (LM training is unsupervised). To unify these two approaches, we propose a multi-task model (MTM) that jointly performs two SLU tasks (slot filling and intent detection), domain-specific LM, and domain-free (general) LM. In the proposed multi-task architecture, a shared-private network automatically learns which parts of the general data can be shared with the specific domain and which cannot. We aim to further improve SLU and ASR performance in a specific domain using a small amount of labeled in-domain data and plenty of unlabeled general-domain data. Experiments show that the proposed MTM obtains a 4.06% absolute WER (word error rate) reduction in a car-navigation domain compared to a general-domain LM. For language understanding, the MTM outperforms the baseline (especially on the slot filling task) on both manual transcripts and ASR 1-best output. By exploiting the domain-adaptive LM to rescore ASR output, our proposed model achieves a further improvement in SLU (a 7.08% absolute F1 increase on the slot filling task).
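To make the shared-private multi-task architecture concrete, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the class name SharedPrivateMTM, the layer sizes, the single-direction LSTM encoders, and the use of the last hidden state for intent classification are illustrative assumptions. It only shows the key idea: a shared encoder consumes both in-domain and general-domain sentences, each domain also has a private encoder, and the task heads (slot filling, intent detection, domain-specific LM, general LM) read the concatenation of shared and private features.

```python
# Hedged sketch (assumed architecture, not the paper's exact model).
import torch
import torch.nn as nn


class SharedPrivateMTM(nn.Module):
    def __init__(self, vocab_size, num_slots, num_intents,
                 emb_dim=100, hid_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder: trained on both in-domain and general-domain data.
        self.shared_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Private encoders: one per domain, parameters are never shared.
        self.domain_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.general_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # SLU heads operate on the in-domain (shared + private) features.
        self.slot_head = nn.Linear(2 * hid_dim, num_slots)
        self.intent_head = nn.Linear(2 * hid_dim, num_intents)
        # LM heads predict the next word for each domain.
        self.domain_lm_head = nn.Linear(2 * hid_dim, vocab_size)
        self.general_lm_head = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, tokens, domain="specific"):
        emb = self.embed(tokens)                       # (B, T, E)
        shared, _ = self.shared_enc(emb)               # (B, T, H)
        if domain == "specific":
            private, _ = self.domain_enc(emb)
        else:
            private, _ = self.general_enc(emb)
        feats = torch.cat([shared, private], dim=-1)   # (B, T, 2H)
        out = {}
        if domain == "specific":
            out["slot_logits"] = self.slot_head(feats)              # per token
            out["intent_logits"] = self.intent_head(feats[:, -1])   # last step
            out["lm_logits"] = self.domain_lm_head(feats)           # next word
        else:
            out["lm_logits"] = self.general_lm_head(feats)
        return out
```

Training would then alternate between labeled in-domain batches (slot, intent, and domain-LM losses) and unlabeled general-domain batches (general-LM loss only); this is how the unsupervised LM objective supplies the semi-supervised signal for SLU described in the abstract.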

This work has been supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, the China NSFC project (No. 61573241) and the JiangSu NSF project (BK20161244). Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University.


Author information

Correspondence to Kai Yu.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, H., Zhu, S., Fan, S., Yu, K. (2018). Joint Spoken Language Understanding and Domain Adaptive Language Modeling. In: Peng, Y., Yu, K., Lu, J., Jiang, X. (eds) Intelligence Science and Big Data Engineering. IScIDE 2018. Lecture Notes in Computer Science(), vol 11266. Springer, Cham. https://doi.org/10.1007/978-3-030-02698-1_27

  • DOI: https://doi.org/10.1007/978-3-030-02698-1_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02697-4

  • Online ISBN: 978-3-030-02698-1

  • eBook Packages: Computer Science, Computer Science (R0)
