Abstract
Spoken Language Understanding (SLU) aims to extract structured information from speech-recognized text, and it suffers from inaccurate automatic speech recognition (ASR), especially in a specific dialogue domain. Language modeling (LM) is important for ASR to produce natural sentences. To improve SLU performance in a specific domain, we explore two approaches: (1) domain-adaptive language modeling, which biases recognition towards in-domain utterances; (2) joint modeling of SLU and LM, which distills semantic information to build a semantic-aware LM and also helps SLU through semi-supervised learning (since LM training is unsupervised). To unify these two approaches, we propose a multi-task model (MTM) that jointly performs two SLU tasks (slot filling and intent detection), domain-specific LM, and domain-free (general) LM. The multi-task architecture uses a shared-private network to automatically learn which parts of the general data can be shared with the specific domain. We further improve SLU and ASR performance in a specific domain using a small amount of labeled in-domain data and plenty of unlabeled general-domain data. Experiments show that the proposed MTM obtains a 4.06% absolute reduction in word error rate (WER) in a car-navigation domain, compared to a general-domain LM. For language understanding, the MTM outperforms the baseline (especially on the slot filling task) on both manual transcripts and ASR 1-best output. By exploiting the domain-adaptive LM to rescore ASR output, our model achieves a further SLU improvement (a 7.08% absolute F1 increase on slot filling).
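The rescoring step mentioned above can be sketched as a log-linear interpolation of the ASR score with a domain-adaptive LM score over the n-best list. The snippet below is a minimal illustration, not the paper's implementation: the interpolation weight, the toy unigram "domain LM," and all scores are hypothetical.

```python
def rescore_nbest(hypotheses, lm_logprob, weight=0.5):
    """Rerank ASR n-best hypotheses by interpolating the ASR score
    with a (domain-adaptive) LM score.

    hypotheses: list of (text, asr_logprob) pairs
    lm_logprob: callable mapping text -> log-probability under the LM
    weight:     interpolation weight for the LM score (hypothetical value)
    """
    scored = [
        (text, asr_lp + weight * lm_logprob(text))
        for text, asr_lp in hypotheses
    ]
    # Highest combined score first
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Toy unigram "domain LM" favouring car-navigation vocabulary (hypothetical).
domain_vocab = {"navigate": -1.0, "to": -0.5, "the": -0.5, "airport": -1.2}

def toy_lm_logprob(text):
    # Out-of-domain words get a low log-probability
    return sum(domain_vocab.get(w, -5.0) for w in text.split())

nbest = [
    ("navigate to the air port", -2.0),  # ASR-preferred hypothesis
    ("navigate to the airport", -2.5),
]
best_text, best_score = rescore_nbest(nbest, toy_lm_logprob)[0]
```

Here the domain LM overturns the ASR's 1-best choice: "airport" is in-domain, so the second hypothesis wins after interpolation, which is the mechanism behind the slot-filling gain reported for rescoring.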
This work has been supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, the China NSFC project (No. 61573241) and the JiangSu NSF project (BK20161244). Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University.
© 2018 Springer Nature Switzerland AG
Zhang, H., Zhu, S., Fan, S., Yu, K. (2018). Joint Spoken Language Understanding and Domain Adaptive Language Modeling. In: Peng, Y., Yu, K., Lu, J., Jiang, X. (eds) Intelligence Science and Big Data Engineering. IScIDE 2018. Lecture Notes in Computer Science(), vol 11266. Springer, Cham. https://doi.org/10.1007/978-3-030-02698-1_27
Print ISBN: 978-3-030-02697-4
Online ISBN: 978-3-030-02698-1