Abstract
Natural language generation (NLG) plays a critical role in spoken dialogue systems. This paper presents a new approach to NLG based on recurrent neural networks (RNNs), in which a gating mechanism is applied before the RNN computation, allowing the proposed model to generate appropriate sentences. The RNN-based generator can be trained on unaligned data by jointly learning sentence planning and surface realization to produce natural language responses. The model was extensively evaluated on four different NLG domains, and the results show that the proposed generator outperformed previous generators on all of them.
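The abstract describes a gating mechanism applied to the generator's input before the recurrent computation. The following is only a minimal sketch of how such a mechanism might look: a gate, computed from the current word embedding and a dialogue-act vector, scales the input element-wise before a GRU step. The module name, dimensions, and exact gate formulation are illustrative assumptions, not the authors' implementation.

```python
# Sketch (illustrative, not the paper's code): gate the input embedding with a
# dialogue-act-conditioned sigmoid gate before the GRU computation.
import torch
import torch.nn as nn

class GatedInputGRUCell(nn.Module):
    def __init__(self, embed_dim, da_dim, hidden_dim):
        super().__init__()
        # Gate computed from the word embedding and the dialogue-act (DA) vector.
        self.gate = nn.Linear(embed_dim + da_dim, embed_dim)
        self.gru = nn.GRUCell(embed_dim, hidden_dim)

    def forward(self, word_emb, da_vec, hidden):
        # r_t = sigmoid(W [x_t ; d]): element-wise gate over the input embedding.
        r = torch.sigmoid(self.gate(torch.cat([word_emb, da_vec], dim=-1)))
        # Apply the gate *before* the recurrent computation.
        gated_input = r * word_emb
        return self.gru(gated_input, hidden)

# Toy usage with arbitrary dimensions.
cell = GatedInputGRUCell(embed_dim=50, da_dim=20, hidden_dim=80)
h = torch.zeros(1, 80)
x = torch.randn(1, 50)   # embedding of the current (delexicalized) word
d = torch.randn(1, 20)   # dialogue-act representation
h = cell(x, d, h)
```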
Notes
- 1. A combination of an action type and a list of slot-value pairs, e.g. inform(name = ‘Frances’; area = ‘City Center’).
- 2. Input texts are delexicalized: slot values are replaced by their corresponding slot tokens.
- 3. The process in which a slot token is replaced back by its value; a toy illustration follows these notes.
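The snippet below illustrates the delexicalization and lexicalization steps mentioned in the notes above, using the example dialogue act inform(name = ‘Frances’; area = ‘City Center’). The slot-token names and helper functions are illustrative assumptions, not part of the paper.

```python
# Toy delexicalization / lexicalization (illustrative slot-token names).
slots = {"SLOT_NAME": "Frances", "SLOT_AREA": "City Center"}

def delexicalize(text, slots):
    # Replace each slot value with its corresponding slot token.
    for token, value in slots.items():
        text = text.replace(value, token)
    return text

def lexicalize(text, slots):
    # Replace each slot token back with its value.
    for token, value in slots.items():
        text = text.replace(token, value)
    return text

raw = "Frances is a nice place in City Center."
delex = delexicalize(raw, slots)   # "SLOT_NAME is a nice place in SLOT_AREA."
print(lexicalize(delex, slots))    # recovers the original sentence
```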
Acknowledgment
This work was supported by the JSPS KAKENHI Grant number JP15K16048.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this paper
Tran, V.K., Nguyen, L.M. (2018). Semantic Refinement GRU-Based Neural Language Generation for Spoken Dialogue Systems. In: Hasida, K., Pa, W. (eds.) Computational Linguistics. PACLING 2017. Communications in Computer and Information Science, vol. 781. Springer, Singapore. https://doi.org/10.1007/978-981-10-8438-6_6