
Semantic Refinement GRU-Based Neural Language Generation for Spoken Dialogue Systems

  • Conference paper
  • In: Computational Linguistics (PACLING 2017)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 781))

Abstract

Natural language generation (NLG) plays a critical role in spoken dialogue systems. This paper presents a new approach to NLG using recurrent neural networks (RNNs), in which a gating mechanism is applied to the input before the RNN computation; this helps the model generate sentences appropriate to the given dialogue act. The RNN-based generator can be trained from unaligned data by jointly learning sentence planning and surface realization to produce natural language responses. The model was evaluated extensively on four different NLG domains, and the proposed generator outperformed previous generators on all of them.
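The abstract describes a gate applied to the input before the RNN computation, conditioned on the dialogue act. The paper's exact formulation is not reproduced on this page; the following NumPy sketch only illustrates the general idea under stated assumptions: a sigmoid gate computed from a one-hot dialogue-act vector is multiplied element-wise with each word embedding before a standard GRU step. All parameter names, sizes, and the gate's form are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, HID, DA = 8, 16, 5  # embedding, hidden, and dialogue-act vector sizes (assumed)

# Hypothetical parameters; names are illustrative, not from the paper.
W_gate = rng.normal(0, 0.1, (DA, EMB))        # maps DA vector to a gate over the embedding
W_z = rng.normal(0, 0.1, (EMB + HID, HID))    # GRU update-gate weights
W_r = rng.normal(0, 0.1, (EMB + HID, HID))    # GRU reset-gate weights
W_h = rng.normal(0, 0.1, (EMB + HID, HID))    # GRU candidate weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def refine(x, da):
    """Gate the word embedding with a DA-conditioned gate BEFORE the GRU step."""
    g = sigmoid(da @ W_gate)   # gate values in (0, 1), one per embedding dimension
    return g * x               # element-wise refinement of the input

def gru_step(x, h):
    """One standard GRU step on the (already refined) input."""
    xh = np.concatenate([x, h])
    z = sigmoid(xh @ W_z)                                  # update gate
    r = sigmoid(xh @ W_r)                                  # reset gate
    h_tilde = np.tanh(np.concatenate([x, r * h]) @ W_h)    # candidate state
    return (1 - z) * h + z * h_tilde

da = np.zeros(DA)
da[0] = 1.0                     # one-hot dialogue-act vector (assumed encoding)
h = np.zeros(HID)
for _ in range(3):              # three stand-in time steps
    x = rng.normal(size=EMB)    # stand-in word embedding
    h = gru_step(refine(x, da), h)

print(h.shape)  # (16,)
```

The design point this sketch makes is only that the semantic conditioning happens on the input side, before the recurrent update, rather than inside the recurrent cell.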


Notes

  1. A dialogue act: a combination of an action type and a list of slot-value pairs, e.g. inform(name = ‘Frances’; area = ‘City Center’).

  2. Input texts are delexicalized: slot values are replaced by their corresponding slot tokens.

  3. Relexicalization: the process in which each slot token is replaced by its value.

  4. https://github.com/shawnwun/RNNLG.
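The delexicalization and relexicalization steps in the notes above can be sketched as simple string substitutions. The slot dictionary and placeholder format (`SLOT_NAME`, `SLOT_AREA`) are illustrative assumptions; the paper's actual preprocessing tokens may differ.

```python
# Hypothetical slot values, taken from the dialogue act example in Note 1.
slots = {"name": "Frances", "area": "City Center"}

def delexicalize(text, slots):
    """Replace each slot value with its slot token (Note 2)."""
    for slot, value in slots.items():
        text = text.replace(value, f"SLOT_{slot.upper()}")
    return text

def relexicalize(text, slots):
    """Replace each slot token with its value again (Note 3)."""
    for slot, value in slots.items():
        text = text.replace(f"SLOT_{slot.upper()}", value)
    return text

original = "Frances is a nice place in City Center."
delex = delexicalize(original, slots)
print(delex)  # SLOT_NAME is a nice place in SLOT_AREA.
assert relexicalize(delex, slots) == original
```

Training the generator on delexicalized text lets one surface pattern cover every value a slot can take; relexicalization restores the concrete values at generation time.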


Acknowledgment

This work was supported by the JSPS KAKENHI Grant number JP15K16048.

Author information

Correspondence to Van-Khanh Tran.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tran, V.K., Nguyen, L.M. (2018). Semantic Refinement GRU-Based Neural Language Generation for Spoken Dialogue Systems. In: Hasida, K., Pa, W. (eds.) Computational Linguistics. PACLING 2017. Communications in Computer and Information Science, vol. 781. Springer, Singapore. https://doi.org/10.1007/978-981-10-8438-6_6


  • DOI: https://doi.org/10.1007/978-981-10-8438-6_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-8437-9

  • Online ISBN: 978-981-10-8438-6

  • eBook Packages: Computer Science; Computer Science (R0)
