
Hierarchical Hybrid Code Networks for Task-Oriented Dialogue

  • Weiri Liang
  • Meng Yang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10955)

Abstract

Task-oriented dialog systems are a research hotspot in the field of natural language processing. In recent years, the application of neural networks (NNs) has greatly improved the performance of dialog agents. However, there is still a large performance gap between human beings and dialog agents, partly because domain knowledge and semantic analysis are not well exploited. In this paper we propose Hierarchical Hybrid Code Networks (HHCNs), which integrate a word-character RNN for semantic representation with an NN-based selection mechanism for domain knowledge. The proposed HHCNs can thus conduct effective semantic analysis (e.g., identifying proper nouns and misspelled words) and select meaningful responses for the dialog. Experimental results on the dataset of the second Dialog State Tracking Challenge (DSTC2) show the superior performance of HHCNs.
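The abstract describes two components: a word-character RNN that builds each word's representation from both a word embedding and a character-level encoding (so proper nouns and misspelled words still receive informative vectors), and an NN-based response selector. The sketch below illustrates only the word-character encoding idea in PyTorch; it is not the authors' implementation, and all module names, layer sizes, and the choice of GRU cells are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a word-character utterance encoder:
# a character-level GRU produces a spelling-based vector per word, which is
# concatenated with the word embedding and fed to an utterance-level GRU.
import torch
import torch.nn as nn

class WordCharEncoder(nn.Module):
    def __init__(self, char_vocab, word_vocab, char_dim=32, word_dim=128, hid=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        # character-level GRU: builds one vector per word from its spelling,
        # so out-of-vocabulary or misspelled words still get a representation
        self.char_rnn = nn.GRU(char_dim, word_dim, batch_first=True)
        # utterance-level GRU consumes [word embedding; char-based vector]
        self.utt_rnn = nn.GRU(2 * word_dim, hid, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, n_words); char_ids: (batch, n_words, n_chars)
        b, w, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * w, c))   # (b*w, n_chars, char_dim)
        _, h = self.char_rnn(chars)                      # h: (1, b*w, word_dim)
        char_vecs = h.squeeze(0).view(b, w, -1)          # (b, n_words, word_dim)
        words = torch.cat([self.word_emb(word_ids), char_vecs], dim=-1)
        out, _ = self.utt_rnn(words)                     # (b, n_words, hid)
        return out[:, -1]                                # final state as utterance vector
```

A response selector in the spirit of Hybrid Code Networks would then score a fixed set of system action templates from this utterance vector, for example with a linear layer followed by a softmax over the candidate actions.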

Keywords

Task-oriented dialogue · Hybrid Code Network · Dialog systems

Notes

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (Grant no. 61772568), Guangzhou Science and Technology Program (Grant no. 201804010288), and Shenzhen Scientific Research and Development Funding Program (Grant no. JCYJ20170302153827712).


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
