End-to-End Task-Oriented Dialogue System with Distantly Supervised Knowledge Base Retriever

  • Libo Qin
  • Yijia Liu
  • Wanxiang Che
  • Haoyang Wen
  • Ting Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11221)

Abstract

Task-oriented dialogue systems usually face the challenge of querying a knowledge base (KB). However, this querying step usually cannot be modeled explicitly because of the lack of annotation. In this paper, we introduce an explicit KB retrieval component (the KB retriever) into a sequence-to-sequence dialogue system. We first use the KB retriever to select the most relevant KB entry according to the dialogue history and the KB, and then apply a copying mechanism to retrieve entities from the retrieved entry at decoding time. Moreover, the KB retriever is trained with distant supervision, which requires no additional annotation effort. Experiments on the Stanford Multi-turn Task-oriented Dialogue Dataset show that our framework significantly outperforms other sequence-to-sequence baseline models on both automatic and human evaluation.
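
The distant-supervision idea can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical illustration rather than the paper's implementation: it assumes the "relevant" KB row is labeled by counting how many of a row's entity values appear in the dialogue text, so that no manual annotation of the correct row is needed. The helper name label_relevant_row and the toy KB are assumptions for the example only.

```python
# Minimal sketch of distant supervision for a KB retriever (illustrative only).
# Assumption: the row whose entity values overlap most with the dialogue
# history and gold response is treated as the "relevant" entry.
from typing import Dict, List


def label_relevant_row(kb_rows: List[Dict[str, str]], dialogue_text: str) -> int:
    """Return the index of the KB row with the largest entity overlap.

    Each row maps slot names (e.g. "poi", "address") to values. The overlap
    count acts as a distant-supervision label: no annotated row is required.
    """
    text = dialogue_text.lower()

    def overlap(row: Dict[str, str]) -> int:
        return sum(1 for value in row.values() if value.lower() in text)

    return max(range(len(kb_rows)), key=lambda i: overlap(kb_rows[i]))


if __name__ == "__main__":
    kb = [
        {"poi": "Pizza Hut", "address": "704 El Camino Real", "distance": "2 miles"},
        {"poi": "Stanford Express Care", "address": "214 El Camino Real", "distance": "3 miles"},
    ]
    history = ("User: find me the nearest pizza place. "
               "System: Pizza Hut is 2 miles away at 704 El Camino Real.")
    print(label_relevant_row(kb, history))  # -> 0
```

At training time, a label produced this way supervises the retriever, while the decoder's copying mechanism is restricted to entities from the selected row.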

Keywords

Task-oriented dialogue systems · Sequence-to-sequence · Knowledge base

Acknowledgements

We are grateful for helpful comments and suggestions from the anonymous reviewers. This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772153.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Libo Qin (1)
  • Yijia Liu (1)
  • Wanxiang Che (1)
  • Haoyang Wen (1)
  • Ting Liu (1)

  1. Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China