An Auto-Encoder for Learning Conversation Representation Using LSTM

  • Xiaoqiang Zhou
  • Baotian Hu
  • Qingcai Chen
  • Xiaolong Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9489)

Abstract

In this paper, an auto-encoder is proposed for learning conversation representations. First, a long short-term memory (LSTM) neural network encodes the sequence of sentences in a conversation, compressing the interactive context into a fixed-length vector. Then an LSTM decoder uses the learned representation to reconstruct the sentence vectors of the conversation. To train the model, we construct a corpus of 32,881 conversations from an online shopping platform. Finally, experiments on a topic recognition task demonstrate the effectiveness of the proposed auto-encoder for learning conversation representations, especially when the labeled training data for topic recognition is relatively small.
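The abstract describes an encoder-decoder architecture over pre-computed sentence vectors: an LSTM encoder folds the sentence sequence into one fixed-length conversation vector, and an LSTM decoder reconstructs the sentence vectors from it. The sketch below illustrates this idea in PyTorch; it is not the authors' implementation, and all names and dimensions (ConvAutoEncoder, sent_dim=100, conv_dim=128, teacher forcing in the decoder) are illustrative assumptions.

```python
# Minimal sketch of an LSTM auto-encoder over sentence vectors.
# Assumed, not from the paper: class/variable names, vector sizes,
# and the teacher-forced decoding scheme.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, sent_dim=100, conv_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(sent_dim, conv_dim, batch_first=True)
        self.decoder = nn.LSTM(sent_dim, conv_dim, batch_first=True)
        self.out = nn.Linear(conv_dim, sent_dim)

    def forward(self, sents):
        # sents: (batch, n_sents, sent_dim) pre-computed sentence vectors
        _, (h, c) = self.encoder(sents)            # h: fixed-length conversation vector
        # Decode with teacher forcing: shift targets right, start from zeros
        start = torch.zeros_like(sents[:, :1, :])
        dec_in = torch.cat([start, sents[:, :-1, :]], dim=1)
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.out(dec_out), h.squeeze(0)     # reconstruction + representation

model = ConvAutoEncoder()
sents = torch.randn(4, 6, 100)                     # 4 conversations, 6 sentences each
recon, conv_vec = model(sents)
loss = nn.functional.mse_loss(recon, sents)        # reconstruction objective
loss.backward()
```

After unsupervised training on the unlabeled conversation corpus, the fixed-length vector conv_vec could serve as the conversation representation fed to a downstream classifier such as the topic recognizer evaluated in the paper.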

Keywords

Auto-encoder · LSTM · Conversation representation

Acknowledgements

This work was supported in part by the National 863 Program of China (2015AA015405) and the National Natural Science Foundation of China (61473101 and 61272383).

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Xiaoqiang Zhou (1)
  • Baotian Hu (1)
  • Qingcai Chen (1)
  • Xiaolong Wang (1)

  1. Intelligent Computing Research Center, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China