
A Case Study of Closed-Domain Response Suggestion with Limited Training Data

  • Lukas Galke
  • Gunnar Gerstenkorn
  • Ansgar Scherp
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 903)

Abstract

We analyze the problem of response suggestion in a closed domain, based on a real-world scenario from a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that when training data is limited, retrieval-based methods that strive to find similar, known contexts are preferable to parametric approaches from the conditioned-generation family. We do, however, identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the limited training data.
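
To illustrate the retrieval-based family of approaches compared in the abstract, the sketch below indexes question-answer pairs with TF-IDF and, for a new question, suggests the stored answer whose question is most similar. This is a minimal sketch under assumed toy data; the hand-written pairs, the scikit-learn tooling, and the helper `suggest_response` are illustrative assumptions, not the authors' pipeline or dataset.

```python
# Minimal sketch of a retrieval-based response suggestion baseline:
# index known question-answer pairs with TF-IDF and, for a new question,
# return the answer attached to the most similar known question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical question-answer pairs (in the paper's setting, these would be
# extracted from library chat transcripts by the text-processing pipeline).
qa_pairs = [
    ("How do I renew a borrowed book?",
     "You can renew it in your library account under 'Loans'."),
    ("Where can I find economics journals?",
     "Use the search portal and filter the results by 'Journal'."),
    ("Do you offer interlibrary loan?",
     "Yes, requests can be placed via the interlibrary loan form."),
]
questions = [q for q, _ in qa_pairs]
answers = [a for _, a in qa_pairs]

# Fit a TF-IDF representation on the known questions (the retrieval index).
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
question_matrix = vectorizer.fit_transform(questions)

def suggest_response(new_question: str) -> str:
    """Return the stored answer whose question is most similar to the input."""
    query_vec = vectorizer.transform([new_question])
    similarities = cosine_similarity(query_vec, question_matrix)[0]
    best_idx = similarities.argmax()
    return answers[best_idx]

if __name__ == "__main__":
    # Suggests the answer of the most similar stored question.
    print(suggest_response("Can I extend the loan period for my book?"))
```

The same nearest-neighbour idea underlies the retrieval-based baselines discussed in the abstract; only the similarity function and the source of the question-answer pairs change.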


Acknowledgements

This research was co-financed by the EU H2020 project MOVING (see footnote 10) under contract no. 693092. We thank Nicole Krueger from ZBW for providing the chat transcripts and for helpful discussions on requirements and possible applications.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. University of Kiel, Kiel, Germany
  2. University of Potsdam, Potsdam, Germany
  3. University of Stirling, Stirling, Scotland, UK
