Abstract
As an indispensable factor in the human-computer interaction experience, emotional cognitive behavior in dialogue has attracted widespread attention from researchers. However, existing emotional dialogue generation models tend to produce generic, universal responses. To address this problem, this paper proposes a topical and emotional chatting machine (TECM) that generates responses that are both high-quality and emotional. TECM uses the information obtained by a topic model as prior knowledge to guide response generation, feeding the topic information into a topic attention mechanism to improve response quality. TECM also adopts emotion category embedding to generate emotional responses. An empirical study on automatic evaluation metrics shows that TECM generates diverse, informative, and emotional responses.
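The two mechanisms named in the abstract can be illustrated with a minimal sketch: a decoder state attends jointly over the encoder's hidden states and a set of topic-word embeddings (topic attention), while an emotion-category embedding is concatenated to each decoder input. This is not the authors' implementation; the function names, dimensions, and the use of plain NumPy are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_attention(dec_state, enc_states, topic_embs, W_msg, W_topic):
    """Score the decoder state against both message hidden states and
    topic-word embeddings, then mix all of them into one context vector."""
    scores = np.concatenate([
        enc_states @ W_msg @ dec_state,    # attention scores over the message
        topic_embs @ W_topic @ dec_state,  # attention scores over topic words
    ])
    weights = softmax(scores)              # one distribution over both sources
    values = np.vstack([enc_states, topic_embs])
    return weights @ values                # weighted sum -> context vector

def decoder_input(word_emb, emotion_emb):
    """Concatenate the word embedding with the emotion-category embedding,
    so every decoding step is conditioned on the target emotion."""
    return np.concatenate([word_emb, emotion_emb])

# Toy dimensions: 5 message tokens, 3 topic words, hidden size 4, emotion size 2.
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 4))
topics = rng.standard_normal((3, 4))
state = rng.standard_normal(4)
ctx = topic_attention(state, enc, topics, np.eye(4), np.eye(4))
x = decoder_input(rng.standard_normal(4), np.ones(2))
```

Here `ctx` has the hidden dimension (4) and `x` the word-plus-emotion dimension (6); in a full model both would feed the next GRU/LSTM decoder step.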
Acknowledgments
The work presented in this paper is partially supported by the Major Projects of the National Social Science Foundation of China under Grant No. 11&ZD189.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Zhou, Z., Liu, M., Zhang, Z., Fu, Y., Xiang, J. (2019). Generating Topical and Emotional Responses Using Topic Attention. In: Kato, M., Liu, Y., Kando, N., Clarke, C. (eds) NII Testbeds and Community for Information Access Research. NTCIR 2019. Lecture Notes in Computer Science(), vol 11966. Springer, Cham. https://doi.org/10.1007/978-3-030-36805-0_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36804-3
Online ISBN: 978-3-030-36805-0
eBook Packages: Computer Science, Computer Science (R0)