Semi-interactive Attention Network for Answer Understanding in Reverse-QA

  • Qing Yin
  • Guan Luo
  • Xiaodong Zhu
  • Qinghua Hu
  • Ou Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11440)

Abstract

Question answering (QA) is an important natural language processing (NLP) task and has received much attention in academic research and industry communities. Existing QA studies assume that questions are raised by humans and answers are generated by machines. Nevertheless, in many real applications, machines are also required to determine human needs or perceive human states. In such scenarios, machines may proactively raise questions and humans supply answers; the machine must then understand the true meaning of these answers. This new QA mode is called reverse-QA (rQA) throughout this paper. In this work, the answer-understanding problem is investigated and solved by classifying answers into predefined answer-label categories (e.g., True, False, Uncertain). To explore the relationships between questions and answers, we use the interactive attention network (IAN) model and propose an improved structure called the semi-interactive attention network (Semi-IAN). Two Chinese data sets for rQA are compiled. We evaluate several conventional text classification models for comparison, and the experimental results indicate the promising performance of our proposed models.
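To make the setting concrete, the following is a minimal sketch of the Semi-IAN idea, written from the abstract alone: two LSTM encoders for the question and the answer, but, unlike IAN's two-way attention, only the pooled question vector attends over the answer's hidden states before classification into the answer labels {True, False, Uncertain}. All layer sizes, the pooling choice, and the exact attention wiring are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch of a semi-interactive attention classifier for rQA.
# Assumption: "semi-interactive" = one attention direction only
# (question -> answer), versus IAN's mutual question<->answer attention.
import torch
import torch.nn as nn

class SemiIAN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, n_labels=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.q_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.a_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.fc = nn.Linear(2 * hid_dim, n_labels)  # True / False / Uncertain

    def forward(self, q_ids, a_ids):
        q_h, _ = self.q_lstm(self.emb(q_ids))   # (B, Lq, H)
        a_h, _ = self.a_lstm(self.emb(a_ids))   # (B, La, H)
        q_vec = q_h.mean(dim=1)                  # pooled question vector
        # One-directional attention: the pooled question vector scores
        # each answer position via a dot product.
        scores = torch.bmm(a_h, q_vec.unsqueeze(2))   # (B, La, 1)
        alpha = torch.softmax(scores, dim=1)          # attention weights
        a_vec = (alpha * a_h).sum(dim=1)              # attended answer summary
        return self.fc(torch.cat([q_vec, a_vec], dim=-1))

# Toy usage: a batch of 2 (question, answer) pairs of token ids.
model = SemiIAN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 6)), torch.randint(0, 5000, (2, 4)))
print(logits.shape)  # torch.Size([2, 3]) -> one score per answer label
```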

Keywords

Question answering · Reverse-QA · Attention · LSTM

Acknowledgments

This work was partially supported by NSFC (grants 61673377 and 61732011) and Tianjin AI Funding (17ZXRGGX00150).

References

  1. Kumar, A., et al.: Ask me anything: dynamic memory networks for natural language processing. In: ICML, pp. 1378–1387 (2016)
  2. Hixon, B., Clark, P., Hajishirzi, H.: Learning knowledge graphs for question answering through conversational dialog. In: NAACL-HLT, pp. 851–861 (2015)
  3. Tan, M., Santos, C., Xiang, B., Zhou, B.: LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108 (2015)
  4. Xiong, C., Zhong, V., Socher, R.: Dynamic coattention networks for question answering. In: ICLR (2017)
  5. Richardson, M., Burges, C.J.C., Renshaw, E.: MCTest: a challenge dataset for the open-domain machine comprehension of text. In: EMNLP, pp. 193–203 (2013)
  6. Wang, H., Bansal, M., Gimpel, K., McAllester, D.: Machine comprehension with syntax, frames, and semantics. In: ACL-IJCNLP, pp. 700–706 (2015)
  7. Chen, D., Bolton, J., Manning, C.D.: A thorough examination of the CNN/Daily Mail reading comprehension task. In: ACL (2016)
  8. Hill, F., Bordes, A., Chopra, S., Weston, J.: The Goldilocks principle: reading children's books with explicit memory representations. In: ICLR (2016)
  9. Kadlec, R., Schmid, M., Bajgar, O., Kleindienst, J.: Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 (2016)
  10. Bao, J., Duan, N., Yan, Z., Zhou, M., Zhao, T.: Constraint-based question answering with knowledge graph. In: COLING, pp. 2503–2514 (2016)
  11. Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: a neural-based approach to answering questions about images. In: ICCV, pp. 1–9 (2015)
  12. Sasaki, M., Kita, K.: Rule-based text categorization using hierarchical categories. In: IEEE International Conference on SMC, pp. 2827–2830 (1998)
  13. Kiritchenko, S., Zhu, X., Cherry, C., Mohammad, S.M.: NRC-Canada-2014: detecting aspects and sentiment in customer reviews. In: SemEval, pp. 437–442 (2014)
  14. Deng, Z., Zhu, X., Cheng, D., Zong, M., Zhang, S.: Efficient kNN classification algorithm for big data. Neurocomputing 195, 143–148 (2016)
  15. Lipps, O., Pekari, N., Roberts, C.: Undercoverage and nonresponse in a list-sampled telephone election survey. J. Eur. Surv. Res. Assoc. 9(2), 71–82 (2015)
  16. Zhang, X., Zhao, J., LeCun, Y.: Character-level convolutional networks for text classification. In: NIPS, pp. 649–657 (2015)
  17. Tang, D., Qin, B., Liu, T.: Document modeling with gated recurrent neural network for sentiment classification. In: EMNLP, pp. 1422–1432 (2015)
  18. Ma, D., Li, S., Zhang, X., Wang, H.: Interactive attention networks for aspect-level sentiment classification. In: IJCAI, pp. 4068–4074 (2017)
  19. Zhang, L., Wang, S., Liu, B.: Deep learning for sentiment analysis: a survey. WIREs Data Min. Knowl. Discov. 8(4), e1253 (2018)
  20. Mullen, T., Collier, N.: Sentiment analysis using support vector machines with diverse information sources. In: EMNLP, pp. 412–418 (2004)
  21. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: EMNLP, pp. 1532–1543 (2014)
  22. Wu, O., Yang, T., Yang, M., Li, M.: ρ-hot lexical embedding-based two-level LSTM for sentiment analysis. arXiv preprint arXiv:1803.07771 (2018)
  23. Wang, B., Lu, W.: Learning latent opinions for aspect-level sentiment classification. In: AAAI (2018)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Qing Yin (1)
  • Guan Luo (2)
  • Xiaodong Zhu (3)
  • Qinghua Hu (1)
  • Ou Wu (1, corresponding author)
  1. Tianjin University, Tianjin, China
  2. NLPR, Chinese Academy of Sciences, Beijing, China
  3. University of Shanghai for Science and Technology, Shanghai, China
