Combine Non-text Features with Deep Learning Structures Based on Attention-LSTM for Answer Selection

  • Chang’e Jia
  • Chengjie Sun
  • Bingquan Liu
  • Lei Lin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10390)


Because of the lexical gap between questions and candidate answers, methods that rely only on word features cannot solve the Answer Selection (AS) problem well. In this paper, we apply an attention-based LSTM model to extract the latent semantic information of sentences and propose a method for learning non-text features. In addition, we propose an index to evaluate the ranking ability of models that share the same accuracy value. Our model achieves better accuracy and F1 performance than other known models, and its ranking results (the MAP, AvgRec and MRR indices) are second only to the KeLP and Beihang-MSRA systems on SemEval-2017 Task 3 Subtask A.


LSTMs · Attention · Answer selection · Non-text features
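The abstract describes attention over LSTM hidden states combined with non-text features, but the paper's exact architecture is not given here. As a minimal illustrative sketch (the function names, the softmax attention form, the logistic output layer, and the example non-text features are all assumptions, not the authors' specification), attention pooling over an answer's hidden states followed by concatenation with non-text features could look like:

```python
import numpy as np

def attention_pool(H, q):
    # H: (T, d) LSTM hidden states for the answer; q: (d,) question summary vector.
    scores = H @ q                       # one unnormalized attention score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over time steps
    return weights @ H                   # weighted sum: attended answer representation

def score(answer_vec, nontext_feats, W, b):
    # Concatenate the attended text vector with non-text features,
    # then apply a logistic output layer to get a relevance probability.
    x = np.concatenate([answer_vec, nontext_feats])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))          # 5 time steps, hidden size 8
q = rng.standard_normal(8)
nontext = np.array([0.3, 1.0])           # hypothetical features, e.g. author reputation, answer position
W = rng.standard_normal(10)
p = score(attention_pool(H, q), nontext, W, b=0.0)
print(0.0 < p < 1.0)  # True: the score is a probability in (0, 1)
```

In a real system the hidden states `H` would come from a trained LSTM encoder and `W`, `b` would be learned jointly; candidate answers are then ranked by `p`, which is what MAP, AvgRec and MRR evaluate.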



This work is sponsored by the National High Technology Research and Development Program of China (2015AA015405) and National Natural Science Foundation of China (61572151 and 61602131).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Chang’e Jia¹
  • Chengjie Sun¹
  • Bingquan Liu¹
  • Lei Lin¹
  1. Harbin Institute of Technology, Harbin, China
