
MFM: A Multi-level Fused Sequence Matching Model for Candidates Filtering in Multi-paragraphs Question-Answering

  • Conference paper

In: Advances in Multimedia Information Processing – PCM 2018 (PCM 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11166)

Abstract

Text-based question answering (QA) is a popular application in multimedia environments. In this paper, we focus on multi-paragraph QA systems, which retrieve many candidate paragraphs and feed them into an extraction module that locates answers within those paragraphs. However, according to our observations, many candidate paragraphs contain no real answer. To filter out these paragraphs, we propose a multi-level fused sequence matching (MFM) model based on deep networks. We then construct a distant-supervision dataset from Wikipedia and carry out several experiments on it. We also evaluate our model on another popular sequence matching dataset. Experiments show that our MFM model outperforms recent models not only on filtering candidates in the multi-paragraph QA task but also on the general sequence matching task.
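The distant-supervision construction described above can be sketched as follows: a retrieved candidate paragraph is treated as a positive example for the filtering model only if it actually contains the gold answer string. This is a minimal illustration of that labeling heuristic, not the paper's released code; the function and field names are hypothetical.

```python
def label_candidates(question, answer, paragraphs):
    """Distantly label retrieved paragraphs: 1 if the answer string
    appears in the paragraph (case-insensitive), else 0."""
    examples = []
    for p in paragraphs:
        label = 1 if answer.lower() in p.lower() else 0
        examples.append({"question": question, "paragraph": p, "label": label})
    return examples

pairs = label_candidates(
    "Who wrote Hamlet?",
    "William Shakespeare",
    ["Hamlet is a tragedy written by William Shakespeare.",
     "The Globe Theatre was a playhouse in London."],
)
print([e["label"] for e in pairs])  # → [1, 0]
```

Such string-containment labels are noisy (a paragraph may mention the answer without supporting it), which is precisely why a learned sequence matching model is needed on top of them.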


Notes

  1. https://www.wikipedia.org.

  2. https://github.com/facebookresearch/DrQA.


Author information


Correspondence to Zhen Huang.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Y. et al. (2018). MFM: A Multi-level Fused Sequence Matching Model for Candidates Filtering in Multi-paragraphs Question-Answering. In: Hong, R., Cheng, WH., Yamasaki, T., Wang, M., Ngo, CW. (eds) Advances in Multimedia Information Processing – PCM 2018. PCM 2018. Lecture Notes in Computer Science, vol 11166. Springer, Cham. https://doi.org/10.1007/978-3-030-00764-5_41

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-00764-5_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00763-8

  • Online ISBN: 978-3-030-00764-5

  • eBook Packages: Computer Science (R0)
