Reconstructed Option Rereading Network for Opinion Questions Reading Comprehension

  • Conference paper
Chinese Computational Linguistics (CCL 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11856)


Abstract

The multiple-choice reading comprehension task has seen a recent surge of popularity; it aims at choosing the correct option from a set of candidates for a question about a related passage. Previous work focuses on factoid-based questions but ignores opinion-based questions, whose options are usually short sentiment phrases such as "Good" or "Bad". As a result, previous approaches fail to model the interactive information among passage, question, and options, because they rest on the premise that options contain rich semantic information. To this end, we propose a Reconstructed Option Rereading Network (RORN). We first reconstruct each option based on the question. The model then uses the reconstructed options to generate option representations. Finally, these are fed into a max-pooling layer to obtain a ranking score for each option. Experiments show that our proposed model achieves state-of-the-art performance on the Chinese opinion-question machine reading comprehension dataset from the AI Challenger competition.
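As a rough illustration of the idea in the abstract (not the authors' actual neural architecture), the sketch below uses hypothetical toy word vectors in place of a learned encoder: a bare sentiment option such as "Good" is reconstructed by fusing it with the question so it carries enough semantics to match against, then encoded, max-pooled over time, and scored. The vocabulary, vectors, and scoring weights are all invented for illustration.

```python
# Hypothetical 2-d word vectors standing in for a learned encoder.
TOY_VECS = {
    "how": [0.2, 0.3],
    "is": [0.2, 0.2],
    "the": [0.1, 0.1],
    "hotel": [0.5, 0.4],
    "good": [0.9, 0.1],
    "bad": [0.1, 0.9],
}

# Toy scoring weights standing in for the learned ranking layer.
SCORE_W = [1.0, -1.0]

def reconstruct_option(question: str, option: str) -> str:
    """Fuse the bare sentiment phrase with the question (naive
    concatenation; the paper's actual reconstruction rule may differ)."""
    return f"{question} {option}"

def encode(text: str) -> list:
    """Look up a toy vector per word (zeros for unknown words)."""
    return [TOY_VECS.get(w.lower(), [0.0, 0.0]) for w in text.split()]

def max_pool(vectors: list) -> list:
    """Element-wise max over the sequence, as in the final pooling layer."""
    return [max(dims) for dims in zip(*vectors)]

def rank_options(question: str, options: list) -> str:
    """Score each reconstructed option and return the top-ranked one."""
    def score(opt):
        pooled = max_pool(encode(reconstruct_option(question, opt)))
        return sum(p * w for p, w in zip(pooled, SCORE_W))
    return max(options, key=score)

print(rank_options("How is the hotel", ["Good", "Bad"]))  # prints: Good
```

Reconstructing "Good" as "How is the hotel Good" gives the option enough context for pooling and scoring to distinguish it from "Bad"; in the paper this role is played by learned representations rather than fixed vectors.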


Notes

  1. https://challenger.ai/competition/oqmrc2018.

  2. The dataset can be downloaded from https://challenger.ai/competition/oqmrc2018.

  3. https://github.com/pytorch/pytorch.

  4. https://github.com/huggingface/pytorch-pretrained-BERT.

  5. https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip.

  6. https://github.com/AIChallenger/AI_Challenger_2018.

  7. https://github.com/HKUST-KnowComp/R-Net.


Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 61772135, No. U1605251, No. 61533018) and the National Key R&D Program of China (No. 2018YFC0830101). This work was also supported by the Open Project of the Key Laboratory of Network Data Science & Technology of the Chinese Academy of Sciences (No. CASNDST201708 and No. CASNDST201606) and the Open Project of the National Laboratory of Pattern Recognition at the Institute of Automation of the Chinese Academy of Sciences (201900041).

Author information


Correspondence to Kang Liu.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Qiu, D. et al. (2019). Reconstructed Option Rereading Network for Opinion Questions Reading Comprehension. In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds) Chinese Computational Linguistics. CCL 2019. Lecture Notes in Computer Science (LNAI), vol. 11856. Springer, Cham. https://doi.org/10.1007/978-3-030-32381-3_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-32381-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32380-6

  • Online ISBN: 978-3-030-32381-3

  • eBook Packages: Computer Science, Computer Science (R0)
