Legal Question Answering System Using FrameNet

  • Conference paper
New Frontiers in Artificial Intelligence (JSAI-isAI 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11717)


Abstract

A central issue in yes/no question answering is how to use a knowledge source given a question. While yes/no question answering has been studied for a long time, the legal domain differs substantially from others: legal issues require precise analysis of predicate-argument structures and semantic abstraction of the sentences involved. We have developed a yes/no question answering system for the statute law domain. Our system uses a semantic database based on FrameNet, working with a predicate-argument structure analyzer, to recognize semantic correspondences between given problem sentences and knowledge-source sentences rather than matching surface strings. We applied our system to the COLIEE (Competition on Legal Information Extraction/Entailment) 2018 task. Our frame-based system achieved better average scores than our previous COLIEE 2017 system and obtained the second best score among the Task 4 participants. We confirmed the effectiveness of the frame information on the COLIEE training dataset. These results underline the importance of the points described above and reveal opportunities for further work to improve our system's accuracy.
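To make the matching idea concrete, below is a minimal sketch (not the authors' implementation) of frame-based correspondence between a question sentence and a statute sentence: predicates are abstracted into FrameNet-style frames so that different surface verbs can still match, and the shared frame elements are then compared. The tiny frame lexicon, role names, and pre-parsed structures are hypothetical stand-ins for the output of a real predicate-argument analyzer and of (Japanese) FrameNet.

```python
from dataclasses import dataclass

# Hypothetical mapping from predicate lemmas to FrameNet-style frame names.
FRAME_LEXICON = {
    "buy": "Commerce_buy",
    "purchase": "Commerce_buy",
    "sell": "Commerce_sell",
}

@dataclass
class PredicateArgumentStructure:
    predicate: str              # predicate lemma from the parser
    arguments: dict[str, str]   # frame element name -> filler string

def frame_of(pas: PredicateArgumentStructure) -> str | None:
    """Abstract the predicate into a frame so that e.g. 'buy' and 'purchase' unify."""
    return FRAME_LEXICON.get(pas.predicate)

def frames_correspond(question: PredicateArgumentStructure,
                      statute: PredicateArgumentStructure) -> bool:
    """True if both sentences evoke the same frame and every shared
    frame element is filled compatibly (exact string match here for brevity)."""
    q_frame, s_frame = frame_of(question), frame_of(statute)
    if q_frame is None or q_frame != s_frame:
        return False
    shared = question.arguments.keys() & statute.arguments.keys()
    return all(question.arguments[r] == statute.arguments[r] for r in shared)

# 'purchase' and 'buy' share no surface string but evoke the same frame,
# and their Buyer/Goods fillers agree, so the sentences are judged to correspond.
question = PredicateArgumentStructure("purchase", {"Buyer": "minor", "Goods": "land"})
statute = PredicateArgumentStructure("buy", {"Buyer": "minor", "Goods": "land"})
print(frames_correspond(question, statute))  # True
```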



Acknowledgements

This work was partially supported by MEXT Kakenhi and JST CREST.

Author information


Corresponding author

Correspondence to Yoshinobu Kano.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Taniguchi, R., Hoshino, R., Kano, Y. (2019). Legal Question Answering System Using FrameNet. In: Kojima, K., Sakamoto, M., Mineshima, K., Satoh, K. (eds.) New Frontiers in Artificial Intelligence. JSAI-isAI 2018. Lecture Notes in Computer Science (LNAI), vol. 11717. Springer, Cham. https://doi.org/10.1007/978-3-030-31605-1_15

  • DOI: https://doi.org/10.1007/978-3-030-31605-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31604-4

  • Online ISBN: 978-3-030-31605-1

  • eBook Packages: Computer Science, Computer Science (R0)
