
Abstract

This paper proposes integrating Recognizing Textual Entailment (RTE) with additional information sources to address the Answer Validation task. The additional information used in our participation in the Answer Validation Exercise (AVE 2008) comes from a named-entity (NE) recognizer, a question analysis component, and other sources. We submitted two runs, one for English and one for German, achieving f-measures of 0.64 and 0.61 respectively. Compared with last year's system, which relied purely on the output of the RTE system, the extra information clearly demonstrates its effectiveness.
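The abstract describes combining an RTE judgement with signals from an NE recognizer and a question analysis component. The sketch below is a minimal, hypothetical illustration of that kind of combination; the class, the function names, the score threshold, and the wh-word heuristic are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of combining an RTE
# score with extra signals for answer validation. All names, the threshold,
# and the wh-word heuristic are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    answer: str          # answer string returned by a QA system
    support: str         # supporting snippet (the "text" side of the RTE pair)
    rte_score: float     # entailment confidence from an RTE system, in [0, 1]
    answer_ne_type: str  # NE type of the answer, e.g. "PERSON", "LOCATION"


def expected_ne_type(question: str) -> str:
    """Toy question analysis: map the wh-word to an expected NE type."""
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("where"):
        return "LOCATION"
    if q.startswith("when"):
        return "DATE"
    return "OTHER"


def validate(question: str, cand: Candidate, threshold: float = 0.5) -> bool:
    """Accept an answer only if the RTE score clears the threshold and the
    answer's NE type matches the type the question asks for."""
    type_ok = cand.answer_ne_type == expected_ne_type(question)
    return cand.rte_score >= threshold and type_ok


if __name__ == "__main__":
    q = "Who wrote Faust?"
    c = Candidate(answer="Goethe",
                  support="Faust was written by Johann Wolfgang von Goethe.",
                  rte_score=0.8,
                  answer_ne_type="PERSON")
    print(validate(q, c))  # True: entailed and the NE type matches
```

Used this way, the NE-type check acts as a filter on top of the entailment decision, which is one plausible reading of how the extra information supplements the RTE output.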

Keywords

Question Answering · Computational Linguistics · Working Note · Answer Validation · Information Synthesis



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Rui Wang (1)
  • Günter Neumann (2)
  1. Saarland University, Saarbrücken, Germany
  2. LT-Lab, DFKI, Saarbrücken, Germany
