Overview of the Answer Validation Exercise 2008

  • Álvaro Rodrigo
  • Anselmo Peñas
  • Felisa Verdejo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5706)

Abstract

The Answer Validation Exercise (AVE) at the Cross-Language Evaluation Forum (CLEF) aims at developing systems able to decide whether the answer returned by a Question Answering (QA) system is correct. We present here the exercise description, the changes in the evaluation with respect to the previous edition, and the results of this third edition (AVE 2008). Last year's changes allowed us to measure the potential gain in performance obtained by using Answer Validation (AV) systems as the answer-selection method of QA systems. In this edition we additionally wanted to reward AV systems able to detect when all the candidate answers to a question are incorrect. Nine groups participated with 24 runs in five different languages, and, compared with the QA systems, the results show evidence of the potential gain that more sophisticated AV modules might bring to the QA task.
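The selection setting described above can be sketched in a few lines. The following is an illustrative example only, not the authors' or any participant's implementation: an AV module is assumed to assign each candidate answer a confidence score, and the selection step picks the highest-scoring candidate, or rejects all candidates when even the best score falls below a threshold (mirroring the edition's reward for detecting that every candidate answer is incorrect). The function name, score range, and threshold are all hypothetical.

```python
def select_answer(scored_candidates, threshold=0.5):
    """Return the best candidate answer, or None when all should be rejected.

    scored_candidates: list of (answer_text, validation_score) pairs, where
    validation_score is an assumed AV confidence in [0, 1].
    """
    if not scored_candidates:
        return None
    best_answer, best_score = max(scored_candidates, key=lambda pair: pair[1])
    # Reject every candidate when even the best one is below the threshold,
    # i.e. the module judges that no correct answer is among the candidates.
    return best_answer if best_score >= threshold else None

# One question, three candidate answers from different QA streams:
print(select_answer([("Paris", 0.9), ("Lyon", 0.3), ("Madrid", 0.1)]))  # Paris
print(select_answer([("Lyon", 0.3), ("Madrid", 0.1)]))                  # None
```

The second call illustrates the case this edition emphasizes: the module rejects all candidates rather than being forced to pick one.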

Keywords

Correct Answer · Question Answering · Test Collection · Name Entity · Question Group
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Álvaro Rodrigo (1)
  • Anselmo Peñas (1)
  • Felisa Verdejo (1)

  1. Dpto. Lenguajes y Sistemas Informáticos, UNED, Spain