Exeter at CLEF 2003: Experiments with Machine Translation for Monolingual, Bilingual and Multilingual Retrieval

  • Adenike M. Lam-Adesina
  • Gareth J. F. Jones
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3237)

Abstract

The University of Exeter group participated in the monolingual, bilingual and multilingual-4 retrieval tasks at CLEF 2003. The main focus of our investigation was the small multilingual task, which comprised four languages: French, German, Spanish and English. We adopted a document translation strategy and tested four merging techniques for combining results from the separate document collections, as well as a merged-collection strategy. For both the monolingual and bilingual tasks we explored the use of a parallel collection for query expansion and term weighting, and also experimented with extending synonym information to conflate British and American English word spellings.
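
The abstract does not spell out the four merging techniques; the Python sketch below is only a rough illustration of the kind of collection-merging baselines that multilingual experiments of this type typically compare (normalised-score merging and round-robin interleaving). The function names, the min-max normalisation and the cut-off k are assumptions made for this example, not the authors' method.

```python
# Illustrative sketch (not the authors' code): two simple ways to merge ranked
# result lists from separate document collections into one multilingual ranking.
# Score scales, collection names and the normalisation scheme are assumptions.

def normalise_scores(run):
    """Min-max normalise retrieval scores within one collection's run."""
    scores = [score for _, score in run]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [(doc_id, 1.0) for doc_id, _ in run]
    return [(doc_id, (score - lo) / (hi - lo)) for doc_id, score in run]

def merge_by_normalised_score(runs, k=1000):
    """Pool all runs and rank by normalised score."""
    pooled = []
    for run in runs:
        pooled.extend(normalise_scores(run))
    pooled.sort(key=lambda pair: pair[1], reverse=True)
    return pooled[:k]

def merge_round_robin(runs, k=1000):
    """Interleave runs rank by rank, taking one document from each in turn."""
    merged, depth = [], 0
    while len(merged) < k and any(depth < len(run) for run in runs):
        for run in runs:
            if depth < len(run) and len(merged) < k:
                merged.append(run[depth])
        depth += 1
    return merged

# Example: tiny per-collection runs of (doc_id, score) pairs.
english = [("EN-1", 14.2), ("EN-2", 11.7)]
french  = [("FR-9", 7.3), ("FR-4", 6.9)]
german  = [("DE-3", 21.0), ("DE-8", 18.5)]

print(merge_by_normalised_score([english, french, german], k=4))
print(merge_round_robin([english, french, german], k=4))
```

Raw-score merging and weighted variants fit the same interface: replace normalise_scores with the identity, or scale each collection's scores by a per-collection weight before pooling.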

Keywords

Machine Translation · Average Precision · Query Expansion · Retrieval Result · Term Weight

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Adenike M. Lam-Adesina (1)
  • Gareth J. F. Jones (1)
  1. Department of Computer Science, University of Exeter, United Kingdom
