Océ at CLEF 2003

  • Roel Brand
  • Marvin Brünner
  • Samuel Driessen
  • Pascha Iljin
  • Jakob Klok
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3237)

Abstract

This report describes the work done at Océ Research for the Cross-Language Evaluation Forum (CLEF) 2003. This year we participated in seven monolingual tasks (all languages except Russian). We developed a generic probabilistic model that ranks documents without using global statistics from the document collection: the relevance of a document to a given query is calculated from the term frequencies of the query terms in the document and the length of the document. We used the BM25 model, our new probabilistic model and (for Dutch only) a statistical model to rank documents. Our main goals were to compare the BM25 model with our probabilistic model, and to evaluate the performance of a statistical model that uses 'knowledge' from relevance assessments of previous years. Furthermore, we comment on the standard performance measures used in CLEF.
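The abstract's exact scoring formula is not reproduced here, so the sketch below is only illustrative: it shows a BM25-style term weight in which the collection-dependent quantities (IDF and the average document length) are dropped or replaced by a fixed reference length, matching the abstract's claim of ranking from within-document term frequencies and document length alone. The function name, parameter values and `ref_len` constant are assumptions, not taken from the paper.

```python
from collections import Counter

def collection_free_score(query_terms, doc_terms, k1=1.2, b=0.75, ref_len=300.0):
    """Rank a document using only its own term frequencies and length.

    Unlike standard BM25, no IDF weight and no collection average
    document length are used; `ref_len` is a fixed constant standing
    in for the average length (an assumption for this sketch).
    """
    tf = Counter(doc_terms)          # within-document term frequencies
    dl = len(doc_terms)              # document length in tokens
    norm = k1 * (1 - b + b * dl / ref_len)
    # BM25-style tf saturation, summed over matching query terms only
    return sum(tf[t] * (k1 + 1) / (tf[t] + norm)
               for t in query_terms if tf[t] > 0)

doc = "the printer scans and prints colour pages".split()
print(collection_free_score(["printer", "colour"], doc))  # positive score
print(collection_free_score(["network"], doc))            # 0.0: no query term occurs
```

Because every quantity is local to the document, such a score can be computed at indexing or query time without a pass over the whole collection, which is the property the abstract emphasises.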

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Roel Brand (1)
  • Marvin Brünner (1)
  • Samuel Driessen (1)
  • Pascha Iljin (1)
  • Jakob Klok (1)

  1. Océ-Technologies B.V., Venlo, The Netherlands