Merging Results by Predicted Retrieval Effectiveness

  • Wen-Cheng Lin
  • Hsin-Hsi Chen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3237)

Abstract

In this paper we propose several merging strategies to integrate the result lists of the intermediate runs in distributed multilingual information retrieval (MLIR). The predicted retrieval effectiveness is used to adjust the similarity scores of documents in the result lists. We introduce three factors that affect retrieval effectiveness: the degree of translation ambiguity, the number of unknown words, and the number of relevant documents in a collection for a given query. The results show that normalized-by-top-k merging with a translation penalty and collection weight outperforms the other merging strategies, except for raw-score merging.
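The abstract describes the winning strategy only at a high level: normalize each run's scores by its top-k documents, then down-weight runs whose queries suffer from translation ambiguity or unknown words, and up-weight collections predicted to hold more relevant documents. The sketch below illustrates that idea; the specific formulas, the `penalty` and `cweight` inputs, and the `merge_runs` helper are assumptions for illustration, not the authors' published equations.

```python
# Hypothetical sketch of "normalized-by-top-k" merging with a
# translation penalty and a collection weight. The exact scoring
# formulas are assumed, not taken from the paper.

def merge_runs(runs, top_k=10):
    """Merge per-collection result lists into one ranked list.

    runs: list of dicts with keys:
      'results' - list of (doc_id, score) pairs, sorted by score desc
      'penalty' - translation penalty in (0, 1]; lower means more
                  translation ambiguity / more unknown words (assumed)
      'cweight' - collection weight reflecting the predicted number of
                  relevant documents in that collection (assumed)
    """
    merged = []
    for run in runs:
        results = run['results']
        if not results:
            continue
        # Normalize by the mean score of the top-k documents so that
        # scores from different retrieval runs become comparable.
        top_scores = [score for _, score in results[:top_k]]
        norm = sum(top_scores) / len(top_scores)
        for doc_id, score in results:
            adjusted = (score / norm) * run['penalty'] * run['cweight']
            merged.append((doc_id, adjusted))
    # Final ranking over all collections by adjusted score.
    merged.sort(key=lambda pair: pair[1], reverse=True)
    return merged
```

Note that without the penalty and collection weight this reduces to plain score normalization, which is why a run with inflated raw scores (here the second run) does not dominate the merged list.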

Keywords

Similarity Score, Relevant Document, Document Collection, Retrieval Performance, Query Term
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Wen-Cheng Lin¹
  • Hsin-Hsi Chen¹

  1. Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan