Abstract
Sheffield’s contribution to the interactive cross-language information retrieval track took the approach of comparing users’ ability to judge the relevance of machine-translated French documents against that for documents written in the users’ native language, English. Conducting such an experiment is challenging, and the issues surrounding the experimental design are discussed. The experimental results strongly suggest that users are just as capable of judging the relevance of translated documents as they are of documents in their native language.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Sanderson, M., Bathie, Z. (2002). iCLEF at Sheffield. In: Peters, C., Braschler, M., Gonzalo, J., Kluck, M. (eds) Evaluation of Cross-Language Information Retrieval Systems. CLEF 2001. Lecture Notes in Computer Science, vol 2406. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45691-0_32
Print ISBN: 978-3-540-44042-0
Online ISBN: 978-3-540-45691-9