Explaining User Performance in Information Retrieval: Challenges to IR Evaluation

  • Conference paper
Advances in Information Retrieval Theory (ICTIR 2009)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5766)

Included in the following conference series: International Conference on the Theory of Information Retrieval (ICTIR)

Abstract

The paper makes three points of significance for IR research: (1) The Cranfield paradigm of IR evaluation seems to lose power when one looks at human rather than system performance. (2) Searchers using IR systems in real life issue rather short queries, which individually often perform poorly. When used in sessions, however, they may be surprisingly effective. The searchers’ strategies have not been sufficiently described and therefore cannot be properly understood, supported, or evaluated. (3) Searchers in real life seek to optimize the entire information access process, not just result quality. Evaluation of output alone is insufficient to explain searcher behavior.

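Point (2) contrasts individually weak queries with surprisingly effective sessions, and the contrast is easiest to see when effectiveness is scored at the session level rather than per query. The sketch below is not the paper's own method; it is a minimal illustration, under assumed graded relevance judgments and a log2 rank discount, of how discounted cumulated gain computed for each query separately can differ from the same measure computed over the concatenated results of the whole session (each document counted once). All document identifiers, grades, and result lists are invented for illustration.

```python
import math
from typing import Dict, List


def dcg(ranked_docs: List[str], grades: Dict[str, int], depth: int = 10) -> float:
    """Discounted cumulated gain of a single ranked result list."""
    score = 0.0
    for rank, doc in enumerate(ranked_docs[:depth], start=1):
        score += grades.get(doc, 0) / math.log2(rank + 1)
    return score


def session_dcg(query_results: List[List[str]], grades: Dict[str, int],
                depth: int = 10) -> float:
    """Score a whole session: concatenate the per-query result lists in the
    order the searcher issued the queries, counting each document only once."""
    seen, merged = set(), []
    for results in query_results:
        for doc in results:
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return dcg(merged, grades, depth)


if __name__ == "__main__":
    # Hypothetical graded relevance judgments (0 = non-relevant, 3 = highly relevant).
    grades = {"d3": 3, "d7": 2, "d9": 3}
    # Three short queries from one hypothetical session; each retrieves little on its own.
    session = [
        ["d1", "d2", "d3"],  # query 1: one relevant document, at rank 3
        ["d4", "d7", "d5"],  # query 2: one relevant document, at rank 2
        ["d9", "d6", "d1"],  # query 3: one relevant document, at rank 1
    ]
    for i, results in enumerate(session, start=1):
        print(f"query {i} DCG: {dcg(results, grades):.2f}")
    print(f"session DCG: {session_dcg(session, grades):.2f}")
```

In this toy example each query alone scores modestly, while the session-level view credits the gain the searcher accumulates across the whole interaction, which is the distinction the abstract draws between per-query output quality and session effectiveness.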

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Järvelin, K. (2009). Explaining User Performance in Information Retrieval: Challenges to IR Evaluation. In: Azzopardi, L., et al. Advances in Information Retrieval Theory. ICTIR 2009. Lecture Notes in Computer Science, vol 5766. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04417-5_28

  • DOI: https://doi.org/10.1007/978-3-642-04417-5_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04416-8

  • Online ISBN: 978-3-642-04417-5

  • eBook Packages: Computer Science, Computer Science (R0)
