Overview of the CLEF 2017 Personalised Information Retrieval Pilot Lab (PIR-CLEF 2017)

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10456)

Abstract

The Personalised Information Retrieval Pilot Lab (PIR-CLEF 2017) provides a forum for exploring the evaluation of personalised approaches to information retrieval (PIR). The Pilot Lab offers a preliminary edition of a Lab task dedicated to personalised search. The PIR-CLEF 2017 Pilot Task is the first PIR evaluation benchmark based on the Cranfield paradigm, with the potential benefit of producing evaluation results that are easily reproducible. The task is based on search sessions over a subset of the ClueWeb12 collection, undertaken by 10 users following a clearly defined and novel methodology. The collection provides data gathered from the activities undertaken by each participant during their search sessions, including details of the documents they marked as relevant. The intention of the collection is to allow research groups working on PIR both to experiment with and to provide feedback on our proposed PIR evaluation methodology, with the aim of launching a more formal PIR Lab at CLEF 2018.
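To make the Cranfield-style setup concrete, the sketch below shows how a system's ranking for one session query might be scored against the documents a searcher marked as relevant, using average precision. This is a minimal Python sketch under assumed names: the record fields (user_id, query, relevant_docs) and the document identifiers are illustrative, not the Lab's actual data schema.

    # Minimal sketch of Cranfield-style scoring against searcher relevance
    # marks. All field names and identifiers below are assumptions for
    # illustration, not the PIR-CLEF collection's actual schema.

    def average_precision(ranked_docs, relevant_docs):
        """Average precision of one ranking given a set of relevant doc ids."""
        hits, precision_sum = 0, 0.0
        for rank, doc_id in enumerate(ranked_docs, start=1):
            if doc_id in relevant_docs:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant_docs) if relevant_docs else 0.0

    # One hypothetical session record: a query issued by a participant and
    # the ClueWeb12 documents they marked as relevant during the session.
    session = {
        "user_id": "u01",
        "query": "renewable energy policy",
        "relevant_docs": {"clueweb12-0001", "clueweb12-0042"},
    }

    # A system's personalised ranking for that query, best document first.
    system_ranking = ["clueweb12-0042", "clueweb12-0100", "clueweb12-0001"]

    print(average_precision(system_ranking, session["relevant_docs"]))  # ~0.833

Averaging such per-query scores across all users' sessions would yield a single, reproducible figure of merit, which is the kind of comparison the Cranfield paradigm is intended to enable.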

Author information

Corresponding author

Correspondence to Stefania Marrara.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Pasi, G., Jones, G.J.F., Marrara, S., Sanvitto, C., Ganguly, D., Sen, P. (2017). Overview of the CLEF 2017 Personalised Information Retrieval Pilot Lab (PIR-CLEF 2017). In: Jones, G., et al. Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2017. Lecture Notes in Computer Science, vol 10456. Springer, Cham. https://doi.org/10.1007/978-3-319-65813-1_29

  • DOI: https://doi.org/10.1007/978-3-319-65813-1_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-65812-4

  • Online ISBN: 978-3-319-65813-1

  • eBook Packages: Computer Science, Computer Science (R0)
