Abstract
The evaluation process for the TREC Contextual Suggestion Track consumes substantial time and resources: it runs over several weeks and costs thousands of dollars in assessor remuneration. The track evaluates a point-of-interest recommendation task, using crowdsourced workers as a source of both user profiles and judgments. Given the cost of assessment, we examine track data to provide guidance for future experiments on this task, particularly with respect to the number of assessors required. We first consider the potential impact of using fewer assessors on the TREC 2013 experiments, and then make recommendations for future experiments. Our goal is to minimize costs while still meeting the requirements of those experiments.
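The question of how many assessors an experiment requires is, at heart, a statistical power question: given an expected difference between two systems and the variance in per-assessor scores, how often would a significance test detect that difference? A minimal Monte Carlo sketch of this trade-off is shown below. All parameter values (effect size, standard deviation, assessor counts) are purely illustrative assumptions, not figures taken from the track data, and the normal-approximation cutoff is a simplification of a proper paired t-test.

```python
import random
import statistics

def estimate_power(effect=0.1, sd=0.3, n_assessors=50, alpha_cutoff=1.96, trials=2000):
    """Monte Carlo estimate of the power of a paired comparison of two
    systems, where each assessor contributes one score difference.

    `effect` and `sd` are hypothetical: the true mean and standard
    deviation of per-assessor score differences. `alpha_cutoff` is the
    normal approximation to the two-sided 5% t critical value,
    reasonable once n_assessors is roughly 30 or more.
    """
    random.seed(0)  # fixed seed so the estimate is reproducible
    detections = 0
    for _ in range(trials):
        diffs = [random.gauss(effect, sd) for _ in range(n_assessors)]
        mean = statistics.fmean(diffs)
        se = statistics.stdev(diffs) / n_assessors ** 0.5
        if abs(mean / se) > alpha_cutoff:
            detections += 1
    return detections / trials

# Power rises with the number of assessors, for a fixed hypothetical effect.
for n in (25, 50, 100):
    print(n, round(estimate_power(n_assessors=n), 2))
```

Running the loop with more assessors yields higher power; halving the assessor pool can drop a comfortably powered comparison below conventional thresholds, which is the kind of trade-off the paper examines on real track data.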
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Dean-Hall, A., Clarke, C.L.A. (2015). The Power of Contextual Suggestion. In: Hanbury, A., Kazai, G., Rauber, A., Fuhr, N. (eds) Advances in Information Retrieval. ECIR 2015. Lecture Notes in Computer Science, vol 9022. Springer, Cham. https://doi.org/10.1007/978-3-319-16354-3_39
Print ISBN: 978-3-319-16353-6
Online ISBN: 978-3-319-16354-3