
Human Computation for Information Retrieval


Abstract

Human computation techniques, such as crowdsourcing and games, have demonstrated their ability to accomplish portions of information retrieval (IR) tasks that machine-based techniques find challenging. Query refinement is one such IR task that may benefit from human involvement. We conduct an experiment that evaluates the contributions of participants recruited from Amazon Mechanical Turk (N = 40). Each crowd participant is randomly assigned to one of two query interfaces: a traditional web-based interface or a game-based interface. We ask each participant to manually construct queries in response to a set of OHSUMED information needs, and we calculate the resulting recall and precision. Participants using the web-based interface are given feedback on their initial queries and asked to use this information to reformulate them; participants using the game-based interface receive instant scoring and are asked to refine their queries based on their scores. In our experiment, crowdsourcing-based methods in general provide a significant improvement over machine algorithmic methods, and among the crowdsourcing methods, the game-based interface yields higher mean average precision (MAP) on query reformulations than the non-game interface.
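The abstract's headline comparison rests on mean average precision (MAP) computed over the participants' query runs. As a rough illustration only (not the authors' evaluation code), the following minimal Python sketch computes per-query average precision from ranked result lists and relevance judgments, then averages across queries; the document identifiers and judgments below are hypothetical.

```python
# Minimal sketch of mean average precision (MAP), the metric cited in the abstract.
# The run and judgment data here are hypothetical; the chapter's evaluation uses
# OHSUMED topics and relevance judgments.

def average_precision(ranked_doc_ids, relevant_doc_ids):
    """Average precision for one query: mean of precision@k taken at each relevant hit."""
    if not relevant_doc_ids:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for k, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_doc_ids:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant_doc_ids)

def mean_average_precision(runs, qrels):
    """MAP over all queries.

    runs: query id -> ranked list of retrieved doc ids
    qrels: query id -> set of relevant doc ids
    """
    scores = [average_precision(runs[qid], qrels.get(qid, set())) for qid in runs]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # Toy example with two queries and a handful of judged documents.
    runs = {"q1": ["d3", "d1", "d7"], "q2": ["d2", "d9", "d4"]}
    qrels = {"q1": {"d1", "d3"}, "q2": {"d4"}}
    print(f"MAP = {mean_average_precision(runs, qrels):.3f}")  # 0.667 for this toy data
```

In an evaluation like the one described, a MAP score of this form would be computed separately for the initial and reformulated queries of each interface condition, and the means compared across conditions.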

Keywords

Relevance Feedback, Mean Average Precision, Initial Query, Pseudo Relevance Feedback, Query Reformulation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Ageev M, Guo Q, Lagun D, Agichtein E (2011) Find it if you can: a game for modeling different types of web search success using interaction data. In: Proceedings of SIGIR’11. ACM, New York, pp 345–354
  2. Allan J, Papka R, Lavrenko V (1998) On-line new event detection and tracking. In: Proceedings of SIGIR’98. ACM, New York, pp 37–45
  3. Alonso O, Lease M (2011) Crowdsourcing for information retrieval: principles, methods, and applications. In: Proceedings of SIGIR’11. ACM, New York, pp 1299–1300
  4. Alonso O, Mizzaro S (2012) Using crowdsourcing for TREC relevance assessment. Inf Process Manage 48(6):1053–1066
  5. Anick P (2003) Using terminological feedback for web search refinement: a log-based study. In: Proceedings of SIGIR’03. ACM, New York, pp 88–95
  6. Buckley C, Voorhees EM (2004) Retrieval evaluation with incomplete information. In: Proceedings of SIGIR’04. ACM, New York, pp 25–32
  7. Dasdan A, Drome C, Kolay S, Alpern M, Han A, Chi T, Hoover J, Davtchev I, Verma S (2009) Thumbs-Up: a game for playing to rank search results. In: Proceedings of the ACM SIGKDD workshop on human computation, Paris. ACM, New York, pp 36–37
  8. Efthimiadis EN (2000) Interactive query expansion: a user-based evaluation in a relevance feedback environment. J Am Soc Inf Sci 51(11):989–1003
  9. Harris CG (2012) An evaluation of search strategies for user-generated video content. In: Proceedings of the WWW workshop on crowdsourcing web search, Lyon, France, pp 48–53
  10. Harris CG, Srinivasan P (2012) Applying human computation mechanisms to information retrieval. Proc Am Soc Inf Sci Technol 49(1):1–10
  11. Harris CG, Srinivasan P (2013) Comparing crowd-based, game-based, and machine-based approaches in initial query and query refinement tasks. In: Advances in information retrieval. Springer, Berlin/Heidelberg, pp 495–506
  12. Hersh W, Buckley C, Leone T, Hickam D (1994) OHSUMED: an interactive retrieval evaluation and new large test collection for research. In: Proceedings of SIGIR’94. Springer, London, pp 192–201
  13. Jones KS (1972) A statistical interpretation of term specificity and its application in retrieval. J Doc 28(1):11–21
  14. Law E, von Ahn L, Mitchell T (2009) Search War: a game for improving web search. In: Proceedings of the ACM SIGKDD workshop on human computation, Paris. ACM, New York, p 31
  15. Lease M, Yilmaz E (2012) Crowdsourcing for information retrieval. SIGIR Forum 45(2):66–75
  16. McKibbon KA, Haynes RB, Walker Dilks CJ, Ramsden MF, Ryan NC, Baker L, Flemming T, Fitzgerald D (1990) How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches. Comput Biomed Res 23(6):583–593
  17. Milne D, Nichols DM, Witten IH (2008) A competitive environment for exploratory query expansion. In: Proceedings of the 8th ACM/IEEE-CS joint conference on digital libraries (JCDL’08). ACM, New York, pp 197–200
  18. Robertson SE, Walker S, Jones S, Hancock-Beaulieu MM, Gatford M (1995) Okapi at TREC-3. NIST Special Publication, Gaithersburg, MD, pp 109–121
  19. Ruthven I (2003) Re-examining the potential effectiveness of interactive query expansion. In: Proceedings of SIGIR’03. ACM, New York, pp 213–220
  20. Spink A, Jansen BJ, Wolfram D, Saracevic T (2002) From e-sex to e-commerce: web search changes. Computer 35(3):107–109
  21. Strohman T, Metzler D, Turtle H, Croft WB (2005) Indri: a language model-based search engine for complex queries. Poster, Proceedings of the international conference on intelligence analysis, McLean, VA, 2–6 May 2005
  22. Xu J, Croft WB (1996) Query expansion using local and global document analysis. In: Proceedings of SIGIR’96. ACM, New York, pp 4–11
  23. Yan T, Kumar V, Ganesan D (2010) CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones. In: Proceedings of MobiSys’10. ACM, New York, pp 77–90

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. SUNY Oswego, Oswego, USA
  2. Iowa City, USA
