Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Cost and Quality Trade-Offs in Crowdsourcing

  • Lei Chen
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_80658


Synonyms

Incentive and performance trade-offs; Payment and quality trade-offs


Definition

In crowdsourcing, some tasks are performed by the crowd out of enjoyment [8] or for social reward [6]. However, arbitrary tasks are seldom enjoyable, and social reward is usually associated with specific platforms, such as Wikipedia (https://en.wikipedia.org/wiki/Main_Page) and Stack Overflow (http://stackoverflow.com/). Thus, for an arbitrary task, a requester often needs to offer an incentive (i.e., the cost of the task) to motivate workers to perform it. The cost per task is usually paid as financial compensation, typically a few cents per task. The quality of a crowdsourcing task usually refers to its accuracy: since workers are humans who may make errors when performing tasks, the results returned by the crowd contain errors as a consequence. The trade-offs between cost and quality refer to the relationships between the financial incentive offered and the performance obtained.
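One common way to see this trade-off concretely is redundant assignment with majority voting: paying several workers for the same task raises the expected accuracy of the aggregated answer, but multiplies the cost. The sketch below is illustrative only; it assumes each worker answers a binary task independently with the same accuracy p, and the price of $0.05 per answer is a hypothetical figure, not one from this entry.

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent workers, each
    correct with probability p, yields the right answer for a
    binary task (n is kept odd to avoid ties)."""
    assert n % 2 == 1, "use an odd number of workers to avoid ties"
    k_min = n // 2 + 1  # smallest number of correct answers forming a majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

# Buying more redundant answers improves quality but raises cost linearly.
PRICE_PER_ANSWER = 0.05  # hypothetical: 5 cents per answer
for n in (1, 3, 5, 7):
    cost = n * PRICE_PER_ANSWER
    acc = majority_accuracy(0.7, n)
    print(f"{n} workers: cost ${cost:.2f}, expected accuracy {acc:.3f}")
```

Under these assumptions, quality rises with redundancy but with diminishing returns, while cost grows linearly, which is the core tension between payment and quality that this entry discusses.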

Historical Background



Recommended Reading

  1. Ariely D, Gneezy U, Loewenstein G, Mazar N. Large stakes and big mistakes. Rev Econ Stud. 2009;76:451–69.
  2. Faradani S, Hartmann B, Ipeirotis PG. What's the right price? Pricing tasks for finishing on time. In: Proceedings of the 2011 AAAI Conference on Artificial Intelligence. 2011. p. 26–31.
  3. Gneezy U, Rustichini A. Pay enough or don't pay at all. Q J Econ. 2000;115(3):791–810.
  4. Kazai G. An exploration of the influence that task parameters have on the performance of crowds. In: Proceedings of the First International Conference on Crowdsourcing. 2010.
  5. Mason W, Watts DJ. Financial incentives and the performance of crowds. In: Proceedings of the ACM SIGKDD Workshop on Human Computation. 2009. p. 100–08.
  6. Nov O, Naaman M, Ye C. What drives content tagging: the case of photos on Flickr. In: Proceedings of the ACM Conference on Human Factors in Computing Systems. 2008. p. 1097–1110.
  7. Snow R, O'Connor B, Jurafsky D, Ng AY. Cheap and fast – but is it good?: evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2008. p. 254–63.
  8. von Ahn L. Games with a purpose. Computer. 2006;39(6):92–4.
  9. Xie H, Lui JCS, Jiang JW, Chen W. Incentive mechanism and protocol design for crowdsourcing systems. In: Proceedings of the 2014 Annual Allerton Conference on Communication, Control, and Computing. 2014. p. 140–47.
  10. Xintong G, Hongzhi W, Song Y, Hong G. Brief survey of crowdsourcing for data mining. Expert Syst Appl. 2014;41(17):7987–94.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Hong Kong University of Science and Technology, Hong Kong, China