Synonyms
Incentive and performance trade-offs; Payment and quality trade-offs
Definition
In crowdsourcing, some tasks are performed by the crowd out of enjoyment [8] or for social reward [6]. However, arbitrary tasks are seldom enjoyable, and social reward is usually tied to specific platforms, such as Wikipedia (https://en.wikipedia.org/wiki/Main_Page) and Stack Overflow (http://stackoverflow.com/). Thus, for an arbitrary task, a requester often needs to offer an incentive (i.e., the cost of the task) to motivate workers to perform it. The cost per task is typically paid as a small financial compensation, often a few cents per task. The quality of a crowdsourcing task is usually measured as accuracy: because workers are human and may make mistakes, the results returned by the crowd inevitably contain errors. The trade-off between cost and quality therefore refers to the relationship between the financial incentive offered and the resulting performance.
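One common way to make this trade-off concrete is redundant assignment with majority voting: paying for more answers per task raises cost linearly but improves accuracy with diminishing returns. The sketch below is an illustrative model only (not a method from this entry); it assumes independent workers who each answer a binary task correctly with probability p = 0.7 and a hypothetical payment of $0.02 per answer.

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a majority of n independent workers
    (each correct with probability p) returns the right label.
    Assumes odd n so that no ties can occur."""
    # Sum P(exactly k workers correct) over all k > n/2.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

PRICE = 0.02  # assumed payment per answer, in dollars
for n in (1, 3, 5, 7):
    print(f"workers={n}  cost=${n * PRICE:.2f}  "
          f"accuracy={majority_accuracy(n, 0.7):.3f}")
```

Under these assumptions, accuracy rises from 0.700 (one worker, $0.02) to about 0.874 (seven workers, $0.14), showing why a requester must balance marginal accuracy gains against linearly growing cost.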
Historical Background
Wikip...
Recommended Reading
Ariely D, Gneezy U, Loewenstein G, Mazar N. Large stakes and big mistakes. Rev Econ Stud. 2009;76:451–69.
Faradani S, Hartmann B, Ipeirotis PG. What’s the right price? Pricing tasks for finishing on time. In: Proceedings of the 2011 AAAI Conference on Artificial Intelligence. 2011. p. 26–31.
Gneezy U, Rustichini A. Pay enough or don’t pay at all. Q J Econ. 2000;115(3):791–810.
Kazai G. An exploration of the influence that task parameters have on the performance of crowds. In: Proceedings of the First International Conference on Crowdsourcing. 2010.
Mason W, Watts DJ. Financial incentives and the performance of crowds. In: Proceedings of the ACM SIGKDD Workshop on Human Computation. 2009. p. 100–08.
Nov O, Naaman M, Ye C. What drives content tagging: the case of photos on Flickr. In: Proceedings of the ACM Conference on Human Factors in Computing Systems; 2008. p. 1097–1110.
Snow R, O’Connor B, Jurafsky D, Ng AY. Cheap and fast – but is it good?: evaluating non-expert annotations for natural language tasks. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2008. p. 254–63.
von Ahn L. Games with a purpose. Computer. 2006;39(6):92–4.
Xie H, Lui JCS, Jiang JW, Chen W. Incentive mechanism and protocol design for crowdsourcing systems. In: Proceedings of the 2014 Annual Allerton Conference on Communication, Control, and Computing. 2014. p. 140–47.
Xintong G, Hongzhi W, Song Y, Hong G. Brief survey of crowdsourcing for data mining. Expert Syst Appl. 2014;41(17):7987–94.
Copyright information
© 2018 Springer Science+Business Media, LLC, part of Springer Nature
Cite this entry
Chen, L. (2018). Cost and Quality Trade-Offs in Crowdsourcing. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_80658
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-8266-6
Online ISBN: 978-1-4614-8265-9