Abstract
A crowdsourcing system recruits an undefined group of people to accomplish tasks posted by a requester in a short time and at low cost. Crowdsourcing has made significant contributions in many fields, yet building a crowdsourcing system raises several challenges: how to motivate people, how to decompose and assign tasks, how to control quality, and how to aggregate contributions. In this paper, we explain what crowdsourcing is and propose three necessary characteristics as criteria for judging whether a system is a crowdsourcing system. We then introduce solutions to these challenges. Finally, we discuss where crowdsourcing can be applied and choose two fields to illustrate its usefulness.
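The paper surveys these challenges rather than prescribing a single algorithm; as a minimal, hypothetical sketch of the aggregation step (not a method from this paper), the Python snippet below combines redundant worker answers for one task by majority voting, a common quality-control baseline in crowdsourcing systems. The function name and data layout are assumptions made for illustration.

from collections import Counter

def majority_vote(worker_answers):
    # Aggregate redundant answers for a single task by taking the
    # most frequent label; ties are broken arbitrarily by Counter.
    if not worker_answers:
        raise ValueError("need at least one worker answer")
    (label, _count), = Counter(worker_answers).most_common(1)
    return label

# Example: three workers label the same image; the majority label wins.
print(majority_vote(["cat", "cat", "dog"]))  # prints "cat"

Requesters typically assign each task to several workers precisely so that such redundancy-based aggregation can filter out careless or malicious answers.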
Acknowledgments
This work was partly supported by the National Natural Science Foundation of China (61272243, 61202146, 61003149) and the Shandong Provincial Natural Science Foundation, China (ZR2010FQ011, ZR2012FQ026).
Copyright information
© 2014 Springer Science+Business Media Dordrecht
About this paper
Cite this paper
Yin, X., Liu, W., Wang, Y., Yang, C., Lu, L. (2014). What? How? Where? A Survey of Crowdsourcing. In: Li, S., Jin, Q., Jiang, X., Park, J. (eds) Frontier and Future Development of Information Technology in Medicine and Education. Lecture Notes in Electrical Engineering, vol 269. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7618-0_22
DOI: https://doi.org/10.1007/978-94-007-7618-0_22
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-7617-3
Online ISBN: 978-94-007-7618-0