Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 269)

Abstract

A crowdsourcing system recruits an undefined group of people to accomplish tasks proposed by a requester, quickly and at low cost. Crowdsourcing has made significant contributions in many fields. Building a crowdsourcing system, however, raises many challenges: how to motivate people, how to decompose and assign tasks, how to control quality, and how to aggregate contributions. In this paper, we explain what crowdsourcing is and propose three necessary characteristics as criteria for judging whether a system is a crowdsourcing system. We then introduce solutions to these challenges. Finally, we discuss where crowdsourcing can be applied and choose two fields to illustrate its usefulness.
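
One challenge noted above, aggregating contributions, is commonly handled by assigning the same task to several workers and merging their answers. As a minimal illustrative sketch (not a method taken from this paper; the task IDs and labels are hypothetical), the Python snippet below merges redundant worker answers by simple majority vote:

```python
from collections import Counter

def aggregate_by_majority(answers):
    """Merge redundant worker answers for each task by majority vote.

    answers maps task_id -> list of labels collected from different workers.
    Returns task_id -> (winning_label, agreement_ratio). Ties are broken
    arbitrarily by Counter.most_common.
    """
    results = {}
    for task_id, labels in answers.items():
        winner, votes = Counter(labels).most_common(1)[0]
        results[task_id] = (winner, votes / len(labels))
    return results

# Hypothetical example: three workers label each of two images.
answers = {
    "img-001": ["cat", "cat", "dog"],
    "img-002": ["dog", "dog", "dog"],
}
print(aggregate_by_majority(answers))
# img-001 resolves to 'cat' with 2/3 agreement; img-002 is unanimous.
```

Practical systems usually refine this baseline, for example by weighting each worker's vote by an estimate of their reliability or by screening workers with gold-standard tasks.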



Acknowledgments

This work was partly supported by the National Natural Science Foundation of China (61272243, 61202146, 61003149) and the Shandong Provincial Natural Science Foundation, China (ZR2010FQ011, ZR2012FQ026).

Author information

Corresponding author

Correspondence to Yafang Wang.

Copyright information

© 2014 Springer Science+Business Media Dordrecht

About this paper

Cite this paper

Yin, X., Liu, W., Wang, Y., Yang, C., Lu, L. (2014). What? How? Where? A Survey of Crowdsourcing. In: Li, S., Jin, Q., Jiang, X., Park, J. (eds) Frontier and Future Development of Information Technology in Medicine and Education. Lecture Notes in Electrical Engineering, vol 269. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7618-0_22

  • DOI: https://doi.org/10.1007/978-94-007-7618-0_22

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-007-7617-3

  • Online ISBN: 978-94-007-7618-0

  • eBook Packages: Engineering (R0)
