
Collaborative Majority Vote: Improving Result Quality in Crowdsourcing Marketplaces


Part of the book series: Progress in IS (PROIS)

Abstract

Crowdsourcing markets such as Amazon's Mechanical Turk are designed for the easy distribution of micro-tasks to an on-demand, scalable workforce. Improving the quality of submitted results remains one of the main challenges of quality management in these markets. Although the beneficial effects of synchronous collaboration on the quality of work are well established in other domains, most crowdsourcing platforms do not yet support interaction and collaboration mechanisms, and these are therefore not used as a means of ensuring high-quality task processing. In this chapter, we address this challenge and present a new method that extends majority vote, one of the most widely used quality assurance mechanisms, by enabling workers to interact and communicate during task execution. We illustrate how to apply the method to the basic scenarios of task execution and present the enabling technology for the proposed real-time collaborative extension. We summarize its positive impact on result quality and discuss its limitations.
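
The baseline mechanism that the chapter extends, classical majority vote, is easy to sketch. The Python snippet below is only an illustrative sketch under assumed names (Answer, majority_vote), not the authors' implementation: redundant answers to a single micro-task are aggregated by picking the most frequent label, and the proposed collaborative extension would apply the same aggregation after workers have had the chance to discuss and revise their answers in real time.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Iterable, Tuple


@dataclass
class Answer:
    worker_id: str
    label: str  # the worker's response to a single micro-task


def majority_vote(answers: Iterable[Answer]) -> Tuple[str, float]:
    """Return the most frequent label and the share of workers who chose it."""
    counts = Counter(a.label for a in answers)
    label, votes = counts.most_common(1)[0]
    return label, votes / sum(counts.values())


# Baseline: three workers answer the same micro-task independently.
independent = [Answer("w1", "cat"), Answer("w2", "dog"), Answer("w3", "cat")]
print(majority_vote(independent))  # -> ('cat', 0.666...)

# Collaborative variant (hypothetical flow): after a real-time discussion
# phase, workers may revise their answers; the same aggregation is then
# applied to the revised set.
revised = [Answer("w1", "cat"), Answer("w2", "cat"), Answer("w3", "cat")]
print(majority_vote(revised))  # -> ('cat', 1.0)
```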

Corresponding author

Correspondence to Khrystyna Nordheimer.

Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

Cite this chapter

Nordheimer, D., Nordheimer, K., Schader, M., Korthaus, A. (2015). Collaborative Majority Vote: Improving Result Quality in Crowdsourcing Marketplaces. In: Li, W., Huhns, M., Tsai, WT., Wu, W. (eds) Crowdsourcing. Progress in IS. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47011-4_8
