
Human Factors Modeling in Crowdsourcing

Reference work entry in Encyclopedia of Database Systems


Recommended Reading

  1. Davidson SB, Khanna S, Milo T, Roy S. Using the crowd for top-k and group-by queries. In: Proceedings of the 16th International Conference on Database Theory; 2013. p. 225–36.

  2. Deutch D, Greenshpan O, Kostenko B, Milo T. Declarative platform for data sourcing games. In: Proceedings of the 21st International World Wide Web Conference; 2012. p. 779–88.

  3. Fleishman EA. Toward a taxonomy of human performance. Am Psychol. 1975; 30(12):1127.

  4. Hassan U, Curry E. A capability requirements approach for predicting worker performance in crowdsourcing. In: Proceedings of the 9th International Conference on Collaborative Computing: Networking, Applications and Worksharing; 2013. p. 429–37.

  5. Ipeirotis PG, Gabrilovich E. Quizz: targeted crowdsourcing with a billion (potential) users. In: Proceedings of the 23rd International World Wide Web Conference; 2014.

  6. Joglekar M, Garcia-Molina H, Parameswaran A. Evaluating the crowd with confidence. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2013. p. 686–94.

  7. Karger DR, Oh S, Shah D. Budget-optimal crowdsourcing using low-rank matrix approximations. In: Proceedings of the 49th Annual Allerton Conference on Communication, Control, and Computing; 2011. p. 284–91.

  8. Marcus A, Wu E, Karger D, Madden S, Miller R. Human-powered sorts and joins. Proc VLDB Endowment. 2011; 5(1):13–24.

  9. Parameswaran AG, Garcia-Molina H, Park H, Polyzotis N, Ramesh A, Widom J. CrowdScreen: algorithms for filtering data with humans. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 2012. p. 361–72.

  10. Ramesh A, Parameswaran A, Garcia-Molina H, Polyzotis N. Identifying reliable workers swiftly. 2012.

  11. Raykar VC, Yu S. Ranking annotators for crowdsourced labeling tasks. In: Advances in Neural Information Processing Systems 24: Proceedings of the 25th Annual Conference on Neural Information Processing Systems; 2011. p. 1809–17.

  12. Raykar VC, Yu S, Zhao LH, Jerebko A, Florin C, Valadez GH, Bogoni L, Moy L. Supervised learning from multiple experts: whom to trust when everyone lies a bit. In: Proceedings of the 26th Annual International Conference on Machine Learning; 2009. p. 889–96.

  13. Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Crowds, not drones: modeling human factors in interactive crowdsourcing. In: Proceedings of the 1st VLDB Workshop on Databases and Crowdsourcing; 2013. p. 39–42.

  14. Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Optimization in knowledge-intensive crowdsourcing. CoRR abs/1401.1302; 2014.

  15. Slivkins A, Vaughan JW. Online decision making in crowdsourcing markets: theoretical challenges (position paper). CoRR abs/1308.1746; 2013.

  16. Sorokin A, Forsyth D. Utility data annotation with Amazon Mechanical Turk. In: Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2008. p. 1–8.

  17. Tan CH, Agichtein E, Ipeirotis P, Gabrilovich E. Trust, but verify: predicting contribution quality for knowledge base construction and curation. In: Proceedings of the 7th ACM International Conference on Web Search and Data Mining; 2014.

  18. Welinder P, Branson S, Perona P, Belongie SJ. The multidimensional wisdom of crowds. In: Advances in Neural Information Processing Systems 23: Proceedings of the 24th Annual Conference on Neural Information Processing Systems; 2010. p. 2424–32.

  19. Welinder P, Perona P. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2010. p. 25–32.

  20. Whitehill J, Wu T-F, Bergsma J, Movellan JR, Ruvolo PL. Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In: Advances in Neural Information Processing Systems 22: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems; 2009. p. 2035–43.


Author information


Correspondence to Sihem Amer-Yahia.


Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

About this entry


Cite this entry

Amer-Yahia, S., Roy, S.B., Das, G., Lykourentzou, I., Rahman, H., Thirumuruganathan, S. (2018). Human Factors Modeling in Crowdsourcing. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_80659

