Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Human Factors Modeling in Crowdsourcing

  • Sihem Amer-Yahia
  • Senjuti Basu Roy
  • Gautam Das
  • Ioanna Lykourentzou
  • Habibur Rahman
  • Saravanan Thirumuruganathan
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_80659


Synonyms

Human factors; Standardization in crowdsourcing


Definition

Human factors relate to the behavior and characteristics of human workers. In the context of crowdsourcing, human factors model the unpredictability and inconsistency of worker behavior: workers' volatility, their asynchronous arrival and departure, their expertise or skills, the incentives (monetary or otherwise) for their participation, and even their collaborative synergy. For example, there is uncertainty regarding worker availability: workers can enter the crowdsourcing platform when they want, remain connected for as long as they like, and may or may not accept a given task. There is also uncertainty about a worker's ability to complete a task, which depends on expertise that may or may not be known at the time the task becomes available. Similarly, there is uncertainty regarding the incentive (the wage, to be more precise) that workers expect for completing a task: the wage may vary from worker to worker, even among workers with the...
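The uncertainties described above (availability, skill, and expected wage) can be captured in a minimal worker model. The class, field names, and numbers below are illustrative assumptions for this sketch, not definitions taken from the entry:

```python
from dataclasses import dataclass

@dataclass
class Worker:
    """Hypothetical model of the human factors discussed above."""
    skill: float         # probability the worker answers a task correctly (expertise)
    availability: float  # probability the worker is online and accepts the task
    min_wage: float      # smallest payment the worker is willing to work for

def expected_quality(worker: Worker, offered_wage: float) -> float:
    """Expected correctness contribution of assigning one task to this worker.

    The worker contributes only if the offered wage meets their (possibly
    unknown) minimum; otherwise the task goes unanswered.
    """
    if offered_wage < worker.min_wage:
        return 0.0
    return worker.availability * worker.skill

# Sketch: under a per-task wage budget, pick the worker maximizing
# expected quality. A highly skilled but expensive or rarely available
# worker can lose to a cheaper, more available one.
workers = [
    Worker(skill=0.9, availability=0.5, min_wage=0.10),
    Worker(skill=0.7, availability=0.95, min_wage=0.05),
]
budget = 0.08
best = max(workers, key=lambda w: expected_quality(w, budget))
```

Here the first worker's minimum wage exceeds the budget, so the assignment falls to the second worker despite their lower skill; richer models of this kind (with uncertainty over the parameters themselves) underlie the optimization work cited below.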


Recommended Reading

  1. Davidson SB, Khanna S, Milo T, Roy S. Using the crowd for top-k and group-by queries. In: Proceedings of the 16th International Conference on Database Theory; 2013. p. 225–36.
  2. Deutch D, Greenshpan O, Kostenko B, Milo T. Declarative platform for data sourcing games. In: Proceedings of the 21st International World Wide Web Conference; 2012. p. 779–88.
  3. Fleishman EA. Toward a taxonomy of human performance. Am Psychol. 1975; 30(12):1127.
  4. Hassan U, Curry E. A capability requirements approach for predicting worker performance in crowdsourcing. In: Proceedings of the 9th International Conference on Collaborative Computing: Networking, Applications and Worksharing; 2013. p. 429–37.
  5. Ipeirotis PG, Gabrilovich E. Quizz: targeted crowdsourcing with a billion (potential) users. In: Proceedings of the 23rd International World Wide Web Conference; 2014.
  6. Joglekar M, Garcia-Molina H, Parameswaran A. Evaluating the crowd with confidence. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2013. p. 686–94.
  7. Karger DR, Oh S, Shah D. Budget-optimal crowdsourcing using low-rank matrix approximations. In: Proceedings of the 49th Annual Allerton Conference on Communication, Control, and Computing; 2011. p. 284–91.
  8. Marcus A, Wu E, Karger D, Madden S, Miller R. Human-powered sorts and joins. Proc VLDB Endowment. 2011; 5(1):13–24.
  9. Parameswaran AG, Garcia-Molina H, Park H, Polyzotis N, Ramesh A, Widom J. CrowdScreen: algorithms for filtering data with humans. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 2012. p. 361–72.
  10. Ramesh A, Parameswaran A, Garcia-Molina H, Polyzotis N. Identifying reliable workers swiftly. 2012.
  11. Raykar VC, Yu S. Ranking annotators for crowdsourced labeling tasks. In: Advances in Neural Information Processing Systems 24, Proceedings of the 25th Annual Conference on Neural Information Processing Systems; 2011. p. 1809–17.
  12. Raykar VC, Yu S, Zhao LH, Jerebko A, Florin C, Valadez GH, Bogoni L, Moy L. Supervised learning from multiple experts: whom to trust when everyone lies a bit. In: Proceedings of the 26th Annual International Conference on Machine Learning; 2009. p. 889–96.
  13. Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Crowds, not drones: modeling human factors in interactive crowdsourcing. In: Proceedings of the 1st VLDB Workshop on Databases and Crowdsourcing; 2013. p. 39–42.
  14. Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Optimization in knowledge-intensive crowdsourcing. CoRR abs/1401.1302; 2014.
  15. Slivkins A, Vaughan JW. Online decision making in crowdsourcing markets: theoretical challenges (position paper). CoRR abs/1308.1746; 2013.
  16. Sorokin A, Forsyth D. Utility data annotation with Amazon Mechanical Turk. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2008.
  17. Tan CH, Agichtein E, Ipeirotis P, Gabrilovich E. Trust, but verify: predicting contribution quality for knowledge base construction and curation. In: Proceedings of the 7th ACM International Conference on Web Search and Data Mining; 2014.
  18. Welinder P, Branson S, Perona P, Belongie SJ. The multidimensional wisdom of crowds. In: Advances in Neural Information Processing Systems 23, Proceedings of the 24th Annual Conference on Neural Information Processing Systems; 2010. p. 2424–32.
  19. Welinder P, Perona P. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2010. p. 25–32.
  20. Whitehill J, Wu T-F, Bergsma J, Movellan JR, Ruvolo PL. Whose vote should count more: optimal integration of labels from labelers of unknown expertise. In: Advances in Neural Information Processing Systems 22, Proceedings of the 23rd Annual Conference on Neural Information Processing Systems; 2009. p. 2035–43.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Sihem Amer-Yahia (1, 2)
  • Senjuti Basu Roy (3)
  • Gautam Das (4)
  • Ioanna Lykourentzou (5)
  • Habibur Rahman (6)
  • Saravanan Thirumuruganathan (1, 2)
  1. CNRS, Univ. Grenoble Alpes, Grenoble, France
  2. Laboratoire d'Informatique de Grenoble, CNRS-LIG, Saint-Martin-d'Hères, France
  3. Department of Computer Science, New Jersey Institute of Technology, Newark, USA
  4. Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
  5. CRP Henri Tudor, Esch-sur-Alzette, Luxembourg
  6. Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar