Abstract
Crowdsourcing has been gaining popularity as a highly distributed digital labor model that transcends both borders and time zones. Moreover, it extends economic opportunities to developing countries, answering the call of impact sourcing to improve the welfare of workers in need. Nevertheless, it is constantly criticized for its associated quality problems and risks. To mitigate these risks, a rich body of research has been dedicated to designing countermeasures against free riders and spammers, who compromise the overall quality of the results and whose undetected presence ruins the financial prospects of honest workers. Such quality risks materialize even more severely in imbalanced crowdsourcing tasks. Indeed, surveying this literature yields a common rule of thumb: the easier it is to cheat the system undetected, the more restrictive and indiscriminate the countermeasures become. As a result, honest yet low-skilled workers are placed on a par with spammers, and are consequently excluded and deprived of much-needed earnings. In this paper, we therefore argue for an impact-driven quality control model that fulfills the impact-sourcing vision, thereby realizing the social responsibility aspect of crowdsourcing, while still ensuring high-quality results.
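The core problem the abstract raises can be made concrete with a small simulation. The sketch below is illustrative only and not taken from the paper: it assumes a binary labeling task whose gold questions are class-imbalanced (90% negative), a hypothetical spammer who always answers the majority class, and an honest low-skilled worker who answers correctly 70% of the time. A naive raw-accuracy filter then scores the spammer *above* the honest worker, while a class-balanced accuracy metric separates them correctly.

```python
import random

random.seed(0)

# 100 binary gold questions; the task is imbalanced:
# 90% of true labels are 0 ("negative"), only 10% are 1 ("positive").
truth = [0] * 90 + [1] * 10
random.shuffle(truth)

def spammer(label):
    # Always answers the majority class without reading the task.
    return 0

def honest_low_skilled(label):
    # Answers correctly 70% of the time, regardless of class.
    return label if random.random() < 0.7 else 1 - label

def accuracy(answers):
    # Raw fraction of gold questions answered correctly.
    return sum(a == t for a, t in zip(answers, truth)) / len(truth)

def balanced_accuracy(answers):
    # Mean of per-class recalls: insensitive to class imbalance.
    recalls = []
    for c in (0, 1):
        idx = [i for i, t in enumerate(truth) if t == c]
        recalls.append(sum(answers[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

spam_answers = [spammer(t) for t in truth]
honest_answers = [honest_low_skilled(t) for t in truth]

print("spammer raw accuracy:     ", accuracy(spam_answers))           # 0.90
print("honest raw accuracy:      ", accuracy(honest_answers))         # ~0.70
print("spammer balanced accuracy:", balanced_accuracy(spam_answers))  # 0.50
print("honest balanced accuracy: ", balanced_accuracy(honest_answers))
```

Under a raw-accuracy threshold of, say, 0.8, the spammer passes and the honest low-skilled worker is eliminated, which is exactly the across-the-board discrimination the abstract criticizes; the balanced metric exposes the spammer's constant-answer strategy instead.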
Maarry, K.E., Balke, WT. (2016). Towards an Impact-Driven Quality Control Model for Imbalanced Crowdsourcing Tasks. In: Cellary, W., Mokbel, M., Wang, J., Wang, H., Zhou, R., Zhang, Y. (eds) Web Information Systems Engineering – WISE 2016. WISE 2016. Lecture Notes in Computer Science(), vol 10041. Springer, Cham. https://doi.org/10.1007/978-3-319-48740-3_9
Print ISBN: 978-3-319-48739-7
Online ISBN: 978-3-319-48740-3