Abstract
In this paper, we address the problem of task redundancy in crowdsourcing systems and provide a methodology that decreases the overall effort required to accomplish a crowdsourcing task. Typical task assignment systems assign each task to a fixed number of crowd workers, although tasks vary in difficulty: easy tasks need fewer assignments than hard ones. We present TRR, a task redundancy reducer that assigns tasks to crowd workers over several work iterations and, for Boolean and classification task types, adaptively estimates how many workers are needed in each iteration. TRR stops assigning a task to further workers once it detects convergence among the workers' opinions, which in turn reduces the cost and time invested in answering the task. TRR supports Boolean, classification, and rating task types and handles both anonymous and non-anonymous worker task assignment schemes. The paper reports results from simulation experiments on crowdsourced datasets.
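As a minimal illustration of the stop-on-convergence idea described above (a sketch, not TRR's actual estimator), a Boolean or classification task can be answered in batches and closed early once one label clearly dominates; the `batch_size`, `agreement`, and `max_workers` parameters here are hypothetical:

```python
from collections import Counter

def adaptive_majority(answer_stream, batch_size=3, agreement=0.8, max_workers=15):
    """Collect worker answers in batches; stop once opinions converge.

    answer_stream: an iterator of worker answers (e.g. True/False labels).
    After each full batch, the task closes early if the leading label's
    share reaches `agreement`; otherwise assignment continues up to
    `max_workers` answers. Returns (winning_label, answers_used).
    """
    answers = []
    for ans in answer_stream:
        answers.append(ans)
        # Check for convergence only at batch boundaries.
        if len(answers) % batch_size == 0:
            label, count = Counter(answers).most_common(1)[0]
            if count / len(answers) >= agreement:
                return label, len(answers)  # converged: stop early
        if len(answers) >= max_workers:
            break  # budget exhausted without convergence
    label, _ = Counter(answers).most_common(1)[0]
    return label, len(answers)
```

With three unanimous answers the task closes after the first batch instead of consuming a fixed redundancy budget, which is the cost saving the paper targets.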
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Galal, S., El-Sharkawi, M.E. (2019). TRR: Reducing Crowdsourcing Task Redundancy. In: Hartmann, S., Küng, J., Chakravarthy, S., Anderst-Kotsis, G., Tjoa, A., Khalil, I. (eds.) Database and Expert Systems Applications. DEXA 2019. Lecture Notes in Computer Science, vol. 11707. Springer, Cham. https://doi.org/10.1007/978-3-030-27618-8_8
Print ISBN: 978-3-030-27617-1
Online ISBN: 978-3-030-27618-8