Abstract
The effective integration of crowdsourced answers has become a research hot spot in crowdsourcing quality control. Taking into account the influence of workers' specialty categories on the accuracy of crowdsourced answers, a crowdsourced answer integration algorithm based on the specialty categories of workers (SCAI) is proposed. First, SCAI uses the crowdsourced answer set to determine the difficulty of each task. Second, it calculates the accuracy of each crowdsourced answer, then obtains each worker's specialty category and updates the corresponding specialty accuracy. Experiments were conducted on real data sets, comparing SCAI against the classical majority voting method (MV) and the expectation maximization evaluation algorithm (EM). The results show that the proposed algorithm effectively improves the accuracy of integrated crowdsourced answers.
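The contrast the abstract draws between plain majority voting and specialty-aware integration can be sketched in a few lines. This is a minimal illustration, not the paper's actual SCAI formulation: the function names, the accuracy values, and the 0.5 uninformed prior are all illustrative assumptions.

```python
from collections import Counter, defaultdict

def majority_vote(answers):
    """Classical MV baseline: the most frequent answer wins."""
    return Counter(answers).most_common(1)[0][0]

def specialty_weighted_vote(answers, worker_acc):
    """Weight each worker's answer by that worker's (assumed) accuracy
    in the task's specialty category, then pick the top-scoring answer."""
    scores = defaultdict(float)
    for worker, answer in answers:
        # Workers with no estimated accuracy get an uninformed 0.5 prior.
        scores[answer] += worker_acc.get(worker, 0.5)
    return max(scores, key=scores.get)

# Toy task: three workers answer, and w1 is a specialist in this category.
answers = [("w1", "A"), ("w2", "B"), ("w3", "B")]
acc = {"w1": 0.9, "w2": 0.4, "w3": 0.4}
print(majority_vote([a for _, a in answers]))   # "B": MV follows the crowd
print(specialty_weighted_vote(answers, acc))    # "A": specialty accuracy outweighs the majority
```

The point of the toy example is the failure mode SCAI targets: when non-specialists outnumber a specialist, unweighted MV picks the wrong answer, while accuracy-weighted integration recovers the specialist's answer (0.9 for "A" vs. 0.4 + 0.4 = 0.8 for "B").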
Acknowledgements
This research is supported by the National Natural Science Foundation of China (61373116), the Science and Technology Project of Shaanxi Province of China (Program No. 2016KTZDGY04-01), the International Science and Technology Cooperation Program of the Science and Technology Department of Shaanxi Province of China (Grant No. 2018KW-049), and the Special Scientific Research Program of the Education Department of Shaanxi Province of China (Grant No. 17JK0711).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Chen, Y., Wang, H., Xia, H., Gao, C., Wang, Z. (2019). An Algorithm of Crowdsourcing Answer Integration Based on Specialty Categories of Workers. In: Krömer, P., Zhang, H., Liang, Y., Pan, JS. (eds) Proceedings of the Fifth Euro-China Conference on Intelligent Data Analysis and Applications. ECC 2018. Advances in Intelligent Systems and Computing, vol 891. Springer, Cham. https://doi.org/10.1007/978-3-030-03766-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03765-9
Online ISBN: 978-3-030-03766-6
eBook Packages: Intelligent Technologies and Robotics (R0)