An Algorithm of Crowdsourcing Answer Integration Based on Specialty Categories of Workers

  • Yanping Chen
  • Han Wang
  • Hong Xia
  • Cong Gao
  • Zhongmin Wang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 891)


The effective integration of crowdsourced answers has become a research hot spot in crowdsourcing quality control. Taking into account the influence of workers' specialty categories on the accuracy of crowdsourced answers, this paper proposes a crowdsourced answer integration algorithm based on the specialty categories of workers (SCAI). First, SCAI uses the crowdsourced answer set to estimate the difficulty of each task. Second, it calculates the accuracy of each crowdsourced answer, then derives each worker's specialty classification and updates that worker's specialty accuracy. Experiments were conducted on real data sets against the classical majority voting method (MV) and the expectation-maximization evaluation algorithm (EM). The results show that the proposed algorithm effectively improves the accuracy of crowdsourced answers.
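The abstract does not give SCAI's formulas, so the following Python sketch only illustrates the general idea it describes: a majority-voting baseline, a vote weighted by each worker's recorded specialty accuracy (with a boost when the task matches the worker's specialty category), and an incremental accuracy update. All function names, the default prior of 0.5, the boost factor, and the learning rate are illustrative assumptions, not the paper's actual parameters.

```python
from collections import defaultdict

def majority_vote(answers):
    """MV baseline: return the answer given by the most workers.

    answers: dict mapping worker_id -> answer for a single task.
    """
    counts = defaultdict(int)
    for ans in answers.values():
        counts[ans] += 1
    return max(counts, key=counts.get)

def specialty_weighted_vote(answers, worker_specialty, accuracy, task_category):
    """Sketch of specialty-based integration: each worker's vote is
    weighted by that worker's recorded accuracy, boosted when the task
    falls in the worker's specialty category (boost factor assumed).
    """
    scores = defaultdict(float)
    for worker, ans in answers.items():
        weight = accuracy.get(worker, 0.5)  # assumed prior when unknown
        if worker_specialty.get(worker) == task_category:
            weight = min(1.0, weight * 1.2)  # hypothetical specialty boost
        scores[ans] += weight
    return max(scores, key=scores.get)

def update_accuracy(accuracy, worker, correct, lr=0.1):
    """Incrementally move a worker's recorded accuracy toward the
    observed outcome (the update rule is an assumption)."""
    prev = accuracy.get(worker, 0.5)
    accuracy[worker] = prev + lr * ((1.0 if correct else 0.0) - prev)
```

Under this sketch, a single high-accuracy specialist can outweigh a numerically larger group of low-accuracy generalists, which is the behaviour plain majority voting cannot reproduce.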


Keywords: Crowdsourcing · Quality control · Answer integration · Specialty categories of workers



This research is supported by the National Natural Science Foundation of China (61373116), the Science and Technology Project in Shaanxi Province of China (Program No. 2016KTZDGY04-01), the International Science and Technology Cooperation Program of the Science and Technology Department of Shaanxi Province of China (Grant No. 2018KW-049), and the Special Scientific Research Program of the Education Department of Shaanxi Province of China (Grant No. 17JK0711).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yanping Chen (1, 2)
  • Han Wang (1)
  • Hong Xia (1, 2)
  • Cong Gao (1, 2)
  • Zhongmin Wang (1, 2)
  1. School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an, China
  2. Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi’an University of Posts and Telecommunications, Xi’an, China
