
Answer Aggregation of Crowdsourcing Employing an Improved EM-Based Approach

  • Ran Zhang
  • Lei Liu
  • Lizhen Cui
  • Wei He
  • Hui Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11336)

Abstract

Crowdsourcing platforms such as Amazon Mechanical Turk are frequently employed to collect answers from numerous participants on the Internet. Different participants may give different answers to the same question, which can lead to unexpected aggregated answers. The accuracy of the aggregated answers depends on answer quality, and answer quality varies with the skill level of the participants. In crowdsourcing, participants are referred to as workers. Existing studies typically characterize worker quality by skill alone. However, the personality characteristics of individual workers, e.g., worker emotion and worker intent, may have a significant impact on the quality of their answers. Consequently, aggregating answers without taking these personality characteristics into account may lead to unexpected results. To fill this gap, this paper employs an improved EM-based approach that aggregates answers from the workers' answer data while taking personality characteristics into account. The approach not only aggregates answers but also simultaneously estimates each worker's skill level, emotion, and intent, as well as the difficulty of the task. Finally, the approach is verified on the real-world Affective Text dataset and on simulated datasets.
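For readers unfamiliar with EM-based answer aggregation, the sketch below illustrates the general idea that such approaches build on: a plain Dawid-Skene-style EM loop that alternates between estimating the true labels (E-step) and per-worker accuracies (M-step). This is only an illustrative baseline under simplified assumptions, not the paper's improved model, which additionally estimates worker emotion, worker intent, and task difficulty; all identifiers below are hypothetical.

# Minimal Dawid-Skene-style EM aggregation sketch (Python/NumPy).
# NOTE: this is NOT the paper's model. It estimates only one accuracy
# parameter per worker; the paper's improved EM additionally models
# worker emotion, worker intent, and task difficulty.
import numpy as np

def em_aggregate(answers, n_labels, n_iter=50, tol=1e-6):
    """Aggregate crowd answers.

    answers  -- dict mapping (task_id, worker_id) -> label in {0..n_labels-1}
    n_labels -- number of possible labels per task
    Returns (estimated label per task, estimated per-worker accuracy).
    """
    tasks = sorted({t for t, _ in answers})
    workers = sorted({w for _, w in answers})
    ti = {t: i for i, t in enumerate(tasks)}
    wi = {w: j for j, w in enumerate(workers)}

    # Initialise label posteriors from per-task vote counts (soft majority vote).
    post = np.ones((len(tasks), n_labels))
    for (t, w), l in answers.items():
        post[ti[t], l] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    acc = np.full(len(workers), 0.8)            # per-worker accuracy
    prior = np.full(n_labels, 1.0 / n_labels)   # class prior

    for _ in range(n_iter):
        # M-step: worker accuracy = expected fraction of answers matching the
        # currently estimated true labels; prior = mean label posterior.
        hits = np.zeros(len(workers))
        counts = np.zeros(len(workers))
        for (t, w), l in answers.items():
            hits[wi[w]] += post[ti[t], l]
            counts[wi[w]] += 1.0
        acc = np.clip(hits / np.maximum(counts, 1.0), 1e-3, 1.0 - 1e-3)
        prior = post.mean(axis=0)

        # E-step: posterior over true labels given worker accuracies, assuming
        # wrong answers are spread uniformly over the remaining labels.
        log_post = np.tile(np.log(prior), (len(tasks), 1))
        for (t, w), l in answers.items():
            a = acc[wi[w]]
            ll = np.full(n_labels, np.log((1.0 - a) / (n_labels - 1)))
            ll[l] = np.log(a)
            log_post[ti[t]] += ll
        new_post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        new_post /= new_post.sum(axis=1, keepdims=True)

        if np.abs(new_post - post).max() < tol:
            post = new_post
            break
        post = new_post

    labels = {t: int(np.argmax(post[ti[t]])) for t in tasks}
    return labels, dict(zip(workers, acc))

# Toy usage: three workers label two binary tasks; worker "w3" disagrees.
if __name__ == "__main__":
    votes = {("t1", "w1"): 1, ("t1", "w2"): 1, ("t1", "w3"): 0,
             ("t2", "w1"): 0, ("t2", "w2"): 0, ("t2", "w3"): 1}
    print(em_aggregate(votes, n_labels=2))

In the paper's setting the alternating E-step/M-step structure presumably stays the same, but the worker response model carries additional parameters for worker emotion, worker intent, and task difficulty, which are estimated jointly with the aggregated answers.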

Keywords

Crowdsourcing · Worker skill · Task difficulty · Worker quality · Personality characteristics · EM-based approach · Answer aggregation

Notes

Acknowledgment

This work is partially supported by the National Key R&D Program (No. 2017YFB1400100) and the SDNFSC (No. ZR2018MF014).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Ran Zhang (1)
  • Lei Liu (1)
  • Lizhen Cui (1)
  • Wei He (1)
  • Hui Li (1)

  1. Shandong University, Jinan, China
