
Improving Reliability of Crowdsourced Results by Detecting Crowd Workers with Multiple Identities

  • Ujwal Gadiraju
  • Ricardo Kawase
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10360)

Abstract

Quality control in crowdsourcing marketplaces plays a vital role in ensuring useful outcomes. In this paper, we focus on the issue of crowd workers who participate in tasks multiple times under different worker-ids in order to maximize their earnings. Repeated participation by a single worker is not necessarily harmful when a requester aims to gather data or annotations, since additional contributions from the same worker remain useful. In many cases where the outcomes are subjective, however, requesters prefer the participation of distinct crowd workers. We show that traditional means of identifying unique crowd workers, such as worker-ids and IP addresses, are not sufficient. To overcome this problem, we propose the use of browser fingerprinting to ascertain the unique identities of crowd workers in paid crowdsourcing microtasks. Applying browser fingerprinting across 8 crowdsourced tasks of varying difficulty, we found that 6.18% of crowd workers participate in the same task more than once, using different worker-ids to avoid detection. Moreover, nearly 95% of such workers in our experiments pass gold-standard questions and are deemed trustworthy, significantly biasing the results thus produced.
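To make the approach concrete, below is a minimal browser-side sketch of how such a fingerprint could be computed. The attribute set, the canvas test string, and the SHA-256 hashing are illustrative assumptions, not the exact feature set used in the paper.

    // Minimal browser-fingerprint sketch (illustrative; not the paper's
    // exact feature set). Combines stable browser/device attributes with
    // a canvas rendering test and hashes them into one identifier.

    function canvasFingerprint(): string {
      // Render fixed text to an offscreen canvas; font rasterization and
      // graphics-stack differences make the output vary across machines.
      const canvas = document.createElement("canvas");
      canvas.width = 220;
      canvas.height = 40;
      const ctx = canvas.getContext("2d");
      if (!ctx) return "no-canvas";
      ctx.textBaseline = "top";
      ctx.font = "16px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(0, 0, 220, 40);
      ctx.fillStyle = "#069";
      ctx.fillText("crowd-worker-check", 2, 2); // hypothetical test string
      return canvas.toDataURL();
    }

    async function browserFingerprint(): Promise<string> {
      // Stable, widely available attributes; one entry per signal.
      const attributes = [
        navigator.userAgent,
        navigator.language,
        String(navigator.hardwareConcurrency ?? "unknown"),
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        String(new Date().getTimezoneOffset()),
        canvasFingerprint(),
      ];

      // Hash the concatenated attributes into a compact, comparable id.
      const bytes = new TextEncoder().encode(attributes.join("||"));
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // Usage: record the fingerprint alongside each task submission.
    browserFingerprint().then((fp) => console.log("fingerprint:", fp));

A requester would store the resulting hash with every submission; submissions that share a fingerprint but carry different worker-ids are then candidates for repeated participation by the same worker.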

Keywords

Crowdsourcing · Microtasks · Multiple identities · Quality control · Reliability


Acknowledgments

This research has been supported in part by the European Commission within the H2020-ICT-2015 Programme (AFEL project, Grant Agreement No. 687916).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. L3S Research Center, Leibniz Universität Hannover, Hannover, Germany
  2. mobile.de GmbH / eBay Inc., Berlin, Germany
