
Characterization of Experts in Crowdsourcing Platforms

  • Conference paper
Belief Functions: Theory and Applications (BELIEF 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9861)


Abstract

Crowdsourcing platforms allow simple human intelligence tasks to be proposed to a large number of participants who carry them out. Workers often receive a small payment, or the platforms provide other incentive mechanisms, such as increasing a worker's reputation score when tasks are completed correctly. We address the problem of identifying experts among the participants, that is, workers who tend to answer the questions correctly. Knowing which workers are reliable could improve the quality of the knowledge that can be extracted from their responses. In contrast to other works in the literature, we assume that participants can give partial or incomplete responses when they are not sure their answers are correct. We model such partial or incomplete responses with belief functions and derive a measure that characterizes the expertise level of each participant. This measure is based on two degrees, precision and exactitude, that capture two facets of expertise: the precision degree reflects a participant's reliability, while the exactitude degree reflects a participant's knowledge. We also analyze our model through simulation and demonstrate that this richer model can lead to more reliable identification of experts.
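The paper's formal definitions of the precision and exactitude degrees are not reproduced on this page, but the underlying idea — encoding a partial answer as a basic belief assignment and scoring both how committed and how correct it is — can be sketched briefly. In the minimal Python sketch below, specificity() and support_for() are illustrative proxies chosen for this example, not the measures derived in the paper, and the names (mass_function, omega, etc.) are hypothetical.

```python
def mass_function(assignments):
    """Basic belief assignment (bba): maps sets of candidate answers to masses summing to 1."""
    total = sum(assignments.values())
    assert abs(total - 1.0) < 1e-9, "masses must sum to 1"
    return {frozenset(answers): mass for answers, mass in assignments.items()}


def specificity(m):
    """Illustrative stand-in for a precision-style degree: sum of m(A)/|A| over focal sets.
    It equals 1 for a fully committed singleton answer and decreases as answers get vaguer."""
    return sum(mass / len(focal) for focal, mass in m.items() if focal)


def support_for(m, true_answer):
    """Illustrative stand-in for an exactitude-style degree: total mass on focal sets
    that contain the correct answer."""
    return sum(mass for focal, mass in m.items() if true_answer in focal)


# Frame of discernment for one question with four candidate answers.
omega = ("a", "b", "c", "d")

# A confident worker commits most mass to a single answer ...
confident = mass_function({("a",): 0.9, omega: 0.1})
# ... while a cautious worker gives the partial answer "a or b" and keeps some ignorance.
cautious = mass_function({("a", "b"): 0.6, omega: 0.4})

true_answer = "a"
for name, m in (("confident", confident), ("cautious", cautious)):
    print(name, round(specificity(m), 3), round(support_for(m, true_answer), 3))
```

Running the sketch gives the confident worker the higher specificity score (0.925 vs. 0.4), while both answers fully support the true answer; an expertise measure of the kind described in the abstract would aggregate such per-question scores over each worker's responses.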


Author information


Corresponding author

Correspondence to Arnaud Martin.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Ben Rjab, A., Kharoune, M., Miklos, Z., Martin, A. (2016). Characterization of Experts in Crowdsourcing Platforms. In: Vejnarová, J., Kratochvíl, V. (eds) Belief Functions: Theory and Applications. BELIEF 2016. Lecture Notes in Computer Science (LNAI), vol 9861. Springer, Cham. https://doi.org/10.1007/978-3-319-45559-4_10


  • DOI: https://doi.org/10.1007/978-3-319-45559-4_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-45558-7

  • Online ISBN: 978-3-319-45559-4
