
An Adaptive Approach of Label Aggregation Using a Belief Function Framework

  • Conference paper
Digital Economy. Emerging Technologies and Business Innovation (ICDEc 2017)

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 290)


Abstract

Crowdsourcing has expanded considerably in recent years. It is widely used as a low-cost way to obtain the true labels of training data in machine learning problems. Crowdsourcing platforms such as Amazon’s Mechanical Turk allow multiple labels to be collected from crowd workers and then aggregated to infer the true label. Since workers are not always reliable, imperfect labels can occur. In this work, we propose an approach that aggregates labels within the belief function framework while adaptively integrating both labeler expertise and question difficulty. Experiments on real data demonstrate that our method provides better aggregation results.
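To illustrate the kind of aggregation the abstract refers to, the sketch below shows a generic belief-function aggregation of several worker labels for one binary question: each answer is converted into a simple mass function weighted by an assumed worker reliability, the masses are combined with Dempster’s rule, and the decision is taken from the pignistic probabilities. The reliability values and helper names are illustrative assumptions; the paper’s adaptive treatment of labeler expertise and question difficulty is not reproduced here.

```python
# Minimal sketch (not the authors' exact algorithm): belief-function label
# aggregation for a binary question. Reliability scores are assumed inputs.

from itertools import product

FRAME = ("yes", "no")  # frame of discernment Omega = {yes, no}

def simple_mass(label, reliability):
    """Mass from one worker: m({label}) = r, m(Omega) = 1 - r."""
    return {frozenset([label]): reliability, frozenset(FRAME): 1.0 - reliability}

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination followed by normalization."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def pignistic(mass):
    """BetP: spread the mass of each focal set evenly over its elements."""
    betp = {w: 0.0 for w in FRAME}
    for focal, v in mass.items():
        for w in focal:
            betp[w] += v / len(focal)
    return betp

def aggregate(answers):
    """answers: list of (label, assumed_reliability) pairs, one per worker."""
    combined = None
    for label, reliability in answers:
        m = simple_mass(label, reliability)
        combined = m if combined is None else dempster_combine(combined, m)
    betp = pignistic(combined)
    return max(betp, key=betp.get), betp

if __name__ == "__main__":
    # Three workers answer one question; the least reliable one disagrees.
    label, betp = aggregate([("yes", 0.9), ("yes", 0.7), ("no", 0.6)])
    print(label, betp)  # expected: "yes" with the larger pignistic probability
```

In this toy run the dissenting low-reliability worker is outweighed, so the combined evidence still favors “yes”; a weighted-voting scheme would behave similarly here, but the mass assigned to the whole frame keeps the workers’ uncertainty explicit.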



Author information


Corresponding author

Correspondence to Lina Abassi.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Abassi, L., Boukhris, I. (2017). An Adaptive Approach of Label Aggregation Using a Belief Function Framework. In: Jallouli, R., Zaïane, O., Bach Tobji, M., Srarfi Tabbane, R., Nijholt, A. (eds) Digital Economy. Emerging Technologies and Business Innovation. ICDEc 2017. Lecture Notes in Business Information Processing, vol 290. Springer, Cham. https://doi.org/10.1007/978-3-319-62737-3_17


  • DOI: https://doi.org/10.1007/978-3-319-62737-3_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-62736-6

  • Online ISBN: 978-3-319-62737-3

  • eBook Packages: Computer Science (R0)
