Harnessing Diversity in Crowds and Machines for Better NER Performance

  • Oana Inel
  • Lora Aroyo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10249)

Abstract

Over recent years, information extraction tools have gained great popularity and brought significant performance improvements in extracting meaning from structured and unstructured data. For example, named entity recognition (NER) tools identify types such as people, organizations, or places in text. However, despite their high F1 performance, NER tools are still prone to brittleness due to their highly specialized and constrained input and training data. Thus, each tool is able to extract only a subset of the named entities (NE) mentioned in a given text. In order to improve NE coverage, we propose a hybrid approach: we first aggregate the output of various NER tools and then validate and extend it through crowdsourcing. The results of our experiments show that this approach performs significantly better than individual state-of-the-art tools, including existing tools that already integrate individual outputs. Furthermore, we show that the crowd is quite effective at (1) identifying mistakes, inconsistencies, and ambiguities in the currently used ground truth, and (2) providing a promising way to gather ground truth annotations for NER that capture a multitude of opinions.
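To make the two-stage workflow concrete, the following is a minimal Python sketch of the aggregation step and the routing of low-agreement candidates to crowd validation. The tool names, example spans, and the 0.5 agreement threshold are illustrative assumptions; the paper's actual tool set and its CrowdTruth-based disagreement metrics are not reproduced here.

```python
# Hypothetical sketch: pool the spans found by several NER tools, then send
# low-agreement spans to the crowd for validation. Tool outputs and the
# threshold are invented for illustration, not taken from the paper.
from collections import defaultdict

# Each tool returns (surface_form, start_offset, entity_type) triples for
# the sentence "Barack Obama was born in Hawaii".
tool_outputs = {
    "tool_a": [("Barack Obama", 0, "PER"), ("Hawaii", 25, "LOC")],
    "tool_b": [("Barack Obama", 0, "PER"), ("Obama", 7, "PER")],
    "tool_c": [("Hawaii", 25, "LOC")],
}

def aggregate(outputs):
    """Union the spans from all tools and record which tools found each."""
    votes = defaultdict(set)
    for tool, spans in outputs.items():
        for span in spans:
            votes[span].add(tool)
    return votes

def route(votes, n_tools, threshold=0.5):
    """Accept high-agreement spans; queue the rest for crowd validation."""
    accepted, to_crowd = [], []
    for span, tools in votes.items():
        score = len(tools) / n_tools  # fraction of tools agreeing on the span
        (accepted if score >= threshold else to_crowd).append((span, score))
    return accepted, to_crowd

votes = aggregate(tool_outputs)
accepted, to_crowd = route(votes, n_tools=len(tool_outputs))
print("accepted:", accepted)     # spans most tools agree on
print("needs crowd:", to_crowd)  # candidates for crowdsourced validation
```

In this toy run, "Barack Obama" and "Hawaii" are accepted (found by two of three tools), while the overlapping span "Obama" falls below the threshold and would be presented to crowd workers, whose judgments can both validate such candidates and surface NEs that all tools missed.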

Keywords

Crowdsourcing · Disagreement · Diversity · Perspectives · Opinions · Named entity extraction · Named entity typing · Hybrid machine-crowd workflow · Crowdsourcing ground truth


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
