Improving Chunker Performance Using a Web-Based Semi-automatic Training Data Analysis Tool

  • István Endrédy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10930)

Abstract

Fine-tuning features for NP chunking is a difficult task, as the effects of a modification are sometimes unpredictable: rerunning the tuning process with a supervised or unsupervised learning algorithm does not necessarily produce better results. An online toolkit was developed for this scenario. It highlights critical areas in the training data that may pose a challenge for the learning algorithm: irregular data, exceptions to trends, and unusual property values. This overview of problematic data can inspire the linguist to enhance the data, for example by dividing a class into more fine-grained subclasses. The kit was tested on English and Hungarian corpora. Results show that it effectively accelerates the preparation of datasets for NP chunking, which leads to better F-scores. The toolkit runs in an ordinary web browser, and its usage poses no difficulties for non-technical users. The tool combines the abstraction ability of a linguist with the power of a statistical engine.
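The abstract gives no implementation detail, but the kind of check it describes, namely scanning IOB-labelled training data for feature–label combinations that break the dominant trend, can be illustrated with a minimal sketch. The toy data, the flag_exceptions helper, and its thresholds are assumptions made for illustration, not the toolkit's actual implementation:

    from collections import Counter, defaultdict

    # Toy IOB-labelled training data: (token, POS tag, chunk label) triples,
    # in the spirit of the CoNLL-2000 column format. The examples are invented.
    TRAIN = [
        ("The", "DT", "B-NP"), ("cat", "NN", "I-NP"), ("sat", "VBD", "O"),
        ("on", "IN", "O"), ("the", "DT", "B-NP"), ("mat", "NN", "I-NP"),
        ("A", "DT", "B-NP"), ("dog", "NN", "I-NP"), ("barked", "VBD", "O"),
        ("dog", "NN", "O"),  # irregular: NN otherwise always sits inside an NP
    ]

    def flag_exceptions(data, min_support=2, max_ratio=0.34):
        """Report (POS tag, label) pairs that contradict the tag's usual label.

        A label that is rare relative to the majority label of its POS tag is
        either a likely annotation error or a sign the class could be split.
        """
        by_tag = defaultdict(Counter)
        for _, tag, label in data:
            by_tag[tag][label] += 1
        exceptions = []
        for tag, labels in by_tag.items():
            total = sum(labels.values())
            if total < min_support:
                continue  # too little evidence to call anything a trend
            majority, _ = labels.most_common(1)[0]
            for label, count in labels.items():
                if label != majority and count / total <= max_ratio:
                    exceptions.append((tag, label, count, majority))
        return exceptions

    for tag, label, count, majority in flag_exceptions(TRAIN):
        print(f"POS {tag}: label {label} occurs {count}x, but {majority} dominates")

On this toy sample the check flags the lone ("dog", NN, O) triple, since NN tokens otherwise carry NP-internal labels; this is presumably the kind of outlier the described toolkit surfaces in the browser for the linguist to inspect.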

Keywords

NP chunking · Training data analysis · Feature tuning · Web-based analysis tool · IOB labelling · WordNet

Acknowledgments

I would like to express my gratitude to Dr. Gábor Prószéky for his motivation, and I thank Dr. Nóra Wenszky, Zsuzsanna Balogh, Borbála Siklósi and the anonymous reviewers for their comments.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Faculty of Information Technology and Bionics, MTA-PPKE Hungarian Language Technology Research Group, Pázmány Péter Catholic University, Budapest, Hungary
