Evaluation Metrics and Evaluation
This chapter describes metrics for evaluating information retrieval and natural language processing systems, annotation techniques, and the concepts of training, development, and evaluation sets for information retrieval systems.
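As a concrete illustration of the kind of evaluation metrics the chapter covers (a minimal sketch, not code from the chapter itself), precision, recall, and F1 can be computed by comparing a system's predictions against a gold-standard reference set; the function and item names below are hypothetical:

```python
def precision_recall_f1(predicted, gold):
    """Return (precision, recall, F1) for two sets of items,
    e.g. retrieved documents vs. relevant documents."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the system retrieves 4 documents,
# 3 of which are among the 6 relevant ones.
p, r, f = precision_recall_f1({"d1", "d2", "d3", "d4"},
                              {"d1", "d2", "d3", "d5", "d6", "d7"})
# p = 0.75, r = 0.5, f = 0.6
```

The same counts-based scheme underlies most of the metrics discussed for annotated corpora: only the definition of what counts as a true positive changes between tasks.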
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.