Abstract
In this paper we examine how improvements in data quality influence the results of machine learning tasks. We focus on measuring semantic similarity and use the SemEval 2016 datasets. To achieve consistent annotations, we corrected all sentences grammatically and lexically and developed formal criteria for semantic similarity. The similarity detector used in this research was designed for the SemEval English Semantic Textual Similarity (STS) task. This paper addresses two fundamental questions: first, how each characteristic of the chosen datasets affects the performance of similarity detection software, and second, which improvement techniques are effective for the provided datasets and which are not. Having analyzed these points, we present and explain the non-obvious results we obtained.
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
About this chapter
Cite this chapter
Chodorowska, K., Rychalska, B., Pakulska, K., Andruszkiewicz, P. (2019). To Improve, or Not to Improve; How Changes in Corpora Influence the Results of Machine Learning Tasks on the Example of Datasets Used for Paraphrase Identification. In: Bembenik, R., Skonieczny, Ł., Protaziuk, G., Kryszkiewicz, M., Rybinski, H. (eds) Intelligent Methods and Big Data in Industrial Applications. Studies in Big Data, vol 40. Springer, Cham. https://doi.org/10.1007/978-3-319-77604-0_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-77603-3
Online ISBN: 978-3-319-77604-0
eBook Packages: Intelligent Technologies and Robotics