
A taxonomy on impact of label noise and feature noise using machine learning techniques

  • A. Shanthini (Email author)
  • G. Vinodhini
  • R. M. Chandrasekaran
  • P. Supraja

Abstract

Soft computing techniques are effective for predicting noise in a dataset that causes misclassification. Classification assumes perfectly labeled data, but noise can corrupt either the class label assigned to an instance (label noise) or the instance's input feature values (feature noise). The presence of noise complicates prediction on real-world data and degrades classifier performance. The present study aims at a quantitative assessment of label noise and feature noise, and of their effect on classification performance, in medical datasets, since noise handling has become an important aspect of research in data mining and its applications. Boosting of weak classifiers achieves high accuracy in classification problems. This study explores the performance of recent weak learner-based boosting algorithms: adaptive boosting, generalized tree boosting and extreme gradient boosting. These boosting algorithms are compared and analyzed at different label noise and feature noise levels (5%, 10%, 15% and 20%) on distinct medical datasets. The performance of the weak learners is measured in terms of accuracy and equalized loss of accuracy.
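As an illustration of the evaluation protocol sketched in the abstract, the following minimal Python sketch injects label noise at a fixed rate into a binary label vector and computes the equalized loss of accuracy. The function names are illustrative, not from the paper; the ELA formula shown follows the definition of Sáez et al. (2016), taking accuracies in percent.

```python
import numpy as np

def inject_label_noise(y, rate, rng):
    """Flip a fraction `rate` of binary {0, 1} labels, chosen uniformly at random."""
    y_noisy = y.copy()
    n_flip = int(round(rate * len(y)))
    flip_idx = rng.choice(len(y), size=n_flip, replace=False)
    y_noisy[flip_idx] = 1 - y_noisy[flip_idx]
    return y_noisy

def equalized_loss_of_accuracy(acc_noisy, acc_clean):
    """ELA at noise level x%: (100 - A_x%) / A_0%, accuracies in percent.

    Lower values indicate a classifier that is both accurate on clean data
    and robust to noise (Sáez et al. 2016).
    """
    return (100.0 - acc_noisy) / acc_clean

# Example: inject 10% label noise into a toy label vector.
rng = np.random.default_rng(42)
y = np.array([0, 1] * 50)                 # 100 balanced binary labels
y_noisy = inject_label_noise(y, 0.10, rng)
print((y != y_noisy).sum())               # 10 labels flipped
```

The same injection would be repeated at 5%, 10%, 15% and 20% (and applied to feature values for feature noise) before training each boosting algorithm.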

Keywords

Feature noise · Label noise · Machine learning · Boosting

Notes

Compliance with ethical standards

Conflict of interest

The authors declare they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Not applicable.

References

  1. Abellán J, Masegosa AR (2012) Bagging schemes on the presence of class noise in classification. Expert Syst Appl 39(8):6827–6837
  2. Cao J, Kwong S, Wang R (2012) A noise-detection based AdaBoost algorithm for mislabeled data. Pattern Recogn 45(12):4451–4465
  3. Carlotto MJ (2009) Effect of errors in ground truth on classification accuracy. Int J Remote Sens 30:4831–4849
  4. Chen T, He T, Benesty M (2015) XGBoost: extreme gradient boosting. R package version 0.4-2, pp 1–4
  5. Folleco A, Khoshgoftaar TM, Hulse JV, Napolitano A (2009) Identifying learners robust to low quality data. Informatica 33:245–259
  6. Frénay B, Verleysen M (2014) Classification in the presence of label noise: a survey. IEEE Trans Neural Netw Learn Syst 25:845–869
  7. Garcia LP, de Carvalho AC, Lorena AC (2015) Effect of label noise in the complexity of classification problems. Neurocomputing 160:108–119
  8. Görnitz N, Porbadnigk A, Binder A, Sannelli C, Braun ML, Müller KR, Kloft M (2014) Learning and evaluation in presence of non-IID label noise. In: Proceedings of the international conference on artificial intelligence and statistics, Reykjavik, Iceland, 22–25 April 2014, pp 293–302
  9. Karmaker A, Kwek S (2006) A boosting approach to remove class label noise. Int J Hybrid Intell Syst 3(3):169–177
  10. Khoshgoftaar TM, Van Hulse J, Napolitano A (2011) Comparing boosting and bagging techniques with noisy and imbalanced data. IEEE Trans Syst Man Cybern Part A Syst Hum 41(3):552–568
  11. Kottilingam K, Gunasekaran R, Saranya K (2016) A data activity-based server-side cache replacement for mobile devices. In: Proceedings on artificial intelligence and evolutionary computations in engineering systems. Springer, pp 579–589
  12. Li Y, Wessels LF, de Ridder D, Reinders MJ (2007) Classification in the presence of class noise using a probabilistic kernel Fisher method. Pattern Recogn 40(12):3349–3357
  13. Mantas CJ, Abellán J (2014) Analysis and extension of decision trees based on imprecise probabilities: application on noisy data. Expert Syst Appl 41(5):2514–2525
  14. Mellor A, Boukir S, Haywood A, Jones S (2015) Exploring issues of training data imbalance and mislabeling on Random Forest performance for large area land cover classification using the ensemble margin. ISPRS J Photogramm Remote Sens 105:155–168
  15. Natarajan N, Dhillon IS, Ravikumar PK, Tewari A (2013) Learning with noisy labels. In: Advances in neural information processing systems, vol 26. Curran Associates, Inc., Lake Tahoe, USA, pp 1196–1204
  16. Pechenizkiy M, Tsymbal A, Puuronen S, Pechenizkiy O (2006) Class noise and supervised learning in medical domains: the effect of feature extraction. In: Proceedings of the 19th IEEE symposium on computer-based medical systems (CBMS'06), Salt Lake City, UT, USA, 22–23 June 2006, pp 708–713
  17. Sabzevari M, Martínez-Muñoz G, Suárez A (2018) Vote-boosting ensembles. Pattern Recogn 83:119–133
  18. Sáez JA, Luengo J, Herrera F (2016) Evaluating the classifier behaviour with noisy data considering performance and robustness: the equalized loss of accuracy measure. Neurocomputing 176:26–35
  19. Seiffert C, Khoshgoftaar TM, Van Hulse J, Folleco A (2014) An empirical study of the classification performance of learners on imbalanced and noisy software quality data. Inf Sci 259:571–595
  20. Shanthini A, Vinodhini G, Chandrasekaran RM (2018) Predicting students' academic performance in the university using meta decision tree classifiers. J Comput Sci. ISSN: 1552-6607. https://doi.org/10.3844/jcssp
  21. Sluban B, Lavrač N (2015) Relating ensemble diversity and performance: a study in class noise detection. Neurocomputing 160:120–131
  22. Sun B, Chen S, Wang J, Chen H (2016) A robust multi-class AdaBoost algorithm for mislabeled noisy data. Knowl Based Syst 102:87–102
  23. Teng CM (2001) A comparison of noise handling techniques. In: Proceedings of the international Florida artificial intelligence research society conference, Key West, FL, USA, 21–23 May 2001, pp 269–273
  24. Xiao H, Biggio B, Nelson B, Xiao H, Eckert C, Roli F (2015a) Support vector machines under adversarial label contamination. Neurocomputing 160:53–62
  25. Xiao T, Xia T, Yang Y, Huang C, Wang X (2015b) Learning from massive noisy labeled data for image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, 7–12 June 2015, pp 2691–2699

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • A. Shanthini (1) (Email author)
  • G. Vinodhini (2)
  • R. M. Chandrasekaran (2)
  • P. Supraja (1)
  1. Department of Information and Technology, SRM Institute of Science and Technology, Kattankulathur, India
  2. Department of Computer Science and Engineering, Annamalai University, Chidambaram, India
