
Evolving Systems, Volume 10, Issue 1, pp 51–61

Hybrid local boosting utilizing unlabeled data in classification tasks

  • Christos K. Aridas
  • Sotiris B. Kotsiantis
  • Michael N. Vrahatis
Original Paper

Abstract

In many real-life applications, a completely labeled data set is not always available. Therefore, an ideal learning algorithm should be able to learn from both labeled and unlabeled data. In this work, a two-stage local boosting algorithm for handling semi-supervised classification tasks is proposed. The proposed method can be summarized as follows: (a) it is a two-stage local boosting method, (b) it adds self-labeled examples drawn from the unlabeled data and (c) it employs them in semi-supervised classification tasks. Grounded on the local application of the boosting-by-reweighting version of AdaBoost, the proposed method utilizes unlabeled data to enhance its classification performance. Simulations on thirty synthetic and real-world benchmark data sets show that the proposed method significantly outperforms nine other well-known semi-supervised classification methods in terms of classification accuracy.
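As a rough illustration of the two-stage idea only, the following is a minimal sketch and not the authors' exact algorithm: stage one adds confidently self-labeled unlabeled points to the training set, and stage two classifies each query with a boosted ensemble of decision stumps fitted only on the query's nearest neighbours. The scikit-learn estimators, the confidence threshold and the neighbourhood size k are assumptions made for illustration.

```python
# Minimal two-stage sketch (assumed scikit-learn API; threshold and k are illustrative).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import NearestNeighbors


def self_label(X_lab, y_lab, X_unl, threshold=0.9):
    """Stage 1: augment the labeled set with confidently self-labeled points."""
    base = AdaBoostClassifier(n_estimators=25).fit(X_lab, y_lab)
    proba = base.predict_proba(X_unl)
    confident = proba.max(axis=1) >= threshold       # keep only high-confidence predictions
    y_pseudo = base.classes_[proba.argmax(axis=1)]
    X_aug = np.vstack([X_lab, X_unl[confident]])
    y_aug = np.concatenate([y_lab, y_pseudo[confident]])
    return X_aug, y_aug


def local_boost_predict(X_train, y_train, X_test, k=30):
    """Stage 2: boost decision stumps on each query's k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=min(k, len(X_train))).fit(X_train)
    preds = []
    for x in X_test:
        idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
        y_local = y_train[idx]
        if np.unique(y_local).size == 1:             # degenerate single-class neighbourhood
            preds.append(y_local[0])
            continue
        local = AdaBoostClassifier(n_estimators=10).fit(X_train[idx], y_local)
        preds.append(local.predict(x.reshape(1, -1))[0])
    return np.array(preds)
```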

Notes

Acknowledgements

We wish to thank all three anonymous referees for their valuable comments which helped to improve the manuscript.

Copyright information

© Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  1. Computational Intelligence Laboratory (CILab), Department of Mathematics, University of Patras, Patras, Greece
