Semi-supervised Learning for Regression with Co-training by Committee

  • Mohamed Farouk Abdel Hady
  • Friedhelm Schwenker
  • Günther Palm
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5768)

Abstract

Semi-supervised learning is a paradigm that exploits unlabeled data in addition to labeled data in order to reduce the generalization error of a supervised learning algorithm. Although regression is as important as classification in real-world applications, most semi-supervised learning research concentrates on classification. In particular, although Co-Training is a popular semi-supervised learning algorithm, little work has been done on developing new Co-Training-style algorithms for semi-supervised regression. In this paper, a semi-supervised regression framework, denoted CoBCReg, is proposed, in which an ensemble of diverse regressors is used for semi-supervised learning, requiring neither redundant independent views nor different base learning algorithms. Experimental results show that CoBCReg can effectively exploit unlabeled data to improve the regression estimates.
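
To make the co-training-by-committee idea concrete, below is a minimal sketch of how such a training loop could be arranged. It is an illustration under assumptions, not the authors' exact CoBCReg procedure: the decision-tree base learner, the committee size, the pool handling, and the variance-based confidence heuristic are all stand-ins chosen for brevity.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def cobc_regression(X_lab, y_lab, X_unlab, n_regressors=3,
                        n_iterations=10, pool_size=50, grow_size=1, seed=0):
        """Illustrative co-training-by-committee loop for regression.

        Diversity comes from bagging: each member is trained on its own
        bootstrap replicate of the labeled data, so no redundant feature
        views or heterogeneous base learners are needed.
        """
        rng = np.random.default_rng(seed)
        # Bagging: one bootstrap replicate of the labeled set per member.
        train_sets = []
        for _ in range(n_regressors):
            idx = rng.integers(0, len(X_lab), size=len(X_lab))
            train_sets.append((X_lab[idx].copy(), y_lab[idx].copy()))
        models = [DecisionTreeRegressor(max_depth=5, random_state=k)
                  for k in range(n_regressors)]
        for model, (Xt, yt) in zip(models, train_sets):
            model.fit(Xt, yt)

        unlab = np.asarray(X_unlab).copy()
        for _ in range(n_iterations):
            if len(unlab) == 0:
                break
            # Draw a random pool of unlabeled candidates for this round.
            pool_idx = rng.choice(len(unlab), size=min(pool_size, len(unlab)),
                                  replace=False)
            pool = unlab[pool_idx]
            used = set()
            for j in range(n_regressors):
                # The "companion committee" of member j pseudo-labels the pool.
                companions = [m for k, m in enumerate(models) if k != j]
                preds = np.stack([m.predict(pool) for m in companions])
                # Confidence heuristic (an assumption of this sketch): low
                # disagreement among companions = trustworthy pseudo-label.
                best = np.argsort(preds.var(axis=0))[:grow_size]
                Xj, yj = train_sets[j]
                train_sets[j] = (np.vstack([Xj, pool[best]]),
                                 np.concatenate([yj, preds.mean(axis=0)[best]]))
                models[j].fit(*train_sets[j])
                used.update(pool_idx[best].tolist())
            # Remove consumed examples from the unlabeled set.
            unlab = np.delete(unlab, sorted(used), axis=0)
        return models

A caller would average the committee members' predictions at test time, e.g. y_hat = np.mean([m.predict(X_test) for m in committee], axis=0). The point mirrored from the abstract is that diversity is created by bootstrap sampling alone, so the scheme needs neither redundant independent views nor different base learning algorithms.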

Keywords

Root Mean Square Error · Ensemble Member · Unlabeled Data · Generalization Error · Supervised Learning Algorithm

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Mohamed Farouk Abdel Hady¹
  • Friedhelm Schwenker¹
  • Günther Palm¹

  1. Institute of Neural Information Processing, University of Ulm, Ulm, Germany
