Semi-supervised Robust Alternating AdaBoost

  • Héctor Allende-Cid
  • Jorge Mendoza
  • Héctor Allende
  • Enrique Canessa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5856)

Abstract

Semi-Supervised Learning is one of the most popular and rapidly emerging topics in Machine Learning. Since labeling large amounts of data is very costly, it is useful to exploit data sets without labels; Semi-Supervised Learning is normally used for this purpose, to improve the performance or efficiency of classification algorithms.

This paper applies Semi-Supervised Learning techniques to boost the performance of the Robust Alternating AdaBoost (RADA) algorithm.

We introduce the RADA+ algorithm and compare it with RADA, reporting performance results on synthetic and real data sets, the latter obtained from a benchmark site.
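
The abstract does not describe how RADA+ actually incorporates the unlabeled data, so the sketch below is only a minimal, hypothetical illustration of the general semi-supervised idea: a boosting ensemble (scikit-learn's standard AdaBoost as a stand-in for RADA, whose internals are not given on this page) is retrained over several self-training rounds, each time pseudo-labeling the unlabeled points it classifies most confidently. The synthetic data, the 0.95 confidence threshold, and the number of rounds are illustrative assumptions, not details from the paper.

    # Hypothetical sketch: self-training around a boosting ensemble.
    # AdaBoostClassifier stands in for RADA; threshold and round count
    # are assumptions for illustration only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic data: a small labeled set and a large "unlabeled" pool.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.1,
                                                random_state=0)

    X_train, y_train = X_lab, y_lab
    for _ in range(5):  # a few self-training rounds
        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(X_train, y_train)
        if len(X_unlab) == 0:
            break
        # Pseudo-label only the unlabeled points the ensemble is most sure of.
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) > 0.95
        if not keep.any():
            break
        X_train = np.vstack([X_train, X_unlab[keep]])
        y_train = np.concatenate([y_train, clf.predict(X_unlab)[keep]])
        X_unlab = X_unlab[~keep]

    print("final training set size:", len(X_train))

Note that, judging from the keywords, RADA+ relies on Expectation Maximization rather than this hard self-training loop; the simpler pseudo-labeling scheme above is only meant to convey how unlabeled data can enter the training process of a boosting ensemble.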

Keywords

Semi-Supervised Learning · Expectation Maximization · Machine ensembles · Robust Alternating AdaBoost


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Héctor Allende-Cid (1)
  • Jorge Mendoza (2)
  • Héctor Allende (1, 2)
  • Enrique Canessa (2)
  1. Dept. de Informática, Universidad Técnica Federico Santa María, Valparaíso, Chile
  2. Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Viña del Mar, Chile
