
Ensembles of Similarity-Based Models

  • Włodzisław Duch
  • Karol Grudziński
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 10)

Abstract

Ensembles of independent classifiers are usually more accurate and show smaller variance than individual classifiers. Methods for selecting the Similarity-Based Models (SBM) that should be included in an ensemble are discussed. Standard k-NN, weighted k-NN, ensembles of weighted models, and ensembles of averaged weighted models are considered. Ensembles of competent models are introduced. Results of numerical experiments on benchmark and real-world datasets are presented.
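The abstract mentions weighted k-NN models combined in a majority-voting ensemble. As a rough illustration only, and not the authors' SBM framework, the sketch below builds an ensemble of k-NN classifiers, each using its own feature-weight vector in the distance function, and combines their predictions by simple majority vote. The weight vectors, the value of k, and the toy data are hypothetical choices made purely for the example.

```python
# Minimal sketch (not the authors' algorithm): a majority-vote ensemble of
# feature-weighted k-NN classifiers.  Weights, k, and the toy data below are
# illustrative assumptions.
import numpy as np


def weighted_knn_predict(X_train, y_train, X_test, weights, k=3):
    """Predict labels with k-NN using a feature-weighted Euclidean distance."""
    preds = []
    for x in X_test:
        # Weighted squared Euclidean distance to every reference vector.
        d = np.sum(weights * (X_train - x) ** 2, axis=1)
        nearest = np.argsort(d)[:k]
        votes = np.bincount(y_train[nearest])
        preds.append(int(np.argmax(votes)))
    return np.array(preds)


def ensemble_predict(X_train, y_train, X_test, weight_sets, k=3):
    """Combine several weighted k-NN models by simple majority vote."""
    all_preds = np.array([
        weighted_knn_predict(X_train, y_train, X_test, w, k)
        for w in weight_sets
    ])
    # Majority vote across ensemble members (column-wise, one column per test point).
    return np.array([int(np.argmax(np.bincount(col))) for col in all_preds.T])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy two-class data: class 0 near the origin, class 1 shifted.
    X_train = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
    y_train = np.array([0] * 20 + [1] * 20)
    X_test = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(2, 1, (5, 4))])

    # Three members with different (hypothetical) feature-weight vectors.
    weight_sets = [
        np.array([1.0, 1.0, 1.0, 1.0]),   # plain k-NN
        np.array([2.0, 1.0, 0.5, 1.0]),   # emphasises feature 1
        np.array([0.5, 2.0, 1.0, 0.5]),   # emphasises feature 2
    ]
    print(ensemble_predict(X_train, y_train, X_test, weight_sets, k=3))
```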

Keywords

Feature Selection, Majority Vote, Weighted Model, Reference Vector, Competent Model

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Włodzisław Duch
  • Karol Grudziński
  1. Department of Computer Methods, Nicholas Copernicus University, Toruń, Poland
