
An Improved Random Subspace Method and Its Application to EEG Signal Classification

Conference paper · Multiple Classifier Systems (MCS 2007)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4472)


Abstract

Ensemble learning is one of the principal current research directions in machine learning. In this paper, we explore subspace ensembles for classification, which build an ensemble classifier system by manipulating different feature subspaces. Starting from the nature of ensemble efficacy, we examine the fine-grained meaning of ensemble diversity and propose region partitioning and region weighting as a mechanism for constructing effective subspace ensembles. We present an improved random subspace method that integrates this mechanism: individual classifiers that perform well on a partitioned region, as reflected by high neighborhood accuracies, are deemed to contribute most to that region and are assigned large weights when determining the labels of instances in this area. The robustness and effectiveness of the proposed method are shown empirically, with linear support vector machines as base classifiers, on the classification of EEG signals.
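The abstract describes the mechanism only at a high level. As a concrete illustration, below is a minimal sketch of a neighborhood-weighted random subspace ensemble, assuming scikit-learn's LinearSVC as the base learner and a k-nearest-neighbor notion of a test instance's region; the class name, parameters, and exact weighting scheme are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a neighborhood-weighted random subspace ensemble.
# Assumptions (not from the paper): k-NN defines a test point's "region",
# and a member's weight is its accuracy on those k training neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC


class WeightedRandomSubspace:
    """Random subspace ensemble whose members are weighted, per test
    instance, by their accuracy on that instance's training neighbors."""

    def __init__(self, n_estimators=10, subspace_frac=0.5, k=5, seed=0):
        self.n_estimators = n_estimators
        self.subspace_frac = subspace_frac  # fraction of features per member
        self.k = k                          # neighborhood size ("region")
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        d = max(1, int(self.subspace_frac * n_features))
        # Each member is a linear SVM trained on a random feature subset.
        self.members_ = []
        for _ in range(self.n_estimators):
            idx = self.rng.choice(n_features, size=d, replace=False)
            clf = LinearSVC().fit(X[:, idx], y)
            self.members_.append((idx, clf))
        # Keep the training data to estimate neighborhood accuracies later.
        self.X_train_, self.y_train_ = X, y
        self.nn_ = NearestNeighbors(n_neighbors=self.k).fit(X)
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        # A test instance's region is its k nearest training points; each
        # member votes with a weight equal to its accuracy on that region.
        neighborhoods = self.nn_.kneighbors(X, return_distance=False)
        preds = np.empty(len(X), dtype=self.classes_.dtype)
        for i, nbrs in enumerate(neighborhoods):
            votes = {c: 0.0 for c in self.classes_}
            for idx, clf in self.members_:
                local_acc = np.mean(
                    clf.predict(self.X_train_[nbrs][:, idx]) == self.y_train_[nbrs]
                )
                label = clf.predict(X[i : i + 1, idx])[0]
                votes[label] += local_acc
            preds[i] = max(votes, key=votes.get)
        return preds
```

The design choice sketched here is that member weights are recomputed per test instance from each member's accuracy on that instance's training-set neighbors, so a subspace classifier influences predictions mainly in the regions where it performs well.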




Editor information

Michal Haindl, Josef Kittler, Fabio Roli


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Sun, S. (2007). An Improved Random Subspace Method and Its Application to EEG Signal Classification. In: Haindl, M., Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2007. Lecture Notes in Computer Science, vol 4472. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72523-7_11


  • DOI: https://doi.org/10.1007/978-3-540-72523-7_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72481-0

  • Online ISBN: 978-3-540-72523-7

  • eBook Packages: Computer Science (R0)
