On Stopping Rules in Dependency-Aware Feature Ranking
Feature selection in very-high-dimensional or small-sample problems is particularly prone to computational and robustness complications. It is therefore common to resort to feature ranking alone or to randomization techniques. Dependency-Aware Feature Ranking (DAF), a recent take on the randomization idea, has shown great potential in tackling these problems. Its original definition, however, leaves several technical questions open. In this paper we address one of them: how to define stopping rules for the randomized computation that stands at the core of the DAF method. We propose stopping rules that are easier to interpret and show that the number of randomly generated probes does not need to be large.
Keywords: dimensionality reduction · feature selection · randomization · stopping rule
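To make the setting concrete, the following is a minimal sketch of DAF-style randomized probing with an illustrative stopping rule. It assumes only the general scheme: random feature subsets (probes) are evaluated by some criterion, per-feature scores are accumulated from the probe values, and probing halts once the scores stabilize between checkpoints. The function name, the parameters, and the stability rule itself are hypothetical illustrations, not the rules defined in the paper.

```python
import random


def daf_style_ranking(n_features, criterion, max_probes=5000,
                      check_every=200, stability_tol=1e-3, rng=None):
    """Rank features by randomized probing (hedged sketch).

    `criterion(subset)` scores a candidate feature subset; higher is
    better. Each feature's score is the mean criterion value over the
    probes that contained it. Probing stops early when per-feature
    scores change by less than `stability_tol` between checkpoints --
    an illustrative stopping rule, not the one from the paper.
    """
    rng = rng or random.Random(0)
    score_sum = [0.0] * n_features
    count = [0] * n_features
    prev = None
    for probe in range(1, max_probes + 1):
        # Draw one random probe: a subset of random size.
        size = rng.randint(1, n_features)
        subset = rng.sample(range(n_features), size)
        value = criterion(subset)
        for f in subset:
            score_sum[f] += value
            count[f] += 1
        # Checkpoint: stop if the score vector has stabilized.
        if probe % check_every == 0:
            cur = [score_sum[f] / count[f] if count[f] else 0.0
                   for f in range(n_features)]
            if prev is not None and max(
                    abs(a - b) for a, b in zip(cur, prev)) < stability_tol:
                break
            prev = cur
    # Return features ordered from best to worst mean score.
    return sorted(range(n_features),
                  key=lambda f: -(score_sum[f] / max(count[f], 1)))
```

For instance, with a toy criterion that rewards one informative feature, the ranking places that feature first; in practice the criterion would be a classifier-based or filter-based subset evaluation as in the DAF literature.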