Conclusion
At a conceptual level, the task of concept learning can be divided into two subtasks: selecting a proper subset of features with which to describe the concept, and learning a hypothesis based on those features. This decomposition leads directly to a modular design of the learning algorithm, one that allows flexible combinations of explicit feature selection methods with model induction algorithms and sometimes yields powerful variants. Many recent works, however, take a more general view of feature selection as part of model selection, and therefore integrate feature selection more closely into the learning algorithm itself (e.g., the Bayesian feature selection methods). Feature selection for clustering remains a largely untouched problem, and there has been little theoretical characterization of the heuristic approaches described in this chapter. In summary, although no universal strategy can be prescribed, for the high-dimensional problems frequently encountered in microarray analysis, feature selection offers a promising suite of techniques for improving interpretability, performance, and computational efficiency in learning.
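As a concrete illustration of the modular filter-then-learn design described above, the following minimal sketch pairs a Golub-style signal-to-noise gene ranking with a deliberately simple nearest-centroid classifier; all names and the synthetic data are hypothetical, and either component could be swapped independently for another scoring function or induction algorithm.

import numpy as np

def signal_to_noise(X, y):
    """Signal-to-noise relevance score per gene (filter step).
    X: (n_samples, n_genes) expression matrix; y: binary labels in {0, 1}."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
    return np.abs(mu0 - mu1) / (sd0 + sd1 + 1e-12)

def select_top_k(scores, k):
    """Indices of the k highest-scoring features."""
    return np.argsort(scores)[::-1][:k]

class NearestCentroid:
    """Model-induction step: classify by the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Squared Euclidean distance from each sample to each centroid.
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

# Modular pipeline on synthetic data: filter genes first, then fit a model.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 1000))          # 40 samples, 1000 genes
y = np.repeat([0, 1], 20)
X[y == 1, :10] += 2.0                    # plant 10 informative genes

idx = select_top_k(signal_to_noise(X, y), k=10)
model = NearestCentroid().fit(X[:, idx], y)
print("training accuracy:", (model.predict(X[:, idx]) == y).mean())

Because the filter and the learner communicate only through the selected index set, replacing the scoring function (say, with a mutual-information criterion) or the classifier leaves the rest of the pipeline untouched, which is precisely the flexibility the modular view affords.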