
Combining Noise Correction with Feature Selection

  • Choh Man Teng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2737)

Abstract

Polishing is a noise correction mechanism that exploits the inter-relationships between attribute and class values in a data set to identify and selectively correct components that are noisy. We applied polishing to a data set of amino acid sequences and associated information on point mutations of the gene COL1A1 for classifying the phenotypes of the genetic collagenous disease Osteogenesis Imperfecta (OI). OI is associated with mutations in one or both of the genes COL1A1 and COL1A2. There are at least four known phenotypes of OI, of which type II is the most severe and often lethal. Preliminary results suggest that polishing can lead to higher classification accuracy. We further investigated the use of polishing as a scoring mechanism for feature selection, and the effect of the features so derived on the resulting classifier. Our experiments on the OI data set suggest that combining polishing and feature selection is a viable mechanism for improving data quality.
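The core idea of polishing can be illustrated with a minimal sketch: treat each attribute in turn as a prediction target given the class value, and correct recorded values that the induced model confidently contradicts. This toy version uses a simple conditional-mode predictor and an assumed 0.8 dominance threshold; the actual polishing procedure trains cross-validated classifiers per attribute, so the function and threshold below are illustrative assumptions, not the paper's method.

```python
from collections import Counter, defaultdict

def polish(rows, class_idx, threshold=0.8):
    """Illustrative polishing sketch.

    For each attribute (other than the class), build the conditional
    distribution of its values given the class label, then replace a
    recorded value when the conditional mode is strongly dominant and
    disagrees with it. `threshold` is an assumed dominance cutoff.
    """
    n_attrs = len(rows[0])
    polished = [list(r) for r in rows]
    for a in range(n_attrs):
        if a == class_idx:
            continue
        # Conditional distribution of attribute a given the class value.
        dist = defaultdict(Counter)
        for r in rows:
            dist[r[class_idx]][r[a]] += 1
        for r in polished:
            counts = dist[r[class_idx]]
            mode, freq = counts.most_common(1)[0]
            # Correct only when the mode clearly dominates the distribution.
            if r[a] != mode and freq / sum(counts.values()) > threshold:
                r[a] = mode
    return polished

# Example: one attribute tracks the class in 5 of 6 rows; the odd value
# in the last row is treated as noise and corrected to the mode.
rows = [("x", "pos")] * 5 + [("y", "pos")]
cleaned = polish(rows, class_idx=1)
```

In this sketch only attribute values are corrected, never the class label itself; a per-attribute classifier (e.g., a decision tree over the remaining attributes and the class) would replace the conditional-mode model in a faithful implementation.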

Keywords

Feature Selection, Osteogenesis Imperfecta, Polished Data, Classification, Noise Correction



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Choh Man Teng
    1. Institute for Human and Machine Cognition, University of West Florida, Pensacola, USA
