Transformation of attribute space by function decomposition

  • Janez Demšar
  • Blaž Zupan
  • Ivan Bratko
Part of the International Centre for Mechanical Sciences book series (CISM, volume 431)

Abstract

Function decomposition is a promising mechanism for machine learning. This paper investigates its use as a preprocessor for redundancy removal and feature construction. Experiments show that combining it with the naive Bayesian classifier and with decision trees is especially successful on artificial domains, while the results on real-world data are less encouraging.
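To make the mechanism concrete, the following is a minimal sketch of one step of decomposition-based feature construction: given a chosen partition of the attributes into a bound set and a free set, the bound attributes are replaced by a single new intermediate attribute obtained by merging compatible columns of the partition matrix. The function name `decompose`, the greedy column-merging strategy, and the toy domain are illustrative assumptions, not the authors' exact algorithm (their HINT system additionally searches for good attribute partitions and uses minimal-complexity colouring).

```python
"""Minimal sketch of single-step function decomposition for feature
construction (Curtis-style decomposition in spirit); the greedy
column-merging below is an assumption for illustration only."""
from itertools import product

def decompose(examples, bound, free):
    """Replace the bound attributes with one constructed feature.

    examples: dict mapping attribute tuples (bound values..., free
    values...) to class labels. Returns a dict mapping each bound-value
    tuple to a value of the new intermediate concept.
    """
    nb = len(bound)
    bound_vals = sorted({e[:nb] for e in examples})
    free_vals = sorted({e[nb:] for e in examples})

    # A partition-matrix column records the class behaviour of one
    # bound-value combination across all free-value combinations
    # (None where the example is missing).
    def column(bv):
        return tuple(examples.get(bv + fv) for fv in free_vals)

    def compatible(c1, c2):
        # Columns are compatible if they never disagree on a row
        # where both are defined.
        return all(a is None or b is None or a == b
                   for a, b in zip(c1, c2))

    # Greedily merge mutually compatible columns into groups; each
    # group becomes one value of the constructed feature.
    groups = []   # merged columns seen so far
    labels = {}   # bound-value tuple -> feature value (group index)
    for bv in bound_vals:
        col = column(bv)
        for i, gcol in enumerate(groups):
            if compatible(col, gcol):
                groups[i] = tuple(a if a is not None else b
                                  for a, b in zip(gcol, col))
                labels[bv] = i
                break
        else:
            labels[bv] = len(groups)
            groups.append(col)
    return labels

# Toy domain: y = (a AND b) OR c, with bound = (a, b), free = (c,).
data = {(a, b, c): int((a and b) or c)
        for a, b, c in product([0, 1], repeat=3)}
print(decompose(data, bound=('a', 'b'), free=('c',)))
# -> {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```

On this toy domain the sketch recovers a binary intermediate concept equivalent to c(a, b) = a AND b, so the two bound attributes collapse into a single constructed attribute, which is the redundancy-removal effect the abstract refers to.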

Keywords

Decision Tree · Classification Accuracy · Attribute Space · Information Gain · Bayesian Classifier

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Janez Demšar (1)
  • Blaž Zupan (1)
  • Ivan Bratko (1)

  1. Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia