Abstract
We describe the IGTree learning algorithm, which compresses an instance base into a tree structure. The concept of information gain is used as a heuristic function for performing this compression. IGTree produces trees that, compared to other lazy learning approaches, reduce storage requirements and the time required to compute classifications. Furthermore, we obtained similar or better generalization accuracy with IGTree when trained on two complex linguistic tasks, viz. letter–phoneme transliteration and part-of-speech tagging, when compared to alternative lazy learning and decision tree approaches (viz., IB1, information-gain-weighted IB1, and C4.5). A third experiment, with the task of word hyphenation, demonstrates that when the mutual differences in the information gain of features are too small, IGTree as well as information-gain-weighted IB1 perform worse than IB1. These results indicate that IGTree is a useful algorithm for problems characterized by the availability of a large number of training instances described by symbolic features with sufficiently differing information gain values.
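The compression scheme described in the abstract can be illustrated with a small, simplified sketch (not the authors' implementation): features are ordered once by information gain, a tree is built along that fixed order with the most frequent (default) class stored at every node, and classification follows the path of matching feature values, falling back on the last default encountered when a value is unseen. The full IGTree algorithm additionally prunes nodes that would not change the classification; that pruning step is omitted here for brevity.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(instances, labels, i):
    # IG(i) = H(C) - sum_v P(feature_i = v) * H(C | feature_i = v)
    by_value = {}
    for inst, lab in zip(instances, labels):
        by_value.setdefault(inst[i], []).append(lab)
    remainder = sum(len(ls) / len(labels) * entropy(ls)
                    for ls in by_value.values())
    return entropy(labels) - remainder

def build_igtree(instances, labels, order):
    # Each node stores the most frequent class as a default, plus children
    # keyed by the value of the next feature in the information-gain order.
    node = {"default": Counter(labels).most_common(1)[0][0], "children": {}}
    if not order or len(set(labels)) == 1:
        return node  # exhausted features, or subtree is unambiguous
    i, rest = order[0], order[1:]
    groups = {}
    for inst, lab in zip(instances, labels):
        insts, labs = groups.setdefault(inst[i], ([], []))
        insts.append(inst)
        labs.append(lab)
    for value, (insts, labs) in groups.items():
        node["children"][value] = build_igtree(insts, labs, rest)
    return node

def classify(tree, instance, order):
    # Follow matching feature values; on an unseen value, return the
    # default class of the deepest node reached.
    node = tree
    for i in order:
        child = node["children"].get(instance[i])
        if child is None:
            break
        node = child
    return node["default"]
```

A usage sketch on toy data: compute the gain of each feature, sort feature indices by descending gain, build the tree once, and classify by a single root-to-leaf traversal, which is what makes classification cheap compared to exhaustive nearest-neighbor search over the instance base.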
References
Aha, D. W., Kibler, D. & Albert, M. (1991). Instance-Based Learning Algorithms. Machine Learning 6: 37–66.
Aha, D. W. (1992). Generalizing from Case Studies: A Case Study. In Proceedings of the Ninth International Conference on Machine Learning, 1–10. Aberdeen, Scotland: Morgan Kaufmann.
Cardie, C. (1993). A Case-Based Approach to Knowledge Acquisition for Domain-Specific Sentence Analysis. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pp. 798–803, San Jose, CA: AAAI Press.
Daelemans, W. (1995). Memory-based Lexical Acquisition and Processing. In Steffens, P. (ed.) Machine Translation and the Lexicon, Lecture Notes in Artificial Intelligence, 898. Berlin: Springer.
Daelemans, W. and Van den Bosch, A. (1992). Generalisation Performance of Backpropagation Learning on a Syllabification Task. In Drossaers, M. and Nijholt, A. (eds.) TWLT3: Connectionism and Natural Language Processing. Enschede: Twente University.
Daelemans, W. and Van den Bosch, A. (1994). A Language-Independent, Data-Oriented Architecture for Grapheme-to-Phoneme Conversion. In Proceedings of the ESCA-IEEE Speech Synthesis Conference ’94. New York.
Deng, K. and Moore, A. W. (1995). Multiresolution Instance-Based Learning. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. Montreal: Morgan Kaufmann.
Dougherty, J., Kohavi, R. and Sahami, M. (1995). Supervised and Unsupervised Discretization of Continuous Features. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 194–202, Tahoe City, CA: Morgan Kaufmann.
Friedman, J. H., Bentley, J. L. and Finkel, R. A. (1977). An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Transactions on Mathematical Software 3(3): 209–226.
Kitano, H. (1993). Challenges of Massive Parallelism. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pp. 813–834, Chambéry, France: Morgan Kaufmann.
Kohavi, R. and Li, C-H. (1995). Oblivious Decision Trees, Graphs & Top-Down Pruning. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1071–1077. Montreal: Morgan Kaufmann.
Langley, P. and Sage, S. (1994). Oblivious Decision Trees and Abstract Cases. In Aha, D. W. (ed.) Case-Based Reasoning: Papers from the 1994 Workshop (Technical Report WS-94-01). Menlo Park, CA: AAAI Press.
Nunn, A. and van Heuven, V. J. (1993). Morphon, Lexicon-Based Text-to-Phoneme Conversion and Phonological Rules. In van Heuven, V. J. and Pols, L. C. (eds.) Analysis and Synthesis of Speech: Strategic Research Towards High-Quality Text-to-Speech Generation. Berlin: Mouton de Gruyter.
Omohundro, S. M. (1991). Bumptrees for Efficient Function, Constraint & Classification Learning. In Lippmann, R. P., Moody J. E. and Touretzky, D. S. (eds.) Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann.
Quinlan, J. (1993). C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.
Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986). Learning Internal Representations by Error Propagation. In Rumelhart, D. E. and McClelland, J. L. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations. Cambridge, MA: The MIT Press.
Sejnowski, T. J. and Rosenberg, C. R. (1987). Parallel Networks that Learn to Pronounce English Text. Complex Systems 1: 145–168.
Stanfill, C. and Waltz, D. (1986). Toward Memory-Based Reasoning. Communications of the ACM 29: 1212–1228.
Van den Bosch, A. and Daelemans, W. (1993). Data-Oriented Methods for Grapheme-to-Phoneme Conversion. In Proceedings of the 6th Conference of the EACL, 45–53. Utrecht: OTS.
Weijters, A. and Hoppenbrouwers, G. (1990). NetSpraak: een neuraal netwerk voor grafeem-foneem-omzetting [NetSpraak: a neural network for grapheme-to-phoneme conversion]. Tabu 20(1): 1–25.
Weijters, A. (1991). A Simple Look-Up Procedure Superior to NETtalk? In Proceedings of the International Conference on Artificial Neural Networks. Espoo, Finland.
Wess, S., Althoff, K. D. and Derwand, G. (1994). Using k-d Trees to Improve the Retrieval Step in Case-Based Reasoning. In Wess, S., Althoff K. D. and Richter, M. M. (eds.) Topics in Case-Based Reasoning. Berlin: Springer Verlag.
Wess, S. (1995). Fallbasiertes Problemlösen in wissensbasierten Systemen zur Entscheidungsunterstützung und Diagnostik [Case-based problem solving in knowledge-based systems for decision support and diagnosis]. Doctoral Dissertation, University of Kaiserslautern.
Wolpert, D. H. (1990). Constructing a Generalizer Superior to NETtalk via a Mathematical Theory of Generalization. Neural Networks 3: 445–452.
© 1997 Springer Science+Business Media Dordrecht
Cite this chapter
Daelemans, W., Van Den Bosch, A., Weijters, T. (1997). IGTree: Using Trees for Compression and Classification in Lazy Learning Algorithms. In: Aha, D.W. (eds) Lazy Learning. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2053-3_15
DOI: https://doi.org/10.1007/978-94-017-2053-3_15
Publisher Name: Springer, Dordrecht
Print ISBN: 978-90-481-4860-8
Online ISBN: 978-94-017-2053-3
eBook Packages: Springer Book Archive