Abstract
This paper proposes a new supervised induction algorithm, IGR, that uses each training instance as a guide for rule induction. IGR learns a set of if-then rules by inducing a pseudo-optimal classification rule for each training instance. It weighs each induced rule by the number of training instances the rule covers and classifies new instances by weighted majority voting. Experimental results on twenty datasets from the UCI repository show that IGR induces more accurate classification rules than existing learning algorithms such as C4.5, AQ, and LazyDT. The experiments also show that IGR does not generate too many rules even when applied to large problems.
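The classification step described in the abstract — every rule covering a new instance votes for its class, weighted by the number of training instances the rule covers — can be sketched as follows. This is an illustrative sketch only, with a hypothetical rule representation (a dictionary of attribute conditions plus a class label and coverage weight); it is not the authors' implementation.

```python
def covers(rule, instance):
    """A rule fires when every attribute condition matches the instance.
    Rule format (hypothetical): {"conditions": {attr: value, ...},
    "class": label, "weight": coverage_count}."""
    return all(instance.get(attr) == val
               for attr, val in rule["conditions"].items())

def classify(rules, instance):
    """Weighted majority vote: each covering rule casts a vote for its
    class, weighted by the number of training instances it covers."""
    votes = {}
    for rule in rules:
        if covers(rule, instance):
            votes[rule["class"]] = votes.get(rule["class"], 0) + rule["weight"]
    # Return the class with the largest total weight; None if no rule fires.
    return max(votes, key=votes.get) if votes else None

# Example: two conflicting rules; the heavier-weighted one wins.
rules = [
    {"conditions": {"outlook": "sunny"}, "class": "no", "weight": 3},
    {"conditions": {"humidity": "normal"}, "class": "yes", "weight": 5},
]
print(classify(rules, {"outlook": "sunny", "humidity": "normal"}))  # yes
```

Weighting votes by coverage lets rules backed by many training instances dominate rules induced from only a few, which is one plausible reading of why the abstract reports good accuracy without an explicit conflict-resolution strategy.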
References
Aha, D.W.: Lazy Learning. Kluwer Academic Publishers (1997)
Breiman, L.: Bagging Predictors. Machine Learning, 24 (1996) 123–140
Breiman, L., Friedman, J. H., Olshen, R.A. and Stone, C. J.: Classification and Regression Trees. Wadsworth International Group (1984)
Clark, P. and Niblett, T.: The CN2 Induction Algorithm. Machine Learning, 3 (1989) 261–283
Cohen, W.: Fast Effective Rule Induction. In Proc. of the Twelfth International Conference on Machine Learning (1995) 115–123
Dasarathy, B.V.: Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press (1990)
Fayyad, U.M.: Branching on Attribute Values in Decision Tree Generation. In Proc. of the Twelfth National Conference on Artificial Intelligence (1994) 601–606
Fayyad, U.M. and Irani, K. B.: Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In Proc. of the 13th International Joint Conference on Artificial Intelligence (1993), 1022–1027
Friedman, J. H., Kohavi, R. and Yun, Y.: Lazy Decision Trees. In Proc. of the 13th National Conference on Artificial Intelligence (1996) 717–724.
Freund, Y. and Schapire, R. E.: Experiments with a new Boosting Algorithm. In Proc. of the Thirteenth International Conference on Machine Learning (1996) 148–156.
Hastie, T. and Tibshirani, R.: Discriminant Nearest Neighbor Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18 (1996) 607–616
Kerber, R.: Discretization of Numeric Attributes. In Proc. of the Tenth National Conference on Artificial Intelligence (1992) 123–138
Kohavi, R. and Kunz, C.: Option Decision Trees with Majority Voting. In Proc. of the Fourteenth International Conference on Machine Learning (1997) 161–169.
Michalski, R.S. and Larson, J.: Incremental Generation of VL1 Hypotheses: The Underlying Methodology and the Description of Program AQ11. ISG 83-5, Dept. of Computer Science, Univ. of Illinois at Urbana-Champaign, Urbana (1983)
Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers (1993)
Quinlan, J.R.: Bagging, Boosting and C4.5. In Proc. of the Thirteenth National Conference on Artificial Intelligence (1996) 725–730
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Yugami, N., Ohta, Y., Okamoto, S. (1998). Instance Guided Rule Induction. In: Arikawa, S., Motoda, H. (eds) Discovery Science. DS 1998. Lecture Notes in Computer Science, vol 1532. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49292-5_19
DOI: https://doi.org/10.1007/3-540-49292-5_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65390-5
Online ISBN: 978-3-540-49292-4