Abstract
We investigate the generation of neural networks through the induction of binary trees of threshold logic units (TLUs). We first describe the framework for our tree construction algorithm and show how such trees can be transformed into an isomorphic neural network topology. Several methods for learning the linear discriminant function at each node of the tree are examined and shown to produce accuracy comparable to classical information-theoretic methods for constructing decision trees (which use single-feature tests at each node). Our TLU trees, however, are smaller and thus easier to understand. Moreover, we show that it is possible to learn both the topology and the weight settings of a neural network simultaneously, using only the given training data set.
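To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of the data structure the abstract describes: a binary tree whose internal nodes are threshold logic units. Each TLU evaluates a linear test `w·x + b >= 0` and routes an example to one of its two subtrees; leaves hold class labels. The weights and topology below are invented for illustration only.

```python
# Illustrative sketch, assuming a simple TLU-tree classifier:
# each internal node is a threshold logic unit (a linear test),
# and each leaf is a class label.

class TLUNode:
    def __init__(self, weights, bias, left, right):
        self.weights = weights   # linear discriminant weights
        self.bias = bias         # threshold offset
        self.left = left         # subtree taken when the TLU fires (w.x + b >= 0)
        self.right = right       # subtree taken otherwise

    def classify(self, x):
        s = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        branch = self.left if s >= 0 else self.right
        # Leaves are plain labels; internal nodes recurse.
        return branch.classify(x) if isinstance(branch, TLUNode) else branch

# Toy tree: the root tests x0 + x1 - 1.5 >= 0 (an AND-like split);
# the "no" branch refines the decision with x0 - x1 >= 0.
tree = TLUNode([1.0, 1.0], -1.5,
               "pos",
               TLUNode([1.0, -1.0], 0.0, "a", "b"))

print(tree.classify([1, 1]))  # both features on -> "pos"
print(tree.classify([1, 0]))  # -> "a"
print(tree.classify([0, 1]))  # -> "b"
```

Because every node is itself a linear unit, such a tree maps directly onto an isomorphic network of threshold units, which is the transformation the abstract refers to.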
© 1995 Springer-Verlag Berlin Heidelberg
Sahami, M. (1995). Generating neural networks through the induction of threshold logic unit trees (Extended abstract). In: Lavrac, N., Wrobel, S. (eds) Machine Learning: ECML-95. ECML 1995. Lecture Notes in Computer Science, vol 912. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59286-5_82
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-59286-0
Online ISBN: 978-3-540-49232-0