Abstract
This paper uses prime implicants and minimal polynomials to reduce the size of the training set of a feedforward neural network. Since the computation of minimal polynomials is intractable, we propose a heuristic that computes reduced polynomials, which are often still able to shrink the training set. Further abstraction leads to modular feedforward sub-architectures of neural networks for special training patterns. Finally, we introduce overlapping modular sub-architectures for distinct training patterns.
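The paper's own reduction heuristic is not reproduced here; as an illustration of the underlying idea, the following is a minimal Quine-McCluskey-style sketch in Python (function names are ours) that computes the prime implicants of a Boolean function given as minterms. Note that exact prime-implicant generation is itself exponential in the worst case, which is precisely why the paper resorts to a heuristic.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') that differ in
    exactly one defined bit position; return None if they cannot merge."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, n):
    """Quine-McCluskey iteration: repeatedly merge implicants that differ
    in one bit; implicants that never take part in a merge are prime."""
    terms = {format(m, '0{}b'.format(n)) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes

# f(x1, x2, x3) true on minterms {000, 001, 010, 011, 111}:
# five positive patterns collapse to two prime implicants.
print(sorted(prime_implicants({0, 1, 2, 3, 7}, 3)))  # ['-11', '0--']
```

In the spirit of the paper, each prime implicant can stand in for all the fully specified training patterns it covers, so a network trained on one representative pattern per implicant sees a much smaller training set than the full truth table.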
Copyright information
© 1995 Springer-Verlag/Wien
Cite this paper
Hartmann, U. (1995). From Prime Implicants to Modular Feedforward Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_46
Print ISBN: 978-3-211-82692-8
Online ISBN: 978-3-7091-7535-4