From Prime Implicants to Modular Feedforward Networks

  • Uwe Hartmann
Conference paper

Abstract

The paper uses prime implicants and minimal polynomials to reduce the size of the training set of a feedforward neural network. Since the computation of minimal polynomials is intractable, we propose a heuristic that computes reduced polynomials, which are often still able to shrink the training set. Further abstraction leads to modular feedforward sub-architectures of neural networks for special training patterns. Finally, we introduce overlapping modular sub-architectures for distinct training patterns.
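
As an illustration of the underlying idea, the following is a minimal Python sketch, not the paper's heuristic: it computes the prime implicants of a Boolean target function with the classical Quine–McCluskey procedure [15, 22, 23]. Each resulting implicant, with '-' marking a don't-care input, stands in for a whole block of truth-table rows, so the enumerated training set can be replaced by far fewer patterns. The example function and all identifiers are illustrative.

```python
from itertools import combinations

def combine(a: str, b: str) -> str | None:
    """Merge two implicants that differ in exactly one bit, else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    return a[:diff[0]] + "-" + a[diff[0] + 1:] if len(diff) == 1 else None

def prime_implicants(minterms: list[int], n_vars: int) -> set[str]:
    """Quine-McCluskey: merge implicants level by level; terms that
    never take part in a merge are prime."""
    terms = {format(m, f"0{n_vars}b") for m in minterms}
    primes: set[str] = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used      # unmergeable terms are prime
        terms = merged
    return primes

# Example: f(x2, x1, x0) = 1 on minterms {0, 1, 2, 5, 6, 7}.
# Six explicit truth-table rows collapse to six implicant patterns of
# two rows each; a covering subset (e.g. 0-0, -01, 11-) already
# suffices as a reduced training set.
print(sorted(prime_implicants([0, 1, 2, 5, 6, 7], n_vars=3)))
# -> ['-01', '-10', '00-', '0-0', '1-1', '11-']
```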

Keywords

Boolean Function, Network Architecture, Hidden Unit, Output Unit, Feedforward Network

References

  1. R. F. Albrecht, C. R. Reeves, N. C. Steele (eds.). ANNGA93, Springer, 1993.
  2. G. Barna, K. Kaski. Choosing Optimal Network Structure. In: INNC-90, pp. 890–893, Kluwer, 1990.
  3. S. Becker, Y. Le Cun. Improving the Convergence of Back-Propagation Learning with Second Order Methods. In: [29], pp. 29–37.
  4. K. J. Cios, N. Liu. A Comparative Study of Machine Learning Algorithms for Generation of Neural Networks. In: [11], pp. I-189–I-194.
  5. T. Denoeux, R. Lengelle, S. Canu. Initialization of Weights in a Feedforward Neural Network Using Prototypes. In: [11], pp. I-623–I-628.
  6. G. P. Drago, S. Ridella. An Optimum Weights Initialization for Improving Scaling Relationships in BP-Learning. In: [11], pp. II-1519–II-1524.
  7. S. E. Fahlman. Faster-Learning Variations on Back-Propagation: An Empirical Study. In: [29], pp. 38–51.
  8. M. R. Garey, D. S. Johnson. Computers and Intractability. Freeman and Company, 1979.
  9. H. Haario, P. Jokinen. Increasing the Learning Speed of a Backpropagation Algorithm by Linearization. In: [11], pp. I-629–I-634.
  10. S. A. Harp, T. Samad, A. Guha. Designing Application-Specific Neural Networks Using the Genetic Algorithm. In: D. S. Touretzky (ed.). Advances in Neural Information Processing Systems 2, pp. 447–454, Morgan Kaufmann, 1990.
  11. T. Kohonen, K. Mäkisara, O. Simula, J. Kangas (eds.). Artificial Neural Networks, North-Holland, 1991.
  12. M. A. Kraaijveld, R. P. W. Duin. On Backpropagation Learning of Edited Data Sets. In: INNC-90, pp. 741–744, Kluwer, 1990.
  13. Y. Lee, S. Oh, M. Kim. The Effect of Initial Weights on Premature Saturation in Back-Propagation Learning. In: IJCNN91, pp. I-765–I-770, 1991.
  14. J. Lin, J. S. Vitter. Complexity Issues in Learning by Neural Nets. In: R. Rivest, D. Haussler, M. K. Warmuth (eds.). COLT89, pp. 118–132, Morgan Kaufmann, 1989.
  15. E. J. McCluskey. Minimization of Boolean Functions. Bell System Tech. J. 35, pp. 1417–1444, 1956.
  16. S. Makram-Ebeid, J.-A. Sirat, J.-R. Viala. A Rationalized Error Back-Propagation Learning Algorithm. In: IJCNN89, pp. II-373–II-380, 1989.
  17. G. F. Miller, P. M. Todd, S. U. Hegde. Designing Neural Networks Using Genetic Algorithms. In: J. D. Schaffer (ed.). ICGA89, pp. 379–384, Morgan Kaufmann, 1989.
  18. K. Möller, S. Thrun. Task Modularization by Network Modulation. In: Neuro-Nîmes '90, pp. 419–432, 1990.
  19. F. Nadi. Topological Design of Modular Neural Networks. In: [11], pp. I-213–I-218.
  20. N. K. Perugini, W. E. Engeler. Neural Network Learning Time: Effects of Network and Training Set Size. In: IJCNN89, pp. II-395–II-402, 1989.
  21. D. Polani, T. Uthmann. Training Kohonen Feature Maps in Different Topologies: An Analysis Using Genetic Algorithms. In: S. Forrest (ed.). 5th ICGA93, pp. 326–333, Morgan Kaufmann, 1993.
  22. W. V. Quine. Two Theorems about Truth Functions. Bol. Soc. Mat. Mex. 10, pp. 64–70, 1953.
  23. W. V. Quine. A Way to Simplify Truth Functions. Amer. Math. Monthly 62, pp. 627–631, 1955.
  24. D. E. Rumelhart, J. L. McClelland (eds.). Parallel Distributed Processing, Volume 1. The MIT Press, 1986.
  25. M. Schmitt, F. Vallet. Network Configuration and Initialization Using Mathematical Morphology: Theoretical Study of Measurement Functions. In: [11], pp. II-1045–II-1048.
  26. F. M. Silva, L. B. Almeida. Speeding-Up Backpropagation by Data-Orthonormalization. In: [11], pp. I-213–I-218.
  27. W. S. Stornetta, B. A. Huberman. An Improved Three-Layer, Back Propagation Algorithm. In: M. Caudill, C. Butler (eds.). IEEE 1st ICNN87, pp. II-637–II-644, San Diego, 1987.
  28. G. A. Tagliarini, E. W. Page. Learning in Systematically Designed Networks. In: IJCNN89, pp. I-497–I-502, Washington, 1989.
  29. D. Touretzky, G. Hinton, T. Sejnowski (eds.). Proceedings of the 1988 Connectionist Models Summer School, Carnegie Mellon University, Morgan Kaufmann, 1988.

Copyright information

© Springer-Verlag/Wien 1995

Authors and Affiliations

  • Uwe Hartmann
    1. RAG, ZK 5.21, Herne, Germany