From Prime Implicants to Modular Feedforward Networks

  • Conference paper

Abstract

This paper uses prime implicants and minimal polynomials to reduce the size of the training set of a feedforward neural network. Since the computation of minimal polynomials is intractable, we propose a heuristic that computes reduced polynomials instead, which are often still able to shrink the training set. Further abstraction leads to modular feedforward sub-architectures of neural networks for special training patterns. Finally, we introduce overlapping modular sub-architectures for distinct training patterns.
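
As background for the terminology above: the prime implicants of a Boolean function are its maximally reduced product terms, and a minimal polynomial (minimal disjunctive normal form) is assembled from a smallest covering subset of them, classically via the Quine-McCluskey procedure [15, 22, 23]. The Python sketch below illustrates only that classical prime implicant step; it is not the paper's heuristic, which deliberately settles for reduced rather than minimal polynomials precisely because exact minimisation is intractable.

    # Illustrative Quine-McCluskey-style prime implicant computation
    # (background only; not the heuristic proposed in the paper).
    # Implicants are bit strings over {'0', '1', '-'}; '-' marks an
    # eliminated variable, e.g. '1-0' covers minterms 100 and 110.
    from itertools import combinations

    def combinable(a, b):
        # Two implicants merge iff they differ in exactly one fixed bit.
        diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
        return len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-'

    def merge(a, b):
        # Replace the single differing bit by '-'.
        return ''.join('-' if x != y else x for x, y in zip(a, b))

    def prime_implicants(minterms):
        # Repeatedly merge implicant pairs; any implicant that is never
        # merged is maximal, i.e. a prime implicant.
        current, primes = set(minterms), set()
        while current:
            merged, used = set(), set()
            for a, b in combinations(current, 2):
                if combinable(a, b):
                    merged.add(merge(a, b))
                    used.update((a, b))
            primes |= current - used
            current = merged
        return primes

    # Example: f(x1, x2, x3) with minterms {0, 1, 2, 5, 6, 7}
    print(sorted(prime_implicants({'000', '001', '010', '101', '110', '111'})))
    # -> ['-01', '-10', '0-0', '00-', '1-1', '11-']

From the prime implicants, a minimal polynomial is then obtained by selecting a smallest subset that still covers all minterms, a covering problem of the kind shown intractable in [8]; this is the step the paper's heuristic sidesteps by accepting reduced polynomials.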

References

  1. R. F. Albrecht, C. R. Reeves, N. C. Steele (eds.). ANNGA93, Springer, 1993.

  2. G. Barna, K. Kaski. Choosing Optimal Network Structure. In: INNC-90, pp. 890–893, Kluwer, 1990.

  3. S. Becker, Y. le Cun. Improving the Convergence of Back-Propagation Learning with Second Order Methods. In: [29], pp. 29–37.

  4. K. J. Cios, N. Liu. A Comparative Study of Machine Learning Algorithms for Generation of Neural Networks. In: [11], pp. I-189–I-194.

  5. T. Denoeux, R. Lengelle, S. Canu. Initialization of Weights in a Feedforward Neural Network Using Prototypes. In: [11], pp. I-623–I-628.

  6. G. P. Drago, S. Ridella. An Optimum Weights Initialization for Improving Scaling Relationships in BP-Learning. In: [11], pp. II-1519–II-1524.

  7. S. E. Fahlman. Faster Learning Variations on Back-Propagation: An Empirical Study. In: [29], pp. 38–51.

  8. M. R. Garey, D. S. Johnson. Computers and Intractability. Freeman and Company, 1979.

  9. H. Haario, P. Jokinen. Increasing the Learning Speed of a Backpropagation Algorithm by Linearization. In: [11], pp. I-629–I-634.

  10. S. A. Harp, T. Samad, A. Guha. Designing Application-Specific Neural Networks Using the Genetic Algorithm. In: D. S. Touretzky (ed.). IEEE CNIPS90, vol. 2, pp. 447–454, Morgan Kaufmann, 1990.

  11. T. Kohonen, K. Mäkisara, O. Simula, J. Kangas (eds.). Artificial Neural Networks, North-Holland, 1991.

  12. M. A. Kraaijveld, R. P. W. Duin. On Backpropagation Learning of Edited Data Sets. In: INNC-90, pp. 741–744, Kluwer, 1990.

  13. Y. Lee, S. Oh, M. Kim. The Effect of Initial Weights on Premature Saturation in Back-Propagation Learning. In: IJCNN91, pp. I-765–I-770, 1991.

  14. J. Lin, J. S. Vitter. Complexity Issues in Learning by Neural Nets. In: R. Rivest, D. Haussler, M. K. Warmuth (eds.). COLT89, pp. 118–132, Morgan Kaufmann, 1989.

  15. E. J. McCluskey. Minimization of Boolean Functions. Bell System Tech. J. 35, pp. 1417–1444, 1956.

  16. S. Makram-Ebeid, J.-A. Sirat, J.-R. Viala. A Rationalized Error Back-Propagation Learning Algorithm. In: IJCNN89, pp. II-373–II-380, 1989.

  17. G. F. Miller, P. M. Todd, S. U. Hegde. Designing Neural Networks Using Genetic Algorithms. In: J. D. Schaffer (ed.). ICGA89, pp. 379–384, Morgan Kaufmann, 1989.

  18. K. Möller, S. Thrun. Task Modularization by Network Modulation. In: Neuro-Nîmes '90, pp. 419–432, 1990.

  19. F. Nadi. Topological Design of Modular Neural Networks. In: [11], pp. I-213–I-218.

  20. N. K. Perugini, W. E. Engeler. Neural Network Learning Time: Effects of Network and Training Set Size. In: IJCNN89, pp. II-395–II-402, 1989.

  21. D. Polani, T. Uthmann. Training Kohonen Feature Maps in Different Topologies: An Analysis Using Genetic Algorithms. In: S. Forrest (ed.). 5th ICGA93, pp. 326–333, Morgan Kaufmann, 1993.

  22. W. V. Quine. Two Theorems about Truth Functions. Bol. Soc. Mat. Mex. 10, pp. 64–70, 1953.

  23. W. V. Quine. A Way to Simplify Truth Functions. American Math. Monthly 62, pp. 627–631, 1955.

  24. D. E. Rumelhart, J. L. McClelland (eds.). Parallel Distributed Processing, vol. 1. MIT Press, 1986.

  25. M. Schmitt, F. Vallet. Network Configuration and Initialization Using Mathematical Morphology: Theoretical Study of Measurement Functions. In: [11], pp. II-1045–II-1048.

  26. F. M. Silva, L. B. Almeida. Speeding-Up Backpropagation by Data Orthonormalization. In: [11], pp. I-213–I-218.

  27. W. S. Stornetta, B. A. Huberman. An Improved Three-Layer, Back Propagation Algorithm. In: M. Caudill, C. Butler (eds.). IEEE 1st ICNN87, pp. II-637–II-644, San Diego, 1987.

  28. G. A. Tagliarini, E. W. Page. Learning in Systematically Designed Networks. In: IJCNN89, pp. I-497–I-502, Washington, 1989.

  29. D. Touretzky, G. Hinton, T. Sejnowski (eds.). Proceedings of the 1988 Connectionist Models Summer School, Carnegie Mellon University, Morgan Kaufmann, 1988.

Copyright information

© 1995 Springer-Verlag/Wien

About this paper

Cite this paper

Hartmann, U. (1995). From Prime Implicants to Modular Feedforward Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_46

  • DOI: https://doi.org/10.1007/978-3-7091-7535-4_46

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-82692-8

  • Online ISBN: 978-3-7091-7535-4
