Constant Fan-in Digital Neural Networks are VLSI-Optimal

Chapter in: Mathematics of Neural Networks

Part of the book series: Operations Research/Computer Science Interfaces Series (ORCS, volume 8)

Abstract

The paper presents a theoretical proof revealing an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures such as neural networks. We prove that efficient digital VLSI implementations of neural networks (known as VLSI-optimal when minimising the AT² complexity measure, where A is the area of the chip and T the delay for propagating the inputs to the outputs) are achieved by small-constant fan-in gates. This result builds on recent work giving a very close estimate of the area of neural networks when implemented by threshold gates, but it also holds for classical Boolean gates. Limitations and open questions are presented in the conclusions.
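To make the area-time trade-off concrete, below is a minimal numeric sketch of the AT² measure under a deliberately crude cost model of my own (the at2 helper, the cost laws, and the constants are illustrative assumptions, not the paper's construction): an n-input function is computed by a balanced tree of fan-in-delta gates, and both a gate's area and its switching delay are taken to grow linearly with its fan-in, with wiring overhead ignored.

# A toy numeric sketch (assumptions mine, not the paper's proof): an n-input
# function is computed by a balanced tree of fan-in-`delta` gates; a gate's
# area and its switching delay both grow linearly with its fan-in; wiring
# overhead is ignored.

def at2(n: int, delta: int) -> int:
    """Estimate A * T^2 for a balanced fan-in-`delta` gate tree over n inputs."""
    depth, reach = 0, 1
    while reach < n:                # levels needed for the tree to cover n inputs
        reach *= delta
        depth += 1
    gates = (n - 1) // (delta - 1)  # internal nodes of a delta-ary tree
    area = gates * delta            # A ~ total fan-in (devices plus incoming wires)
    delay = depth * delta           # T ~ depth times a per-gate delay linear in fan-in
    return area * delay ** 2

n = 4096
for delta in (2, 3, 4, 8, 16, 64, n):
    print(f"fan-in {delta:>4}: AT^2 ~ {at2(n, delta):,}")

Under these assumed linear cost laws, AT² bottoms out at a small constant fan-in (3 or 4 here) and degrades sharply as the fan-in grows towards n, mirroring the qualitative conclusion of the abstract; the paper itself derives the result from much tighter area estimates for threshold-gate networks.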

Copyright information

© 1997 Springer Science+Business Media New York

About this chapter

Cite this chapter

Beiu, V. (1997). Constant Fan-in Digital Neural Networks are VLSI-Optimal. In: Ellacott, S.W., Mason, J.C., Anderson, I.J. (eds) Mathematics of Neural Networks. Operations Research/Computer Science Interfaces Series, vol 8. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-6099-9_12

  • DOI: https://doi.org/10.1007/978-1-4615-6099-9_12

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7794-8

  • Online ISBN: 978-1-4615-6099-9

  • eBook Packages: Springer Book Archive
