On the Node Complexity of Threshold Gate Circuits with Sub-linear Fan-ins

  • Valeriu Beiu
Conference paper
Part of the Advances in Soft Computing book series (AINSC, volume 19)


This paper discusses size-optimal solutions for implementing arbitrary Boolean functions using threshold gates. After surveying the state of the art, we start from the result of Horne and Hush [12], which shows that threshold gate circuits restricted to fan-in 2 can implement arbitrary Boolean functions, but require O(2^n/n) gates arranged in 2n layers. This result is generalized to arbitrary fan-ins (Δ), lowering the depth to n/log Δ + n/Δ, and proving that all the (relative) size minima are obtained for sub-linear fan-ins (Δ < n − log n). The fact that size-optimal solutions have sub-linear fan-ins is encouraging, as both the area and the delay of VLSI implementations are related to the fan-in of the gates.
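As a rough illustration of the depth expression n/log Δ + n/Δ quoted above, the following sketch tabulates it for a few fan-ins at a fixed input size. This is not the paper's construction, only a numerical reading of the abstract's formula; the base-2 logarithm and the choice n = 64 are assumptions for the example.

```python
import math

def depth_bound(n, fanin):
    # Depth expression from the abstract: n/log(fanin) + n/fanin,
    # with the logarithm taken base 2 (an assumption; the base is
    # not stated in the abstract).
    return n / math.log2(fanin) + n / fanin

# For fan-in 2 this recovers the 2n layers of Horne and Hush's
# construction (here n + n/2 + ... the fan-in-2 case gives n + n/2 = 1.5n
# plus lower-order terms; the bound shrinks quickly as the fan-in grows).
for fanin in (2, 4, 8, 16, 32):
    print(fanin, round(depth_bound(64, fanin), 1))
```

The table makes the trade-off in the abstract concrete: increasing the fan-in Δ steadily reduces the depth, which is why the location of the size minima as a function of Δ is the interesting question.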


Keywords: Neural Network · Synaptic Weight · VLSI Implementation · Exponential Size · Threshold Gate
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Arai, M. (1993) Bounds on the Number of Hidden Units in Binary-Valued Three-Layer Neural Networks. Neural Networks 6, 855–860
  2. Arbib, M.A. (1995) The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge
  3. Baum, E.B. (1988) On the Capabilities of Multilayer Perceptrons. J. Complexity 4, 193–215
  4. Beiu, V. (1996) Entropy Bounds for Classification Algorithms. Neural Network World 6, 497–505
  5. Beiu, V. (1996) Digital Integrated Circuit Implementations. Chapter E1.4 in [9]
  6. Beiu, V. (1998) On the Circuit and VLSI Complexity of Threshold Gate COMPARISON. Neurocomputing 19, 77–98
  7. Beiu, V., De Pauw, T. (1997) Tight Bounds on the Size of Neural Networks for Classification Problems. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds.) Biological and Artificial Computation. Springer-Verlag, Berlin, pp. 743–752
  8. Bruck, J., Goodman, J.W. (1990) On the Power of Neural Networks for Solving Hard Problems. J. Complexity 6, 129–135
  9. Fiesler, E., Beale, R. (1996) Handbook of Neural Computation. IoP, New York
  10. Hammerstrom, D. (1988) The Connectivity Analysis of Simple Association —or— How Many Connections Do You Need. In: Anderson, D.Z. (ed.) Neural Information Processing Systems. AIP Press, New York, pp. 338–347
  11. Hassoun, M.H. (1995) Fundamentals of Artificial Neural Networks. MIT Press, Cambridge
  12. Horne, B.G., Hush, D.R. (1994) On the Node Complexity of Neural Networks. Neural Networks 7, 1413–1426
  13. Hu, S. (1965) Threshold Logic. Univ. California Press, Berkeley
  14. Huang, S.-C., Huang, Y.-F. (1991) Bounds on the Number of Hidden Neurons of Multilayer Perceptrons in Classification and Recognition. IEEE Trans. Neural Networks 2, 47–55
  15. Lupanov, O.B. (1973) The Synthesis of Circuits from Threshold Elements. Problemy Kibernetiki 20, 109–140
  16. Minnick, R.C. (1961) Linear-Input Logic. IRE Trans. Electr. Comp. 10, 6–16
  17. Neciporuk, E.I. (1964) The Synthesis of Networks from Threshold Elements. Soviet Mathematics 5, 163–166. English transl. (1964) Automation Express 7, 27–32 & 35–39
  18. Parberry, I. (1994) Circuit Complexity and Neural Networks. MIT Press, Cambridge
  19. Shannon, C. (1949) The Synthesis of Two-Terminal Switching Circuits. Bell Sys. Tech. J. 28, 59–98
  20. Siu, K.-Y., Roychowdhury, V.P., Kailath, T. (1991) Depth-Size Tradeoffs for Neural Computations. IEEE Trans. Comp. 40, 1402–1412
  21. Williamson, R.C. (1990) ε-Entropy and the Complexity of Feedforward Neural Networks. In: Lippmann, R.P., Moody, J.E., Touretzky, D.S. (eds.) Advances in Neural Information Processing Systems. Morgan Kaufmann, San Mateo, pp. 946–952
  22. Wray, J., Green, G.G.R. (1995) Neural Networks, Approximation Theory, and Finite Precision Computation. Neural Networks 8, 31–37

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Valeriu Beiu
  1. School of Electrical Engineering & Computer Science, Washington State University, Pullman, USA
