
On the complexity of VLSI-friendly neural networks for classification problems

  • Conference paper (Posters)
Advances in Artificial Intelligence (Canadian AI 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1418)

Abstract

This paper presents complexity results for the specific case of VLSI-friendly neural networks used in classification problems. A VLSI-friendly neural network is one that uses exclusively integer weights restricted to a narrow interval. The results presented here give updated worst-case lower bounds on the number of weights used by the network. It is shown that the number of weights can be lower-bounded by an expression calculated from parameters that depend exclusively on the problem: the minimum distance between patterns of opposite classes, the maximum distance between any two patterns, the number of patterns, and the number of dimensions. The theoretical approach is used to calculate the necessary weight range, a worst-case lower bound on the number of bits needed to solve the problem, and the necessary number of weights for several problems. A constructive algorithm using limited-precision integer weights is then used to build and train neural networks for the same problems, and the experimental values obtained are compared with the calculated theoretical values. The comparison shows that the necessary weight precision can be estimated accurately with the given approach; however, the estimated numbers of weights are in general larger than the values obtained experimentally.
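
For illustration only: the short Python sketch below extracts, from a labelled dataset, the four problem-dependent parameters that the paper's bounds are expressed in (the minimum distance between patterns of opposite classes, the maximum distance between any two patterns, the number of patterns, and the number of dimensions). Euclidean distance is assumed, the bound expression itself is not reproduced in this abstract, so only the parameter extraction is shown, and the function and variable names are hypothetical rather than taken from the paper.

```python
import numpy as np

def problem_parameters(X, y):
    """Compute the four problem-dependent parameters the bounds
    depend on: the minimum distance between patterns of opposite
    classes, the maximum distance between any two patterns, the
    number of patterns, and the number of dimensions.

    Assumes Euclidean distance and at least one pattern per class.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    m, n = X.shape                       # m patterns, n dimensions

    # Pairwise Euclidean distances between all patterns: (m, m) matrix.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))

    # Maximum distance between any two patterns.
    d_max = dist.max()

    # Minimum distance restricted to pairs with opposite class labels.
    opposite = y[:, None] != y[None, :]
    d_min = dist[opposite].min()

    return d_min, d_max, m, n

# Example usage: two Gaussian point clouds in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
               rng.normal(2.0, 1.0, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(problem_parameters(X, y))
```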

Author information

S. Draghici

Editor information

Robert E. Mercer, Eric Neufeld

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Draghici, S. (1998). On the complexity of VLSI-friendly neural networks for classification problems. In: Mercer, R.E., Neufeld, E. (eds) Advances in Artificial Intelligence. Canadian AI 1998. Lecture Notes in Computer Science, vol 1418. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64575-6_58

  • DOI: https://doi.org/10.1007/3-540-64575-6_58

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64575-7

  • Online ISBN: 978-3-540-69349-9

  • eBook Packages: Springer Book Archive
