Cascade error projection: A learning algorithm for hardware implementation

  • Plasticity Phenomena (Maturing, Learning & Memory)
  • Conference paper
Foundations and Tools for Neural Modeling (IWANN 1999)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1606)

Abstract

In this paper, we work out a detailed mathematical analysis of a new learning algorithm, termed Cascade Error Projection (CEP), together with a general learning framework. The framework yields the cascade-correlation learning algorithm when a particular set of parameters is chosen. Furthermore, the CEP learning algorithm operates on only one layer of weights, while the other set of weights is calculated deterministically. Using the dynamical step-size change concept to convert the weight update from an infinite space into a finite space, we also give the relation between the current step size and the previous energy level, and an estimation procedure for the optimal step size is used to validate the proposed technique.
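
The abstract does not reproduce the paper's equations, but the division of labor it describes can be illustrated with a minimal sketch: iterative learning is confined to one layer (the incoming weights of a single hidden unit, started from zero), while the unit's output weight is then computed deterministically in closed form. The residual-fitting objective, the tanh unit, and the scalar least-squares output weight below are illustrative assumptions, not the paper's exact CEP formulation.

```python
import math

# Hypothetical sketch (not the paper's exact CEP update rule): gradient
# descent adjusts only the incoming weights of one new hidden unit,
# started from zero as the abstract describes; the unit's output weight
# is then set deterministically in closed form, not learned iteratively.

X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
residual = [-0.5, 0.5, 0.5, 0.5]   # illustrative error left by an earlier stage

w = [0.0, 0.0]   # hidden-unit input weights, started at zero
b = 0.0
lr = 0.5

def hidden(x):
    return math.tanh(w[0] * x[0] + w[1] * x[1] + b)

for _ in range(2000):                        # iterative learning, ONE layer only
    for x, r in zip(X, residual):
        h = hidden(x)
        g = (h - r) * (1.0 - h * h)          # gradient of 0.5*(h - r)**2 w.r.t. pre-activation
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# The output weight is computed deterministically (scalar least squares):
hs = [hidden(x) for x in X]
w_out = sum(h * r for h, r in zip(hs, residual)) / sum(h * h for h in hs)
```

Confining iteration to one layer is what makes the scheme attractive for analog hardware: only one set of weights needs in-the-loop adjustment, while the other is computed once off-chip.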

Learning in every layer starts from weight values of zero, and a single hidden unit is used instead of a pool of candidate hidden units as in the cascade-correlation scheme; this also simplifies hardware implementation. Furthermore, the analysis allows us to select, among other methods (such as conjugate gradient descent or Newton's second-order method), one that is a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis establishes the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time-series prediction problems are investigated; the simulation results demonstrate that 4-bit or finer weight quantization is sufficient for training a neural network with CEP. In addition, the technique is shown to compensate for lower-bit weight resolution by incorporating additional hidden units, although generalization may suffer somewhat with lower-bit weight quantization.
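
The quantization result above (4-bit or finer weights suffice for CEP training) can be made concrete with a small sketch of b-bit weight quantization. The symmetric uniform grid and the weight range below are illustrative assumptions; the paper's exact quantizer is not reproduced here.

```python
def quantize(w, bits=4, w_max=1.0):
    """Snap a real-valued weight onto a symmetric uniform b-bit grid in
    [-w_max, w_max]. Illustrative assumption: the paper's exact
    quantization scheme is not reproduced here."""
    levels = 2 ** bits - 1             # e.g. 15 representable steps for 4 bits
    step = 2.0 * w_max / levels        # grid spacing
    q = round(w / step) * step         # nearest grid point
    return max(-w_max, min(w_max, q))  # clip to the representable range

# With 4 bits the grid spacing is 2/15 (about 0.133), so every weight is
# represented to within roughly 0.067 of its real value.
```

A weight update smaller than half the grid spacing is lost to rounding, which is why coarse quantization slows or stalls learning; the abstract's observation that extra hidden units can compensate for fewer bits trades network size for per-weight resolution.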


References

  1. T.A. Duong, T. Brown, M. Tran, H. Langenbacher, and T. Daud, “Analog VLSI neural network building block chips for hardware-in-the-loop learning”, Proc. IEEE/INNS Int’l Joint Conf. on Neural Networks, Beijing, China, Nov. 3–6, 1992.

  2. T.A. Duong et al., “Low Power Analog Neurosynapse Chips for a 3-D ‘Sugarcube’ Neuroprocessor”, Proc. IEEE Int’l Conf. on Neural Networks (ICNN/WCCI), vol. III, pp. 1907–1911, June 28–July 2, 1994, Orlando, Florida.

  3. B.E. Boser, E. Sackinger, J. Bromley, Y. LeCun, and L.D. Jackel, “An Analog Neural Network Processor with Programmable Topology”, IEEE Journal of Solid-State Circuits, vol. 26, no. 12, Dec. 1991.

  4. P.W. Hollis, J.S. Harper, and J.J. Paulos, “The Effects of Precision Constraints in a Backpropagation Learning Network”, Neural Computation, vol. 2, pp. 363–373, 1990.

  5. M. Hoehfeld and S. Fahlman, “Learning with limited numerical precision using the cascade-correlation algorithm”, IEEE Trans. Neural Networks, vol. 3, no. 4, pp. 602–611, July 1992.

  6. T.A. Duong, S.P. Eberhardt, T. Daud, and A. Thakoor, “Learning in neural networks: VLSI implementation strategies”, in: Fuzzy Logic and Neural Network Handbook, ch. 27, ed. C.H. Chen, McGraw-Hill, 1996.

  7. S.P. Eberhardt, T.A. Duong, and A.P. Thakoor, “Design of parallel hardware neural network systems from custom analog VLSI ‘building-block’ chips”, Proc. IEEE/INNS IJCNN, June 18–22, 1989, Washington, D.C., vol. II, p. 183.

  8. T.A. Duong, S.P. Eberhardt, M.D. Tran, T. Daud, and A.P. Thakoor, “Learning and Optimization with Cascaded VLSI Neural Network Building-Block Chips”, Proc. IEEE/INNS Int’l Joint Conf. on Neural Networks, June 7–11, 1992, Baltimore, MD, vol. I, pp. 184–189.

  9. T.A. Duong, Cascade Error Projection: An Efficient Hardware Learning Theory. Ph.D. Thesis, UCI, 1995.

  10. S.E. Fahlman and C. Lebiere, “The Cascade-Correlation Learning Architecture”, in Advances in Neural Information Processing Systems 2, ed. D. Touretzky, Morgan Kaufmann, San Mateo, CA, 1990, pp. 524–532.

  11. T.A. Duong, “Cascade Error Projection: An efficient hardware learning algorithm”, Proc. IEEE Int’l Conf. on Neural Networks (ICNN), Perth, Western Australia, vol. 1, pp. 175–178, Nov. 27–Dec. 1, 1995 (invited paper).

  12. T.A. Duong, A. Stubberud, T. Daud, and A. Thakoor, “Cascade Error Projection—A New Learning Algorithm”, Proc. IEEE Int’l Conf. on Neural Networks (ICNN), Washington, D.C., vol. 1, pp. 229–234, June 3–7, 1996.

Editor information

José Mira, Juan V. Sánchez-Andrés

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Duong, T.A., Daud, T. (1999). Cascade error projection: A learning algorithm for hardware implementation. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098202

  • DOI: https://doi.org/10.1007/BFb0098202

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66069-9

  • Online ISBN: 978-3-540-48771-5
