Sensitivity Analysis and Learning of Non-Linear Dynamic Systems by Two Dual Signal-Flow-Graph Approaches

  • Conference paper
Neural Nets WIRN Vietri-99

Part of the book series: Perspectives in Neural Computing

Abstract

In this paper, two methods, named Backward Computation (BC) and Forward Computation (FC), for both on-line and batch computation of the gradient of a system output (for sensitivity analysis) or of a cost function (for learning) with respect to the system parameters are derived from Signal-Flow-Graph (SFG) representation theory and its known properties. The system can be any causal, in general non-linear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay or recurrent neural network. In this work we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs: the original one and either its adjoint (for the BC method) or its derivative (for the FC method), each obtained from the original by simple transformations, without the complex chain-rule expansions of derivatives usually employed.

The BC and FC methods are dual, and the adjoint and derivative SFGs of a given SFG can be obtained one from the other by graph transformations. The BC method is local in space but not in time, while the FC method is local in time but not in space.
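To make the duality concrete, the following minimal sketch (in Python; not taken from the paper, and the toy system, variable names and function names are illustrative assumptions) applies both methods to the simplest recurrent SFG, a one-pole IIR filter y[t] = a*y[t-1] + b*x[t] with quadratic cost J = 0.5 Σ_t (y[t] - d[t])². BC runs the adjoint graph backward in time after the forward pass; FC carries parameter sensitivities forward alongside the original graph.

```python
# Minimal sketch of the BC/FC duality on the one-pole IIR filter
#   y[t] = a*y[t-1] + b*x[t],   J = 0.5 * sum_t (y[t] - d[t])**2
# The system and all names here are illustrative, not the paper's code.

def forward(a, b, x):
    """Run the original SFG forward in time (y[-1] taken as 0)."""
    y, y_prev = [], 0.0
    for xt in x:
        y_prev = a * y_prev + b * xt
        y.append(y_prev)
    return y

def bc_gradient(a, b, x, d):
    """Backward Computation: after the forward pass, propagate the error
    through the adjoint SFG (arcs reversed, delays turned into advances),
    i.e. backward in time."""
    y = forward(a, b, x)
    e = [yt - dt for yt, dt in zip(y, d)]
    ga = gb = 0.0
    delta = 0.0                          # adjoint signal dJ/dy[t]
    for t in reversed(range(len(x))):
        delta = e[t] + a * delta         # anticausal adjoint recursion
        ga += delta * (y[t - 1] if t > 0 else 0.0)
        gb += delta * x[t]
    return ga, gb

def fc_gradient(a, b, x, d):
    """Forward Computation: run the derivative SFG alongside the original
    one, carrying the sensitivities dy[t]/da and dy[t]/db forward in time."""
    ga = gb = 0.0
    y_prev = sa = sb = 0.0
    for xt, dt in zip(x, d):
        sa = y_prev + a * sa             # dy[t]/da
        sb = xt + a * sb                 # dy[t]/db
        y_prev = a * y_prev + b * xt
        e = y_prev - dt
        ga += e * sa
        gb += e * sb
    return ga, gb

# The dual methods return the same gradient:
x = [1.0, 0.5, -0.3, 0.8]
d = [0.2, 0.1, 0.0, 0.4]
print(bc_gradient(0.5, 1.2, x, d))       # (ga, gb) via the adjoint SFG
print(fc_gradient(0.5, 1.2, x, d))       # same (ga, gb) via the derivative SFG
```

Note how the locality trade-off stated above shows up in the sketch: BC must store the whole trajectory y before sweeping backward (non-local in time), while FC stores one running sensitivity per parameter (non-local in space) but never revisits the past.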




Copyright information

© 1999 Springer-Verlag London Limited

About this paper

Cite this paper

Campolucci, P. (1999). Sensitivity Analysis and Learning of Non-Linear Dynamic Systems by Two Dual Signal-Flow-Graph Approaches. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN Vietri-99. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0877-1_5

  • DOI: https://doi.org/10.1007/978-1-4471-0877-1_5

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1226-6

  • Online ISBN: 978-1-4471-0877-1
