
Nonlinear Filtering Design Using Dynamic Neural Networks with Fast Training

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2869)

Abstract

This paper presents nonlinear filtering using dynamic neural networks (DNNs), and also reviews the general structure of linear filtering for comparison. A DNN, which has a quasi-linear structure, is used effectively as a filter and trained with a fast algorithm, the Levenberg-Marquardt method. Test results show that the performance of the DNN as both a linear and a nonlinear filter is satisfactory.
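The paper itself gives no code, but the abstract's core idea can be sketched: a small dynamic (recurrent) filter whose output feeds back into its next step, with its parameters fitted by a minimal Levenberg-Marquardt loop. The filter form `y_t = tanh(a*x_t + b*y_{t-1})`, the two-parameter setup, and all function names below are illustrative assumptions, not the paper's actual model or training code.

```python
import math, random

def run_filter(theta, xs):
    # Dynamic filter: y_t = tanh(a*x_t + b*y_{t-1}); feedback of y makes it recurrent.
    a, b = theta
    y, ys = 0.0, []
    for x in xs:
        y = math.tanh(a * x + b * y)
        ys.append(y)
    return ys

def outputs_jacobian(theta, xs, eps=1e-6):
    # Forward-difference Jacobian J[i][j] = d y_i / d theta_j
    base = run_filter(theta, xs)
    J = [[0.0, 0.0] for _ in xs]
    for j in range(2):
        tp = list(theta)
        tp[j] += eps
        pert = run_filter(tp, xs)
        for i in range(len(xs)):
            J[i][j] = (pert[i] - base[i]) / eps
    return base, J

def lm_fit(xs, targets, theta, lam=1e-2, iters=100):
    # Levenberg-Marquardt: solve (J^T J + lam*I) delta = J^T r with r = target - output.
    theta = list(theta)
    cost = sum((t - y) ** 2 for y, t in zip(run_filter(theta, xs), targets))
    for _ in range(iters):
        ys, J = outputs_jacobian(theta, xs)
        r = [t - y for y, t in zip(ys, targets)]
        # Damped normal equations for the 2-parameter case, solved in closed form.
        A00 = sum(Ji[0] * Ji[0] for Ji in J) + lam
        A11 = sum(Ji[1] * Ji[1] for Ji in J) + lam
        A01 = sum(Ji[0] * Ji[1] for Ji in J)
        g0 = sum(Ji[0] * ri for Ji, ri in zip(J, r))
        g1 = sum(Ji[1] * ri for Ji, ri in zip(J, r))
        det = A00 * A11 - A01 * A01
        d0 = (A11 * g0 - A01 * g1) / det
        d1 = (A00 * g1 - A01 * g0) / det
        trial = [theta[0] + d0, theta[1] + d1]
        new_cost = sum((t - y) ** 2 for y, t in zip(run_filter(trial, xs), targets))
        if new_cost < cost:   # accept: trust the quadratic model more (Gauss-Newton-like)
            theta, cost, lam = trial, new_cost, lam * 0.5
        else:                 # reject: increase damping toward gradient descent
            lam *= 10.0
    return theta

# Recover assumed "true" parameters (0.8, 0.5) from noisy filter output.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
targets = [y + random.gauss(0, 0.01) for y in run_filter((0.8, 0.5), xs)]
theta = lm_fit(xs, targets, [0.1, 0.1])
print(theta)
```

The adaptive damping factor `lam` is what makes Levenberg-Marquardt "fast" relative to plain gradient descent: near the solution it behaves like Gauss-Newton, while far away it falls back to a safe, damped step.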





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Becerikli, Y. (2003). Nonlinear Filtering Design Using Dynamic Neural Networks with Fast Training. In: Yazıcı, A., Şener, C. (eds) Computer and Information Sciences - ISCIS 2003. ISCIS 2003. Lecture Notes in Computer Science, vol 2869. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39737-3_75


  • DOI: https://doi.org/10.1007/978-3-540-39737-3_75

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20409-1

  • Online ISBN: 978-3-540-39737-3
