Abstract
This paper presents nonlinear filtering using dynamic neural networks (DNNs) and also reviews the general use of linear filtering structures. A DNN with a quasi-linear structure is used effectively as a filter, trained with a fast algorithm such as the Levenberg-Marquardt method. Test results show that the performance of the DNN as both a linear and a nonlinear filter is satisfactory.
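The Levenberg-Marquardt training mentioned above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: it fits a single quasi-linear neuron (a linear gain through a tanh nonlinearity) to data with a numerical Jacobian, where the model, the parameter names, and the damping schedule are all assumptions made for the sketch.

```python
import numpy as np

def model(theta, x):
    # Quasi-linear neuron: linear gain `a` applied to a tanh nonlinearity with slope `b`.
    a, b = theta
    return a * np.tanh(b * x)

def jacobian(theta, x, eps=1e-6):
    # Forward-difference Jacobian of the model output w.r.t. the parameters.
    J = np.zeros((x.size, theta.size))
    f0 = model(theta, x)
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (model(t, x) - f0) / eps
    return J

def levenberg_marquardt(x, y, theta0, lam=1e-2, iters=50):
    # Classic LM loop: solve (J^T J + lam*I) step = J^T r, then adapt the
    # damping factor lam depending on whether the step reduced the error.
    theta = theta0.astype(float)
    for _ in range(iters):
        r = y - model(theta, x)
        J = jacobian(theta, x)
        A = J.T @ J + lam * np.eye(theta.size)
        step = np.linalg.solve(A, J.T @ r)
        candidate = theta + step
        if np.sum((y - model(candidate, x)) ** 2) < np.sum(r ** 2):
            theta, lam = candidate, lam * 0.5   # step accepted: trust the Gauss-Newton direction more
        else:
            lam *= 2.0                          # step rejected: fall back toward gradient descent
    return theta

# Fit to data generated by a known quasi-linear neuron (a=2.0, b=0.5).
x = np.linspace(-3.0, 3.0, 100)
y = 2.0 * np.tanh(0.5 * x)
theta = levenberg_marquardt(x, y, np.array([1.0, 1.0]))
```

The damping factor `lam` is what makes the method fast in practice: near the solution it shrinks and the update behaves like Gauss-Newton, while far from it the update behaves like damped gradient descent.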
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Becerikli, Y. (2003). Nonlinear Filtering Design Using Dynamic Neural Networks with Fast Training. In: Yazıcı, A., Şener, C. (eds) Computer and Information Sciences - ISCIS 2003. ISCIS 2003. Lecture Notes in Computer Science, vol 2869. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39737-3_75
DOI: https://doi.org/10.1007/978-3-540-39737-3_75
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20409-1
Online ISBN: 978-3-540-39737-3
eBook Packages: Springer Book Archive