Neural Networks

Chapter in: Robust and Fault-Tolerant Control

Part of the book series: Studies in Systems, Decision and Control (SSDC, volume 197)

Abstract

This chapter is devoted to the presentation of neural-network models in the context of control systems design. It is divided into four parts. The first two parts introduce the reader to the theory of static and dynamic neural network structures. These parts can be treated as a quick review of already developed and well-documented neural network architectures, giving insight into their properties and their possible applications in control theory. The third part focuses on the problem of model design. As the majority of control system designs are model based, developing an accurate model of a plant is of crucial importance, especially for nonlinear systems. Two modelling approaches are discussed: forward and inverse modelling. Moreover, the problem of training feed-forward and recurrent neural models is described in the context of parallel and series-parallel identification schemes. The fourth part discusses the important issue of uncertainty associated with the model. This notion is crucial when dealing with robust and fault-tolerant control. We describe methods that can be used to estimate the uncertainty associated with neural network models, namely set-membership identification, model error modelling, and statistical approaches.
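To make the distinction between the two identification schemes concrete, the following is a minimal sketch, not taken from the chapter, of how a NARX-type neural predictor is evaluated under the series-parallel scheme (the regressor is built from measured past outputs, i.e., one-step-ahead prediction) and under the parallel scheme (the regressor is built from the model's own past outputs, i.e., free-run simulation). The network f_hat, its weights, and the toy plant are hypothetical placeholders used only for illustration.

```python
# Sketch: series-parallel vs parallel evaluation of a NARX-type neural model
#   y(k) = f(y(k-1), u(k-1))
# All weights and the toy plant below are hypothetical, not from the chapter.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical, untrained weights of a single-hidden-layer network
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def f_hat(y_prev, u_prev):
    """One-step neural predictor y(k) = f(y(k-1), u(k-1))."""
    x = np.array([y_prev, u_prev])
    return (W2 @ np.tanh(W1 @ x + b1) + b2).item()

def series_parallel(y_meas, u):
    """One-step-ahead prediction: regressor built from measured outputs."""
    return np.array([f_hat(y_meas[k - 1], u[k - 1]) for k in range(1, len(u))])

def parallel(y0, u):
    """Free-run simulation: regressor built from the model's own outputs."""
    y_hat = [y0]
    for k in range(1, len(u)):
        y_hat.append(f_hat(y_hat[-1], u[k - 1]))
    return np.array(y_hat[1:])

# Toy data from an arbitrary first-order plant, for illustration only.
u = rng.uniform(-1, 1, size=50)
y = np.zeros(50)
for k in range(1, 50):
    y[k] = 0.8 * y[k - 1] + 0.2 * np.tanh(u[k - 1])

print(series_parallel(y, u)[:3])  # regressor: measured y(k-1)
print(parallel(y[0], u)[:3])      # regressor: the model's own estimate of y(k-1)
```

In the parallel scheme the model output is fed back into its own regressor, so the predictor effectively becomes recurrent; this is why training in the parallel configuration calls for recurrent learning methods, whereas the series-parallel configuration can be trained like an ordinary feed-forward network.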

Portions of the chapter reused by permission from Springer Nature, Artificial Neural Networks for the Modelling and Fault Diagnosis of Technical Processes by Krzysztof Patan © 2008.

Author information

Correspondence to Krzysztof Patan.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Patan, K. (2019). Neural Networks. In: Robust and Fault-Tolerant Control. Studies in Systems, Decision and Control, vol 197. Springer, Cham. https://doi.org/10.1007/978-3-030-11869-3_2
