
Optimal ergodic control of nonlinear stochastic systems

  • Conference paper
Probabilistic Methods in Applied Physics

Part of the book series: Lecture Notes in Physics ((LNP,volume 451))

Abstract

We study a class of ergodic stochastic control problems for diffusion processes. We describe the basic ideas concerning the Hamilton-Jacobi-Bellman equation. For a given class of control problems we establish existence and uniqueness of the invariant measure. We then present a numerical approximation of the optimal feedback control based on a discretization of the infinitesimal generator using finite difference schemes. Finally, we apply these techniques to the control of semi-active suspensions for road vehicles.
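To make the numerical approach in the abstract concrete, here is an illustrative sketch (not taken from the paper) for a toy one-dimensional problem: a controlled diffusion dx = u dt + σ dW on [-1, 1] with long-run average cost x² + r u². An upwind finite-difference discretization of the infinitesimal generator yields the transition probabilities of an approximating Markov chain, and relative value iteration then solves the average-cost Bellman equation for an approximate optimal feedback. All parameter values and the grid are hypothetical choices for the sketch.

```python
import numpy as np

# Toy ergodic control problem:  dx = u dt + sigma dW  on [-1, 1],
# minimizing the long-run average of  x^2 + r * u^2.
sigma, r = 0.5, 0.1
N = 81
h = 2.0 / (N - 1)                       # grid spacing
x = np.linspace(-1.0, 1.0, N)           # state grid
controls = np.linspace(-2.0, 2.0, 21)   # discretized control set

# Interpolation interval chosen so all transition probabilities stay >= 0
# (worst case: largest control magnitude).
dt = h**2 / (sigma**2 + h * controls.max())

def sweep(w):
    """One relative value iteration sweep; returns (new w, average-cost estimate)."""
    best = np.full(N, np.inf)
    for u in controls:
        # Upwind finite-difference discretization of the generator,
        # read as transition probabilities of the approximating chain.
        pu = (sigma**2 / 2 + h * max(u, 0.0)) * dt / h**2    # move right
        pd = (sigma**2 / 2 + h * max(-u, 0.0)) * dt / h**2   # move left
        ps = 1.0 - pu - pd                                    # stay put
        wl = np.roll(w, 1);  wl[0] = w[0]      # reflecting left boundary
        wr = np.roll(w, -1); wr[-1] = w[-1]    # reflecting right boundary
        cand = (x**2 + r * u**2) * dt + pu * wr + pd * wl + ps * w
        best = np.minimum(best, cand)
    gain = best[N // 2]                  # normalize at the grid center
    return best - gain, gain / dt        # gain/dt estimates the average cost

w = np.zeros(N)
for _ in range(2000):
    w, rho = sweep(w)
print("approximate optimal average cost:", rho)
```

The relative value iteration subtracts the value at a reference state each sweep, so the iterates stay bounded and the subtracted increment converges to the optimal average cost per unit time; the minimizing control at each grid point gives the approximate optimal feedback.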




Editor information

Paul Krée, Walter Wedig


Copyright information

© 1995 Springer-Verlag

Cite this paper

Campillo, F. (1995). Optimal ergodic control of nonlinear stochastic systems. In: Krée, P., Wedig, W. (eds) Probabilistic Methods in Applied Physics. Lecture Notes in Physics, vol 451. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60214-3_59


  • Print ISBN: 978-3-540-60214-9

  • Online ISBN: 978-3-540-44725-2
