Potentials of modern active suspension control strategies – from model predictive control to deep learning approaches

  • Guru Bhargava Khandavalli
  • Marcus Kalabis
  • Daniel Wegener
  • Lutz Eckstein
Conference paper
Part of the Proceedings book series (PROCEE)


The active suspension system remains a topic of interest because of its ability to influence ride quality by exerting independent forces on the suspension through dedicated actuators. Various strategies have been proposed over the years to determine the appropriate control action. These strategies are typically feedback-oriented and depend on many factors, such as the control objective, the excitation frequency and system non-linearities, which together make the control problem complex. A simulation-based study using a comprehensive quarter car model with an active suspension is well suited to summarizing the character of each of these approaches by comparing attributes such as formulation, performance, robustness, tunability and the requirements for implementation on real physical systems.
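The quarter car model at the core of such a study can be sketched as a linear state-space system. The following is a minimal illustration with assumed parameter values (not taken from the paper), simulated by forward Euler integration; the sky-hook law serves only as a stand-in active controller:

```python
import numpy as np

# Illustrative quarter-car parameters (assumed values, not from the paper)
m_s, m_u = 300.0, 40.0        # sprung / unsprung mass [kg]
k_s, c_s = 20000.0, 1200.0    # suspension stiffness [N/m] and damping [Ns/m]
k_t = 180000.0                # tyre stiffness [N/m]

# State x = [suspension deflection, sprung-mass velocity,
#            tyre deflection, unsprung-mass velocity]
A = np.array([
    [0.0,       1.0,      0.0,       -1.0],
    [-k_s/m_s, -c_s/m_s,  0.0,        c_s/m_s],
    [0.0,       0.0,      0.0,        1.0],
    [k_s/m_u,   c_s/m_u, -k_t/m_u,   -c_s/m_u],
])
B = np.array([0.0, 1.0/m_s, 0.0, -1.0/m_u])  # active actuator force input
E = np.array([0.0, 0.0, -1.0, 0.0])          # road vertical-velocity disturbance

def simulate(u_law, road_vel, dt=1e-3, steps=2000):
    """Forward-Euler simulation; returns the sprung-mass acceleration history."""
    x = np.zeros(4)
    accs = []
    for k in range(steps):
        dx = A @ x + B * u_law(x) + E * road_vel(k * dt)
        accs.append(dx[1])   # sprung-mass acceleration, a common comfort metric
        x = x + dt * dx
    return np.array(accs)

# Passive suspension vs. a simple sky-hook damping law over a small road step
bump = lambda t: 1.0 if t < 0.01 else 0.0    # 1 cm step as a velocity pulse
acc_passive = simulate(lambda x: 0.0, bump)
acc_skyhook = simulate(lambda x: -2000.0 * x[1], bump)
```

Comparing the RMS of the two acceleration histories yields exactly the kind of performance attribute the simulation-based comparison evaluates across control approaches.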

Since active suspensions use an on-board computer and sensor measurements to determine the control action, the implementation of a strategy is limited by the computational requirements and the available measurements. This study aims to anticipate such challenges when implementing modern control strategies such as H∞ control, preview-based approaches such as Model Predictive Control (MPC), and Deep Learning methods; the latter cover both Supervised Learning (SL) and Reinforcement Learning (RL) approaches. These methods are first developed in a virtual environment and subsequently implemented on a physical quarter car setup excited by a servo-hydraulic actuator. Finally, a comparison of the performance of the different control approaches is presented.
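To illustrate the optimisation structure behind MPC mentioned above, the sketch below condenses an unconstrained finite-horizon MPC into a batch least-squares problem. The plant here is a toy double integrator with assumed weights, not the paper's quarter car setup:

```python
import numpy as np

# Toy discrete-time plant and tuning (all values assumed for illustration)
dt = 0.05
Ad = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator
Bd = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([100.0, 1.0])                # state weights
R = np.array([[0.1]])                    # input weight
N = 20                                   # prediction horizon

def mpc_first_move_gain(Ad, Bd, Q, R, N):
    """Condensed unconstrained MPC: stack predictions X = F x0 + G U,
    minimise X'Qbar X + U'Rbar U, and return the receding-horizon
    first-move gain K so that u0 = -K x0."""
    n, m = Bd.shape
    F = np.zeros((N * n, n))
    G = np.zeros((N * n, N * m))
    for i in range(N):
        F[i*n:(i+1)*n, :] = np.linalg.matrix_power(Ad, i + 1)
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(Ad, i - j) @ Bd
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar            # Hessian of the underlying QP
    return np.linalg.solve(H, G.T @ Qbar @ F)[:m, :]

K = mpc_first_move_gain(Ad, Bd, Q, R, N)

# Closed-loop regulation from an initial deflection
x = np.array([1.0, 0.0])
for _ in range(200):
    u = (-K @ x)[0]
    x = Ad @ x + Bd.flatten() * u
```

With input constraints added, the same quadratic program would have to be solved online at every step (or precomputed as a piecewise-affine law in explicit MPC); the unconstrained version only shows the structure, and its online cost hints at the computational requirements discussed above.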




Copyright information

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020

Authors and Affiliations

  • Guru Bhargava Khandavalli (1)
  • Marcus Kalabis (2)
  • Daniel Wegener (1)
  • Lutz Eckstein (1)

  1. Institut für Kraftfahrzeuge (ika), RWTH Aachen University, Aachen, Germany
  2. Ford-Werke GmbH, Cologne, Germany
