Randomized algorithms for robust controller synthesis using statistical learning theory

  • M. Vidyasagar
Part A: Learning and Computational Issues
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 241)


By now it is known that several problems in the robustness analysis and synthesis of control systems are NP-complete or NP-hard. These negative results force us to modify our notion of “solving” a given problem. If we cannot solve a problem exactly because it is NP-hard, then we must settle for solving it approximately; if we cannot solve all instances of a problem, we must settle for solving “almost all” instances. An approach that has recently been gaining popularity is the use of randomized algorithms. The notion of a randomized algorithm as defined here differs somewhat from that in the computer science literature, and enlarges the class of problems that can be solved efficiently. We begin with the premise that many problems in robustness analysis and synthesis can be formulated as the minimization of an objective function with respect to the controller parameters. It is argued that, in order to assess the performance of a controller as the plant varies over a prespecified family, it is better to minimize the average performance of the controller rather than its worst-case performance, as the worst-case objective function usually leads to rather conservative designs. It is then shown that a property from statistical learning theory known as uniform convergence of empirical means (UCEM) plays an important role in allowing us to construct efficient randomized algorithms for a wide variety of controller synthesis problems. In particular, whenever the UCEM property holds, there exists an efficient (i.e., polynomial-time) randomized algorithm. Using very recent results in VC-dimension theory, it is shown that the UCEM property holds in several problems such as robust stabilization and weighted H₂/H∞-norm minimization. Hence such problems can be solved efficiently using randomized algorithms.
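The randomized approach described above can be illustrated with a toy Monte Carlo experiment: draw plants at random from an uncertainty family, compute the empirical mean of a performance indicator, and use Hoeffding's inequality to decide how many samples are needed for a given accuracy and confidence. The first-order plant family, the proportional controller, and the numerical ranges below are invented purely for illustration; they are not the chapter's actual problem setup.

```python
import math
import random

def hoeffding_sample_size(eps, delta):
    """Minimum n so that the empirical mean of [0,1]-valued i.i.d. samples
    is within eps of the true mean with confidence 1 - delta (Hoeffding, 1963):
    n >= ln(2/delta) / (2 * eps**2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def stabilizes(g, k, a):
    """For the toy plant P(s) = k/(s - a) under proportional feedback u = -g*y,
    the closed-loop pole is at a - g*k; stability means it lies in the open
    left half-plane."""
    return a - g * k < 0.0

def estimated_stabilization_probability(g, n, rng):
    """Monte Carlo estimate of Pr{gain g stabilizes a random plant}, with the
    (hypothetical) uncertainty model k ~ U[0.5, 2.0], a ~ U[-1.0, 1.0]."""
    hits = 0
    for _ in range(n):
        k = rng.uniform(0.5, 2.0)
        a = rng.uniform(-1.0, 1.0)
        if stabilizes(g, k, a):
            hits += 1
    return hits / n

rng = random.Random(0)
n = hoeffding_sample_size(eps=0.01, delta=0.01)  # accuracy 0.01, confidence 99%
p_hat = estimated_stabilization_probability(g=0.5, n=n, rng=rng)
# For this toy family the true probability works out to 0.8125, so p_hat
# should land within 0.01 of that value with probability at least 0.99.
```

Note the key point the abstract makes: the sample size `n` depends only on the accuracy and confidence parameters, not on the dimension of the uncertainty, which is what makes such randomized algorithms polynomial-time.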







Copyright information

© Springer-Verlag London Limited 1999

Authors and Affiliations

  • M. Vidyasagar
    1. Centre for Artificial Intelligence and Robotics, Raj Bhavan Circle, High Grounds, Bangalore, India
