Abstract
Several problems in control and matrix theory are now known to be NP-hard. These include matrix problems that arise in control theory, as well as problems in the robustness analysis and synthesis of control systems. These negative results force us to modify our notion of "solving" a given problem: if we cannot solve a problem exactly because it is NP-hard, we must settle for solving it approximately; if we cannot solve all instances of a problem, we must settle for solving "almost all" instances. An approach that has recently gained popularity is the use of randomized algorithms. The notion of a randomized algorithm as defined here is somewhat different from that in the computer science literature, and enlarges the class of problems that can be solved efficiently.

We begin with the premise that many problems in robustness analysis and synthesis can be formulated as the minimization of an objective function with respect to the controller parameters. It is argued that, in order to assess the performance of a controller as the plant varies over a prespecified family, it is better to minimize the average performance of the controller rather than its worst-case performance, since the worst-case objective function usually leads to rather conservative designs. It is then shown that a property from statistical learning theory known as uniform convergence of empirical means (UCEM) plays a central role in constructing efficient randomized algorithms for a wide variety of controller synthesis problems: whenever the UCEM property holds, an efficient (i.e., polynomial-time) randomized algorithm exists. Using very recent results in VC-dimension theory, it is shown that the UCEM property holds in several problems, such as robust stabilization and weighted H∞-norm minimization, so that these problems can be solved efficiently by randomized algorithms.
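The average-performance formulation lends itself directly to Monte Carlo estimation. The sketch below is illustrative only (the function names and the specific performance measure are assumptions, not taken from the chapter): it uses Hoeffding's inequality to pick a sample size m so that the empirical mean of a [0, 1]-valued performance measure over randomly sampled plants is within ε of the true average performance with confidence at least 1 − δ.

```python
import math
import random

def hoeffding_sample_size(eps, delta):
    """Samples m such that, for a [0, 1]-valued function, the empirical
    mean of m i.i.d. draws is within eps of the true mean with
    probability at least 1 - delta (Hoeffding's inequality):
        m >= ln(2 / delta) / (2 * eps**2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_average_performance(psi, sample_plant, eps, delta, rng):
    """Monte Carlo estimate of the average performance E[psi(plant)]
    as the plant varies over the prespecified family."""
    m = hoeffding_sample_size(eps, delta)
    total = sum(psi(sample_plant(rng)) for _ in range(m))
    return total / m

# Hypothetical example: plants parametrised by q uniform on [0, 2], and a
# normalised performance measure psi(q) = q / 2 in [0, 1] (true mean 0.5).
rng = random.Random(0)
psi = lambda q: q / 2.0
sample_q = lambda r: r.uniform(0.0, 2.0)
est = estimate_average_performance(psi, sample_q, eps=0.02, delta=0.01, rng=rng)
```

Note that the bound on m depends only on ε and δ, not on the dimension of the plant parameter vector, which is the source of the polynomial-time claims for randomized analysis.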
The paper is concluded by showing that the statistical learning methodology is also applicable to some NP-hard matrix problems.
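For synthesis, one plausible reading of the randomized approach is a two-level sampling scheme: sample candidate controllers at random, score each by its empirical mean performance over a common batch of sampled plants, and return the best candidate. When the UCEM property holds, the empirical scores converge uniformly over the controller family, so the returned candidate is, with high probability, a near-minimizer of the true average performance. The sketch below is a minimal illustration under that assumption; all names and the scalar objective are hypothetical, not the chapter's algorithm.

```python
import random

def randomized_min(objective, sample_controller, sample_plant,
                   n_controllers, n_plants, rng):
    """Draw n_controllers candidates at random; score each by the
    empirical mean of the objective over n_plants sampled plants;
    return the best-scoring candidate and its score."""
    plants = [sample_plant(rng) for _ in range(n_plants)]
    best_theta, best_score = None, float("inf")
    for _ in range(n_controllers):
        theta = sample_controller(rng)
        score = sum(objective(theta, p) for p in plants) / n_plants
        if score < best_score:
            best_theta, best_score = theta, score
    return best_theta, best_score

# Hypothetical scalar example: controller gain theta in [0, 2], plant
# parameter p in [0.8, 1.2], objective (theta * p - 1)^2 clipped to [0, 1];
# the average-case minimizer lies near theta = 1.
rng = random.Random(1)
obj = lambda th, p: min((th * p - 1.0) ** 2, 1.0)
theta, score = randomized_min(obj,
                              lambda r: r.uniform(0.0, 2.0),
                              lambda r: r.uniform(0.8, 1.2),
                              n_controllers=200, n_plants=200, rng=rng)
```

The uniform (over all candidates) accuracy of the empirical scores is exactly what UCEM provides; without it, the per-candidate Hoeffding bound would have to be strengthened by a union-bound factor that can grow with the richness of the controller family.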
© 1998 Springer Science+Business Media Dordrecht
Vidyasagar, M. (1998). Statistical Learning in Control and Matrix Theory. In: Suykens, J.A.K., Vandewalle, J. (eds) Nonlinear Modeling. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-5703-6_7
Print ISBN: 978-1-4613-7611-8
Online ISBN: 978-1-4615-5703-6