
Abstract

In many practical cases the theoretical conditions required by the “one-shot” solutions are not met. Moreover, operational constraints of the process often require that only small adjustments to existing parameter settings can be made, which precludes the application of VRFT in these situations. In such cases the data-driven control design must be performed through iterative procedures, in which each iteration requires collecting more data, each time with a different controller in the loop. In Chap. 4 a general review of basic optimization theory is given, setting the stage for the chapters to follow. The convergence properties of the basic optimization algorithms, in particular steepest descent and Newton-Raphson, are analyzed. Some robustness properties of these algorithms, that is, their convergence under imprecise information, are also demonstrated.
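
To make the two update rules concrete, the following is a minimal numerical sketch, not taken from the chapter: it applies the standard steepest descent iteration ρ_{i+1} = ρ_i − γ∇J(ρ_i) and the Newton-Raphson iteration ρ_{i+1} = ρ_i − [∇²J(ρ_i)]⁻¹∇J(ρ_i) to an illustrative quadratic cost. The cost matrix, the step size γ, and all function and variable names are assumptions made here for illustration only.

import numpy as np

def steepest_descent_step(rho, grad, gamma):
    # Steepest descent update: rho_next = rho - gamma * grad J(rho)
    return rho - gamma * grad(rho)

def newton_raphson_step(rho, grad, hess):
    # Newton-Raphson update: rho_next = rho - [hess J(rho)]^{-1} grad J(rho)
    return rho - np.linalg.solve(hess(rho), grad(rho))

# Illustrative quadratic cost J(rho) = 0.5 * rho' A rho, with its minimum at the origin.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda rho: A @ rho   # gradient of J
hess = lambda rho: A         # Hessian of J (constant for a quadratic cost)

rho_sd = np.array([1.0, -1.0])
rho_nr = np.array([1.0, -1.0])
for _ in range(100):
    rho_sd = steepest_descent_step(rho_sd, grad, gamma=0.1)
    rho_nr = newton_raphson_step(rho_nr, grad, hess)

print(rho_sd)  # close to the origin: linear convergence at rate max |1 - gamma * eig(A)|
print(rho_nr)  # at the origin (up to rounding) after the very first step on a quadratic cost

On this simple cost the sketch reproduces the expected qualitative behaviour: steepest descent converges linearly at a rate set by the step size and the Hessian's eigenvalues, while Newton-Raphson reaches the minimizer of a quadratic cost in a single step.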


Notes

  1. Of course, similar definitions can be made for maxima.

  2. Note that the Hessian is symmetric and thus all its eigenvalues are real; a one-line verification is sketched after these notes.

  3. This second constraint is not present when the algorithm is an autonomous system.

  4. For a more complete proof the reader is referred to standard optimization books. Such proofs get somewhat technical, so we prefer to give here only a sketch that provides insight into the convergence mechanisms.
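
As a brief complement to note 2, the following is a standard one-line verification, written here in generic notation rather than taken from the chapter, that the (real, symmetric) Hessian H(ρ) = ∇²J(ρ) of a twice continuously differentiable cost J has only real eigenvalues:

\[
  H_{ij}(\rho) = \frac{\partial^2 J(\rho)}{\partial\rho_i\,\partial\rho_j}
               = \frac{\partial^2 J(\rho)}{\partial\rho_j\,\partial\rho_i}
               = H_{ji}(\rho),
  \qquad H(\rho) = H(\rho)^{\mathsf{T}},
\]
so for any eigenpair \(Hv = \lambda v\) with \(v \neq 0\),
\[
  \lambda\,\|v\|^2 = v^{*}Hv = (Hv)^{*}v = \bar{\lambda}\,\|v\|^2
  \quad\Longrightarrow\quad \lambda = \bar{\lambda} \in \mathbb{R}.
\]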


Author information


Correspondence to Alexandre Sanfelice Bazanella.


Copyright information

© 2012 Springer Science+Business Media B.V.

About this chapter

Cite this chapter

Sanfelice Bazanella, A., Campestrini, L., Eckhard, D. (2012). Iterative Optimization. In: Data-Driven Controller Design. Communications and Control Engineering. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-2300-9_4


  • DOI: https://doi.org/10.1007/978-94-007-2300-9_4

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-007-2299-6

  • Online ISBN: 978-94-007-2300-9

