
Part of the book series: Mathematics and Its Applications (MAIA, volume 500)


Abstract

The mathematical theory of optimal control began in the early fifties, originally as a special topic within the field of differential equations. The Pontryagin maximum principle and the method of dynamic programming, the two major breakthroughs of the late fifties, placed optimal control theory within the broad framework of the calculus of variations. In the early stages of the theory the emphasis was on deterministic control problems for finite dimensional, mostly linear, systems. Soon afterwards finite dimensional nonlinear control systems were investigated as well. By the early seventies the fundamental problems of finite dimensional control theory had been mathematically posed and answered, and it is fair to say that the theory had reached a satisfying stage of completeness. At that time the emphasis started shifting in two new directions. One was the removal of smoothness hypotheses from the underlying theory. With the tools of “convex analysis”, developed during the sixties, and of its extensions to locally Lipschitz functions, which appeared in the early seventies, researchers were able to remove differentiability and single-valuedness conditions from the objective functional and the dynamics of the system. In this way they increased the range of applications, moving into areas like mathematical economics and game theory, where nonsmoothness and the set-valued character of the models are standard features. Thus optimal control theory developed in parallel with nonlinear analysis, and the two eventually merged to create what is known today as “nonsmooth analysis”, a body of knowledge concerning the theory and applications of the differential properties of functions (and sets) which are not differentiable (“tangentializable”) in the classical sense. The other direction in which optimal control theory turned was the study of infinite dimensional systems, known as “distributed parameter systems”. Most lumped parameter systems (i.e. systems driven by ode’s) are approximations of distributed parameter systems, so the study of infinite dimensional systems, such as control systems driven by pde’s and fde’s, is both of intrinsic interest and of use in various applications (such as the control of chemical processes, the control of elastic structures, hydrodynamical systems, etc.). These two directions have characterized the bulk of research in optimal control theory since the mid-seventies.
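For orientation, the finite dimensional problem referred to in the abstract can be written in the standard Bolza form below, together with the necessary conditions of the Pontryagin maximum principle; the notation is generic textbook notation (normal case, smooth data) and is not taken from this chapter.

\[
\begin{aligned}
\text{minimize}\quad & J(x,u)=\ell\bigl(x(b)\bigr)+\int_{0}^{b} L\bigl(t,x(t),u(t)\bigr)\,dt\\
\text{subject to}\quad & \dot{x}(t)=f\bigl(t,x(t),u(t)\bigr)\ \text{a.e. on }[0,b],\qquad x(0)=x_{0},\qquad u(t)\in U.
\end{aligned}
\]

If \((x^{*},u^{*})\) is an optimal pair and \(H(t,x,u,p)=\langle p,f(t,x,u)\rangle-L(t,x,u)\) is the Hamiltonian, the maximum principle provides an absolutely continuous adjoint arc \(p\) satisfying
\[
-\dot{p}(t)=\nabla_{x}H\bigl(t,x^{*}(t),u^{*}(t),p(t)\bigr),\qquad
H\bigl(t,x^{*}(t),u^{*}(t),p(t)\bigr)=\max_{u\in U}H\bigl(t,x^{*}(t),u,p(t)\bigr)\ \text{a.e.},
\]
together with the transversality condition \(p(b)=-\nabla\ell\bigl(x^{*}(b)\bigr)\). The nonsmooth and infinite dimensional extensions discussed in the chapter relax the differentiability and single-valuedness assumptions implicit in this formulation.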

One who pays some attention to history will not be surprised if those who cry most loudly that we must smash and destroy, are later found among the administrators of some new system of repression.

—Noam Chomsky, American Power and the New Mandarins

The sinister dialectic of victim-hangman relations: a structure of successive humiliations that starts in international markets and financial centers and ends in every citizen’s home.

—Eduardo Galeano, Open Veins of Latin America

The erratum of this chapter is available at http://dx.doi.org/10.1007/978-1-4615-4665-8_12



Copyright information

© 2000 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Hu, S., Papageorgiou, N.S. (2000). Optimal Control. In: Handbook of Multivalued Analysis. Mathematics and Its Applications, vol 500. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-4665-8_4


  • DOI: https://doi.org/10.1007/978-1-4615-4665-8_4

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-7111-3

  • Online ISBN: 978-1-4615-4665-8

