Abstract
Classical methods of constrained optimization are often based on the assumptions that projection onto the constraint manifold is routine, but that accessing second-derivative information is not. Both assumptions need revision for the application of optimization to systems constrained by partial differential equations, in the contemporary limit of millions of state variables and in the parallel setting. Large-scale PDE solvers are complex pieces of software that exploit detailed knowledge of architecture and application; they cannot easily be modified to fit the interface requirements of a black-box optimizer. Furthermore, in view of the expense of PDE analyses, optimization methods that do not use second derivatives may require too many iterations to be practical. For general problems, automatic differentiation is likely to be the most convenient means of exploiting second derivatives. We delineate a role for automatic differentiation in matrix-free optimization formulations involving Newton’s method, in which little more storage is required than for the analysis code alone.
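The abstract's central idea, using AD to supply Hessian-vector products so that Newton's method never forms or stores a Hessian, can be sketched with modern tooling. The following is an illustrative sketch, not the chapter's implementation: it uses JAX (which postdates the chapter) and a toy convex objective standing in for a reduced-space PDE-constrained objective; the function names `f`, `hvp`, and `newton_step` are this sketch's own.

```python
# Illustrative sketch of a matrix-free Newton method: the Hessian is never
# formed; conjugate gradients sees it only through AD-generated
# Hessian-vector products. JAX here stands in for the AD tools discussed
# in the chapter, and the objective is a toy stand-in for a reduced-space
# PDE-constrained objective.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def f(x):
    # Smooth, strictly convex toy objective; minimizer satisfies
    # exp(x_i) + x_i = 0 componentwise (x_i = -W(1) ~ -0.5671).
    return jnp.sum(jnp.exp(x) + 0.5 * x ** 2)

grad_f = jax.grad(f)

def hvp(x, v):
    # Hessian-vector product by forward-over-reverse AD: a JVP through the
    # gradient. Storage is comparable to the gradient evaluation alone.
    return jax.jvp(grad_f, (x,), (v,))[1]

def newton_step(x, cg_iters=50, cg_tol=1e-12):
    # Approximately solve H dx = -g by conjugate gradients, touching H
    # only through hvp -- the matrix-free formulation named in the text.
    g = grad_f(x)
    dx = jnp.zeros_like(x)
    r = -g
    p = r
    rs = jnp.dot(r, r)
    for _ in range(cg_iters):
        if float(rs) < cg_tol:
            break
        Hp = hvp(x, p)
        alpha = rs / jnp.dot(p, Hp)
        dx = dx + alpha * p
        r = r - alpha * Hp
        rs_new = jnp.dot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x + dx

x = jnp.full(5, 0.5)
for _ in range(8):
    x = newton_step(x)
```

Because the objective here is strictly convex, full (undamped) Newton steps converge quadratically; a practical PDE-constrained solver would add globalization (line search or trust region) around the same Hessian-vector-product kernel.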
Copyright information
© 2002 Springer Science+Business Media New York
Cite this chapter
Keyes, D.E., Hovland, P.D., McInnes, L.C., Samyono, W. (2002). Using Automatic Differentiation for Second-Order Matrix-free Methods in PDE-constrained Optimization. In: Corliss, G., Faure, C., Griewank, A., Hascoët, L., Naumann, U. (eds) Automatic Differentiation of Algorithms. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0075-5_3
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-6543-6
Online ISBN: 978-1-4613-0075-5