
A Gradient Method For Approximating Saddle Points and Constrained Maxima

Chapter in: Traces and Emergence of Nonlinear Programming

Abstract

In the following, \(X\) and \(Y\) will be vectors with components \(X_i\), \(Y_j\). By \(X \geq 0\) will be meant \(X_i \geq 0\) for all \(i\). Let \(g(X)\), \(f_j(X)\) \((j = 1, \dots, m)\) be functions with suitable differentiability properties, where \(f_j(X) \geq 0\) for all \(X\), and define \( F(X, Y) = g(X) + \sum_{j=1}^{m} Y_j \big\{ 1 - [f_j(X)]^{1+\eta} \big\} \).
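A minimal sketch of the gradient process the title refers to, assuming the usual ascent-descent dynamics for a saddle-point problem (the abstract itself defines only \(F\)): \(X\) moves up the gradient of \(F\) while \(Y\) moves down it,

\[
\frac{dX_i}{dt} = \frac{\partial F}{\partial X_i}, \qquad
\frac{dY_j}{dt} = -\frac{\partial F}{\partial Y_j} = \big[f_j(X)\big]^{1+\eta} - 1,
\]

with both trajectories projected so that \(X \geq 0\) and \(Y \geq 0\) are maintained. At a saddle point \((X^*, Y^*)\), where \(F(X, Y^*) \leq F(X^*, Y^*) \leq F(X^*, Y)\) for all admissible \(X\), \(Y\), minimization over \(Y \geq 0\) forces \(1 - [f_j(X^*)]^{1+\eta} \geq 0\), i.e. \(f_j(X^*) \leq 1\), so \(X^*\) maximizes \(g\) subject to those constraints; the exponent \(1 + \eta\) reads as a modification of the plain Lagrangian term, presumably to improve the convergence behaviour of the gradient process.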



Copyright information

© 2014 Springer Basel

About this chapter

Cite this chapter

Arrow, K.J., Hurwicz, L. (2014). A Gradient Method For Approximating Saddle Points and Constrained Maxima. In: Giorgi, G., Kjeldsen, T. (eds) Traces and Emergence of Nonlinear Programming. Birkhäuser, Basel. https://doi.org/10.1007/978-3-0348-0439-4_2
