Nonsmooth Convex Optimization

A chapter of Lectures on Convex Optimization

Part of the book series: Springer Optimization and Its Applications (SOIA, volume 137)

Abstract

In this chapter, we consider the most general convex optimization problems, which are formed by non-differentiable convex functions. We start by studying the main properties of these functions and the definition of subgradients, which are the main directions used in the corresponding optimization schemes. We also prove the necessary facts from Convex Analysis, including different variants of Minimax Theorems. After that, we establish the lower complexity bounds and prove the convergence rate of the Subgradient Method for constrained and unconstrained optimization problems. This method appears to be optimal uniformly in the dimension of the space of variables. In the next section, we consider other optimization methods, which can work in spaces of moderate dimension (the Method of Centers of Gravity, the Ellipsoid Algorithm). The chapter concludes with a presentation of methods based on a complete piecewise linear model of the objective function (Kelley's method, the Level Method).
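
As a rough illustration of the Subgradient Method mentioned above, here is a minimal sketch of a projected subgradient scheme with the classical step sizes \(h_k = R/(L\sqrt{k+1})\). The function names, the step-size rule, the ball constraint, and the synthetic test problem are illustrative assumptions, not the chapter's exact setup.

```python
import numpy as np

def subgradient_method(f, subgrad, project, x0, R, L, n_iters=1000):
    """Projected subgradient scheme for min_{x in Q} f(x).

    f(x): objective value; subgrad(x): any subgradient of f at x;
    project(x): Euclidean projection onto the closed convex set Q.
    Step sizes h_k = R / (L * sqrt(k + 1)) give the classical
    O(1 / sqrt(k)) rate for the best value found so far.
    """
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(n_iters):
        g = subgrad(x)
        x = project(x - (R / (L * np.sqrt(k + 1))) * g)
        fx = f(x)
        if fx < f_best:                # f need not decrease monotonically
            x_best, f_best = x.copy(), fx
    return x_best, f_best

# Illustrative test problem: minimize ||A x - b||_1 over the Euclidean
# ball of radius R (all data below is synthetic).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
R = 1.0
L = np.linalg.norm(A, axis=1).sum()    # bound on ||A^T s||_2 when |s_i| <= 1
f = lambda x: np.linalg.norm(A @ x - b, 1)
subgrad = lambda x: A.T @ np.sign(A @ x - b)
project = lambda x: x if np.linalg.norm(x) <= R else (R / np.linalg.norm(x)) * x
x_best, f_best = subgradient_method(f, subgrad, project, np.zeros(5), R, L)
```

The sketch returns the best iterate rather than the last one, since for nonsmooth objectives the function values need not decrease along the subgradient trajectory.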

Notes

  1. Recall that without additional assumptions, we cannot guarantee the closedness of the sum of two closed convex sets (see Item 2 in Theorem 2.2.8 and Example 2.2.1). For that, we need boundedness of one of them. However, epigraphs are never bounded.

  2. In the proof of Theorem 3.1.11, we worked with the \(\ell_1\)-norm. However, the result remains valid for any norm in \(\mathbb{R}^n\), since in finite dimensions all norms are topologically equivalent (see the quantitative bound after these notes).

  3. As compared with the standard version of this theorem, we replace the continuity assumptions by assumptions on the closedness of the epigraphs.

  4. In Chap. 6 we call it the adjoint problem because representation (3.1.83) is very often not unique.

  5. From Example 3.1.2(5), we can see that \(\Delta_k\) is a symmetric convex function of \(\{h_i\}\). Therefore, its minimum is achieved at the point having the same value for all variables.
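
For reference, the norm equivalence invoked in Note 2 can be made quantitative for the \(\ell_1\) and \(\ell_2\) norms; this is a standard bound stated here for illustration, not a result taken from the chapter:

\[
  \|x\|_2 \;\le\; \|x\|_1 \;\le\; \sqrt{n}\,\|x\|_2 \qquad \text{for all } x \in \mathbb{R}^n,
\]

so an estimate established for the \(\ell_1\)-norm carries over to the \(\ell_2\)-norm up to a factor depending only on the dimension.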

Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Nesterov, Y. (2018). Nonsmooth Convex Optimization. In: Lectures on Convex Optimization. Springer Optimization and Its Applications, vol 137. Springer, Cham. https://doi.org/10.1007/978-3-319-91578-4_3
