
Accelerating Gradient Descent with Projective Response Surface Methodology

  • Conference paper
Learning and Intelligent Optimization (LION 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10556)


Abstract

We present a new modification of the gradient descent algorithm based on surrogate optimization with projection into a low-dimensional space. The method successively builds an approximation of the target function in the low-dimensional space and takes the optimum of that approximation, mapped back to the original parameter space, as the next parameter estimate. The additional projection step is used to counter the curse of dimensionality. A major advantage of the proposed modification is that it does not change the gradient descent iterations themselves, so it can be combined with almost any zero- or first-order iterative method. We give a theoretical motivation for the proposed algorithm and illustrate its properties experimentally on modelled data.
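
The abstract describes the scheme only verbally, so here is a minimal sketch of one such surrogate step, written for illustration under stated assumptions: the projection matrix P with orthonormal rows, the helper names fit_quadratic and surrogate_step, and the toy setup are all inventions of this sketch, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of one projective
# response-surface step: recent gradient-descent iterates are projected by P
# into a low-dimensional space, a quadratic surrogate is fitted there, and
# the surrogate's stationary point is mapped back to the original space.
import numpy as np


def fit_quadratic(Z, y):
    """Least-squares fit of y ~ z^T Q z + q^T z + c from samples (Z, y)."""
    K, p = Z.shape
    cols, idx = [], []
    for i in range(p):                      # quadratic monomials z_i * z_j
        for j in range(i, p):
            cols.append(Z[:, i] * Z[:, j])
            idx.append((i, j))
    Phi = np.column_stack(cols + [Z, np.ones((K, 1))])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    Q = np.zeros((p, p))
    for coef, (i, j) in zip(theta[: len(idx)], idx):
        Q[i, j] += coef / 2.0               # symmetrise the quadratic part
        Q[j, i] += coef / 2.0
    q = theta[len(idx): len(idx) + p]
    return Q, q, theta[-1]


def surrogate_step(X_recent, y_recent, P):
    """Return x_hat = P^T z_hat + (I - P^T P) x_bar, where z_hat is the
    stationary point of a quadratic surrogate fitted to projected iterates."""
    x_bar = X_recent.mean(axis=0)           # reference point for the unprojected part
    Z = X_recent @ P.T                      # iterates in the low-dimensional space
    Q, q, _ = fit_quadratic(Z, y_recent)
    # Stationary point of the surrogate (its minimiser when Q is positive
    # definite); a tiny ridge guards against a singular fit.
    z_hat = -0.5 * np.linalg.solve(Q + 1e-9 * np.eye(P.shape[0]), q)
    n = P.shape[1]
    return P.T @ z_hat + (np.eye(n) - P.T @ P) @ x_bar


if __name__ == "__main__":
    # Toy quadratic target in n = 20 dimensions, surrogate dimension p = 3.
    rng = np.random.default_rng(0)
    n, p, K = 20, 3, 30
    A = np.diag(rng.uniform(1.0, 10.0, n))
    b = rng.normal(size=n)

    def f(x):
        return x @ A @ x + b @ x

    # Plain gradient descent supplies the recent iterates; its updates are
    # left completely unchanged by the surrogate machinery.
    x = rng.normal(size=n)
    X_hist, y_hist = [], []
    for _ in range(K):
        x = x - 0.02 * (2 * A @ x + b)
        X_hist.append(x.copy())
        y_hist.append(f(x))

    # Random projection with orthonormal rows (QR of a Gaussian matrix).
    P = np.linalg.qr(rng.normal(size=(n, p)))[0].T
    x_hat = surrogate_step(np.array(X_hist), np.array(y_hist), P)
    print("f at last GD iterate:", y_hist[-1])
    print("f at surrogate point:", f(x_hat))
```

The gradient descent loop itself is untouched, matching the claim above that the modification leaves the underlying iterations unchanged; only the extra surrogate step uses the recorded iterates.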




Acknowledgments

This work was supported by the Russian Science Foundation (project 16-19-00057).

Author information


Corresponding author

Correspondence to Alexander Senov.


A Proofs

Proof (of Proposition 1). Since \((\mathbf {I}-\mathbf {P}^\top \mathbf {P})\) is not invertible, the above equation has an infinite number of solutions. Hence, we are free to choose any one of them, e.g. \(\mathbf {x}=\frac{1}{K}\sum _{t=1}^{K} \mathbf {x}_t\).
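
For completeness, here is a short argument for the non-invertibility claim, under the assumption (made explicit here rather than quoted from the excerpt) that \(\mathbf {P} \in \mathbb {R}^{p \times n}\), \(p < n\), has orthonormal rows:

$$\begin{aligned} \mathbf {P}\mathbf {P}^\top = \mathbf {I} \;\Rightarrow \; \left( \mathbf {P}^\top \mathbf {P}\right) ^2 = \mathbf {P}^\top \left( \mathbf {P}\mathbf {P}^\top \right) \mathbf {P} = \mathbf {P}^\top \mathbf {P}, \qquad \mathrm {rank}\, \mathbf {P}^\top \mathbf {P} = p, \end{aligned}$$

so \(\mathbf {P}^\top \mathbf {P}\) is an orthogonal projector of rank \(p\) and \(\mathbf {I}-\mathbf {P}^\top \mathbf {P}\) is the complementary projector of rank \(n-p < n\), hence singular.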

Proof (of Proposition 2). Writing \(\mathbf {x} = \mathbf {P}^\top \mathbf {z} + \mathbf {v}\),

$$\begin{aligned} f(\mathbf {x})&= \left( \mathbf {P}^\top \mathbf {z} + \mathbf {v}\right) ^\top \mathbf {A} \left( \mathbf {P}^\top \mathbf {z} + \mathbf {v}\right) + \mathbf {b}^\top \left( \mathbf {P}^\top \mathbf {z} + \mathbf {v}\right) + c \\&= \mathbf {z}^\top \mathbf {P} \mathbf {A} \mathbf {P}^\top \mathbf {z} + \left( \mathbf {b}^\top + \mathbf {v}^\top \mathbf {A} \right) \mathbf {P}^\top \mathbf {z} + \left( \mathbf {v}^\top \mathbf {A} \mathbf {v} - \mathbf {b}^\top \mathbf {v} + c \right) . \end{aligned}$$

Substituting \(\mathbf {v}\) back and taking the derivative with respect to \(\mathbf {z}\), we obtain:

$$\begin{aligned} 0&=\; 2 \mathbf {P} \mathbf {A} \mathbf {P}^\top \widehat{\mathbf {z}} + \left( \mathbf {b}^\top + \mathbf {x}^\top \left( \mathbf {I} - \mathbf {P}^\top \mathbf {P}\right) ^\top \mathbf {A}\right) \mathbf {P}^\top \\ \leadsto \widehat{\mathbf {z}}&= -\frac{1}{2} \left( \mathbf {P} \mathbf {A} \mathbf {P}^\top \right) ^{-1} \left( \left( \mathbf {b}^\top + \mathbf {x}^\top \left( \mathbf {I} - \mathbf {P}^\top \mathbf {P}\right) ^\top \mathbf {A}\right) \mathbf {P}^\top \right) ^\top \\&= -\frac{1}{2} \mathbf {P} \left( \mathbf {A}^{-1} \mathbf {b} + \left( \mathbf {I} - \mathbf {P}^\top \mathbf {P} \right) \mathbf {x}\right) = -\frac{1}{2} \mathbf {P} \mathbf {A}^{-1} \mathbf {b}. \end{aligned}$$
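
The final equality relies on \(\mathbf {P}\left( \mathbf {I}-\mathbf {P}^\top \mathbf {P}\right) = \mathbf {P} - \left( \mathbf {P}\mathbf {P}^\top \right) \mathbf {P} = \mathbf {0}\), which again assumes orthonormal rows of \(\mathbf {P}\) (\(\mathbf {P}\mathbf {P}^\top = \mathbf {I}\), an assumption of this note rather than a statement quoted from the excerpt); under that assumption the component of \(\mathbf {x}\) orthogonal to the rows of \(\mathbf {P}\) has no influence on \(\widehat{\mathbf {z}}\).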

Proof (of Proposition 3). From Propositions 1 and 2: \(\widehat{\mathbf {x}} = \left( \mathbf {I} - \mathbf {P}^\top \mathbf {P}\right) \overline{\mathbf {x}} - \frac{1}{2} \mathbf {P}^\top \mathbf {P} \mathbf {A}^{-1}\mathbf {b} \). Hence

$$\begin{aligned}&\Vert \arg \!\min f - \widehat{\mathbf {x}} \Vert _2^2 = \bigg \Vert -\frac{1}{2}\mathbf {A}^{-1}\mathbf {b} - \widehat{\mathbf {x}} \bigg \Vert _2^2 \\&\qquad = \bigg \Vert - \left( \mathbf {I} - \mathbf {P}^\top \mathbf {P}\right) \overline{\mathbf {x}} + \frac{1}{2} \mathbf {P}^\top \mathbf {P} \mathbf {A}^{-1}\mathbf {b} - \frac{1}{2}\mathbf {A}^{-1} \mathbf {b} \bigg \Vert _2^2 \\&\qquad = \bigg \Vert (\mathbf {I} - \mathbf {P}^\top \mathbf {P})\left( \frac{1}{2} \mathbf {A}^{-1} \mathbf {b} + \overline{\mathbf {x}} \right) \bigg \Vert _2^2. \end{aligned}$$
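
Read geometrically (an interpretation of the display above, not an additional claim taken from the paper): the residual \(\arg \!\min f - \widehat{\mathbf {x}}\) is the image, under the projector \(\mathbf {I}-\mathbf {P}^\top \mathbf {P}\), of a fixed vector built from \(\mathbf {A}^{-1}\mathbf {b}\) and \(\overline{\mathbf {x}}\). It therefore vanishes whenever that vector lies in the row space of \(\mathbf {P}\), which is the sense in which a well-chosen projection makes the surrogate step exact.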


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Senov, A. (2017). Accelerating Gradient Descent with Projective Response Surface Methodology. In: Battiti, R., Kvasov, D., Sergeyev, Y. (eds) Learning and Intelligent Optimization. LION 2017. Lecture Notes in Computer Science, vol 10556. Springer, Cham. https://doi.org/10.1007/978-3-319-69404-7_34


  • DOI: https://doi.org/10.1007/978-3-319-69404-7_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-69403-0

  • Online ISBN: 978-3-319-69404-7

  • eBook Packages: Computer Science, Computer Science (R0)
