Abstract
We define necessary notation and review important definitions that will be used later in our analysis. Let \(C^{2}(\mathbb{R}^{n})\) be the vector space of real-valued twice-continuously differentiable functions on \(\mathbb{R}^{n}\). Let \(\nabla\) be the gradient operator and \(\nabla^{2}\) be the Hessian operator. Let \(\|\cdot\|_{2}\) be the Euclidean norm on \(\mathbb{R}^{n}\), and let \(\mu\) be the Lebesgue measure on \(\mathbb{R}^{n}\).
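As a concrete illustration of these operators (not part of the text), consider the quadratic \(f(x) = \tfrac{1}{2}x^{\top}Ax + b^{\top}x\), a \(C^{2}\) function on \(\mathbb{R}^{n}\) whose gradient is \(Ax + b\) and whose Hessian is the constant matrix \(A\). The specific matrices below are hypothetical values chosen only for the example:

```python
import numpy as np

# Example quadratic f(x) = 1/2 x^T A x + b^T x on R^2.
# A and b are illustrative choices, not from the chapter.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x + b @ x

def grad_f(x):
    # Gradient operator applied to f: ∇f(x) = Ax + b
    return A @ x + b

hess_f = A  # Hessian operator applied to f: ∇²f(x) = A (constant for a quadratic)

x = np.array([1.0, 1.0])
print(grad_f(x))          # [3. 3.]
print(np.linalg.norm(x))  # Euclidean norm ‖x‖₂ = √2 ≈ 1.4142
```

The Euclidean norm here is `np.linalg.norm`, which computes \(\|x\|_{2}\) by default for vectors.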
Notes
- 1. For our purposes, strict saddle points include local maximizers.
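To see why local maximizers fall under this convention, recall the standard strict-saddle criterion: at a critical point, the Hessian has at least one strictly negative eigenvalue. At a local maximizer all Hessian eigenvalues are negative, so the criterion is satisfied a fortiori. A minimal numerical sketch (the functions and the eigenvalue check are illustrative, not from the chapter):

```python
import numpy as np

def has_negative_curvature(H):
    # Strict-saddle criterion at a critical point: λ_min(∇²f) < 0.
    return np.linalg.eigvalsh(H).min() < 0

# f(x, y) = x² - y²: a saddle at the origin (∇f = 0 there).
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])

# f(x, y) = -x² - y²: a local maximizer at the origin (∇f = 0 there).
H_max = np.array([[-2.0, 0.0], [0.0, -2.0]])

print(has_negative_curvature(H_saddle))  # True: a strict saddle
print(has_negative_curvature(H_max))     # True: the maximizer also qualifies
```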
© 2020 Springer Nature Switzerland AG
Cite this chapter
Shi, B., Iyengar, S.S. (2020). Necessary Notations of the Proposed Method. In: Mathematical Theories of Machine Learning - Theory and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-17076-9_5
Print ISBN: 978-3-030-17075-2
Online ISBN: 978-3-030-17076-9
eBook Packages: Engineering (R0)