Development of Novel Techniques of CoCoSSC Method

  • Bin Shi
  • S. S. Iyengar


This chapter introduces our main contributions to the development of the novel CoCoSSC method.


Keywords: Gradient descent · Step size · CoCoSSC method · Time-varying elastic-net · Stable manifold theorem · Diffeomorphism


References

  1. [BPC+11]
    S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
  2. [CJLP10]
    C.M. Carvalho, M.S. Johannes, H.F. Lopes, N.G. Polson, Particle learning and smoothing. Stat. Sci. 25, 88–106 (2010)
  3. [CJW17]
    Z. Charles, A. Jalali, R. Willett, Sparse subspace clustering with missing and corrupted data. arXiv preprint arXiv:1707.02461 (2017)
  4. [CT05]
    E.J. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
  5. [CT07]
    E. Candes, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35(6), 2313–2351 (2007)
  6. [DZ17]
    A. Datta, H. Zou, CoCoLasso for high-dimensional error-in-variables regression. Ann. Stat. 45(6), 2400–2426 (2017)
  7. [Har71]
    P. Hartman, The stable manifold of a point of a hyperbolic map of a Banach space. J. Differ. Equ. 9(2), 360–379 (1971)
  8. [LPP+17]
    J.D. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M.I. Jordan, B. Recht, First-order methods almost always avoid saddle points. arXiv preprint arXiv:1710.07406 (2017)
  9. [LSJR16]
    J.D. Lee, M. Simchowitz, M.I. Jordan, B. Recht, Gradient descent only converges to minimizers, in Conference on Learning Theory (2016), pp. 1246–1257
  10. [Nes13]
    Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87 (Springer, Berlin, 2013)
  11. [OW17]
    M. O'Neill, S.J. Wright, Behavior of accelerated gradient methods near critical points of nonconvex problems. arXiv preprint arXiv:1706.07993 (2017)
  12. [PP16]
    I. Panageas, G. Piliouras, Gradient descent only converges to minimizers: non-isolated critical points and invariant regions. arXiv preprint arXiv:1605.00405 (2016)
  13. [QX15]
    C. Qu, H. Xu, Subspace clustering with irrelevant features via robust Dantzig selector, in Advances in Neural Information Processing Systems (2015), pp. 757–765
  14. [SEC14]
    M. Soltanolkotabi, E. Elhamifar, E.J. Candes, Robust subspace clustering. Ann. Stat. 42(2), 669–699 (2014)
  15. [SHB16]
    Y. Shen, B. Han, E. Braverman, Stability of the elastic net estimator. J. Complexity 32(1), 20–39 (2016)
  16. [Shu13]
    M. Shub, Global Stability of Dynamical Systems (Springer, Berlin, 2013)
  17. [TV18]
    M.C. Tsakiris, R. Vidal, Theoretical analysis of sparse subspace clustering with missing entries. arXiv preprint arXiv:1801.00393 (2018)
  18. [WX16]
    Y.-X. Wang, H. Xu, Noisy sparse subspace clustering. J. Mach. Learn. Res. 17(12), 1–41 (2016)
  19. [ZH05]
    H. Zou, T. Hastie, Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67(2), 301–320 (2005)
  20. [ZWW+16]
    C. Zeng, Q. Wang, W. Wang, T. Li, L. Shwartz, Online inference for time-varying temporal dependency discovery from time series, in 2016 IEEE International Conference on Big Data (Big Data) (IEEE, Piscataway, 2016), pp. 1281–1290

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Bin Shi, University of California, Berkeley, USA
  • S. S. Iyengar, Florida International University, Miami, USA
