
Low Complexity Regularization of Linear Inverse Problems

Chapter in: Sampling Theory, a Renaissance

Part of the book series: Applied and Numerical Harmonic Analysis ((ANHA))

Abstract

Inverse problems and regularization theory form a central theme in imaging sciences, statistics, and machine learning. The goal is to reconstruct an unknown vector from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown vector is to solve a convex optimization problem that enforces prior knowledge about its structure. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all of the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models, which can be linear spaces or more general smooth manifolds. This review is intended as a one-stop reference for the theoretical properties of the so-regularized solutions. It covers a broad spectrum, including (i) recovery guarantees and stability to noise, both in terms of ℓ2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; and (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problems.
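To make the setting concrete, the sketch below applies the forward-backward proximal splitting scheme mentioned in point (iii) to the ℓ1-regularized least-squares (Lasso) problem min_x ½‖Ax − y‖² + λ‖x‖₁, the simplest instance of a low-complexity prior. This is a minimal illustrative sketch, not code from the chapter; the function names and parameter choices are ours.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=3000):
    """Forward-backward (ISTA) iterations for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    Each iteration takes a gradient (forward) step on the smooth
    data-fidelity term, then a proximal (backward) step on the
    nonsmooth l1 regularizer."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    tau = 1.0 / L                   # step size; convergence needs tau < 2/L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                        # forward step
        x = soft_threshold(x - tau * grad, tau * lam)   # backward step
    return x
```

On a random Gaussian measurement matrix with a sparse ground truth, the iterates typically identify the correct support after finitely many iterations and then converge linearly, which is exactly the model-identification behavior the chapter analyzes through partial smoothness.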


References

  1. A. Agarwal, A. Anandkumar, P. Netrapalli, Exact recovery of sparsely used overcomplete dictionaries (2013) [arxiv]

    Google Scholar 

  2. H. Akaike, Information theory and an extension of the maximum likelihood principle, in Second International Symposium on Information Theory (Springer, New York, 1973), pp. 267–281

    Google Scholar 

  3. J. Allen, Short-term spectral analysis, and modification by discrete Fourier transform. IEEE Trans. Acoust. Speech Signal Process. 25(3), 235–238 (1977)

    Article  MATH  Google Scholar 

  4. D. Amelunxen, M. Lotz, M.B. McCoy, J.A. Tropp, Living on the edge: a geometric theory of phase transitions in convex optimization. CoRR, abs/1303.6672 (2013)

    Google Scholar 

  5. M.S. Asif, J. Romberg, Sparse recovery of streaming signals using L1-homotopy. Technical report, Preprint (2013) [arxiv 1306.3331]

    Google Scholar 

  6. J.-F. Aujol, G. Aubert, L. Blanc-Féraud, A. Chambolle, Image decomposition into a bounded variation component and an oscillating component. J. Math. Imaging Vis. 22, 71–88 (2005)

    Article  Google Scholar 

  7. F. Bach, Consistency of the group Lasso and multiple kernel learning. J. Mach. Learn. Res. 9, 1179–1225 (2008)

    MathSciNet  MATH  Google Scholar 

  8. F. Bach, Consistency of trace norm minimization. J. Mach. Learn. Res. 9, 1019–1048 (2008)

    MathSciNet  MATH  Google Scholar 

  9. S. Bakin, Adaptive regression and model selection in data mining problems. Ph.D. thesis, Australian National University, 1999

    Google Scholar 

  10. A.S. Bandeira, E. Dobriban, D.G. Mixon, W.F. Sawin, Certifying the restricted isometry property is hard. IEEE Trans. Inf. Theory 59(6), 3448–3450 (2013)

    Article  MathSciNet  Google Scholar 

  11. A. Barron, L. Birgé, P. Massart, Risk bounds for model selection via penalization. Probab. Theory Relat. Fields 113(3), 301–413 (1999)

    Article  MATH  Google Scholar 

  12. H.H. Bauschke, P.L. Combettes, A dykstra-like algorithm for two monotone operators. Pac. J. Optim. 4(3), 383–391 (2008)

    MathSciNet  MATH  Google Scholar 

  13. H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces (Springer, New York, 2011)

    Book  MATH  Google Scholar 

  14. H.H. Bauschke, A.S Lewis, Dykstras algorithm with bregman projections: a convergence proof. Optimization 48(4), 409–427 (2000)

    Google Scholar 

  15. A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  16. A. Beck, M. Teboulle, Gradient-based algorithms with applications to signal recovery, in Convex Optimization in Signal Processing and Communications (Cambridge University Press, Cambridge, 2009)

    Google Scholar 

  17. L. Birgé, P. Massart, From model selection to adaptive estimation, Chapter 4, in Festschrift for Lucien Le Cam, ed. by D. Pollard, E. Torgersen, L.Y. Grace (Springer, New York, 1997), pp. 55–87

    Chapter  Google Scholar 

  18. L. Birgé, P. Massart, Minimal penalties for Gaussian model selection. Probab. Theory Relat. Fields 138(1–2), 33–73 (2007)

    Article  MATH  Google Scholar 

  19. T. Blu, F. Luisier, The SURE-LET approach to image denoising. IEEE Trans. Image Process. 16(11), 2778–2786 (2007)

    Article  MathSciNet  Google Scholar 

  20. T. Blumensath, M.E. Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  21. J. Bolte, A. Daniilidis, A.S. Lewis, Generic optimality conditions for semialgebraic convex programs. Math. Oper. Res. 36(1), 55–70 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  22. C. Boncelet, Image noise models. Handbook of Image and Video Processing (Academic, New York, 2005)

    Google Scholar 

  23. J.F. Bonnans, A. Shapiro, Perturbation Analysis of Optimization Problems. Springer Series in Operations Research and Financial Engineering (Springer, New York, 2000)

    Google Scholar 

  24. L. Borup, R. Gribonval, M. Nielsen, Beyond coherence: recovering structured time-frequency representations. Appl. Comput. Harmon. Anal. 24(1), 120–128 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  25. S.P. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)

    Book  MATH  Google Scholar 

  26. K. Bredies, D.A. Lorenz, Linear convergence of iterative soft-thresholding. J. Four. Anal. Appl. 14(5–6), 813–837 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  27. K. Bredies, H.K. Pikkarainen, Inverse problems in spaces of measures. ESAIM Control Optim. Calc. Var. 19, 190–218 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  28. K. Bredies, K. Kunisch, T. Pock, Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)

    Article  MathSciNet  MATH  Google Scholar 

  29. L.M. Briceño Arias, P.L. Combettes, A monotone+skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 21(4), 1230–1250 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  30. M. Burger, S. Osher, Convergence rates of convex variational regularization. Inverse Prob. 20(5), 1411 (2004)

    Google Scholar 

  31. T.T. Cai, Adaptive wavelet estimation: a block thresholding and oracle inequality approach. Ann. stat. 27(3), 898–924 (1999)

    Article  MATH  Google Scholar 

  32. T.T. Cai, B.W. Silverman, Incorporating information on neighbouring coefficients into wavelet estimation. Sankhya Indian J. Stat. Ser. B 63, 127–148 (2001)

    MathSciNet  MATH  Google Scholar 

  33. T.T. Cai, H.H. Zhou, A data-driven block thresholding approach to wavelet estimation. Ann. Stat. 37(2), 569–595 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  34. E.J. Candès, D.L. Donoho, Curvelets: a surprisingly effective nonadaptive representation for objects with edges. Technical report, DTIC Document (2000)

    Google Scholar 

  35. E.J. Candès, Y. Plan, Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)

    Article  Google Scholar 

  36. E.J. Candès, Y. Plan, A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inf. Theory 57(11), 7235–7254 (2011)

    Article  Google Scholar 

  37. E.J. Candès, Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. Inf. Theory 57(4), 2342–2359 (2011)

    Article  Google Scholar 

  38. E.J. Candès, B. Recht, Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  39. E.J. Candès, B. Recht, Simple bounds for recovering low-complexity models. Math. Program. 141(1–2), 577–589 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  40. E. J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)

    Article  MATH  Google Scholar 

  41. E.J. Candès, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006)

    Article  MATH  Google Scholar 

  42. E.J. Candès, T. Tao, The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)

    Article  Google Scholar 

  43. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

    Article  MATH  Google Scholar 

  44. E.J. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)

    Article  MATH  Google Scholar 

  45. E.J. Candès, M. Wakin, S. Boyd, Enhancing sparsity by reweighted 1 minimization. J. Four. Anal. Appl. 14, 877–905 (2007)

    Article  Google Scholar 

  46. E.J. Candès, Y.C. Eldar, D. Needell, P. Randall, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  47. E.J. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis? J. ACM 58(3), 11:1–11:37 (2011)

    Google Scholar 

  48. E.J. Candès, C.A. Sing-Long, J.D. Trzasko, Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Trans. Signal Process. 61(19), 4643–4657 (2012)

    Article  Google Scholar 

  49. E.J. Candès, T. Strohmer, V. Voroninski, Phaselift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66(8), 1241–1274 (2013)

    Article  MATH  Google Scholar 

  50. E.J. Candès, C. Fernandez-Granda, Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67(6), 906–956 (2014)

    Article  MATH  Google Scholar 

  51. Y. Censor, S. Reich, The dykstra algorithm with bregman projections. Commun. Appl. Anal. 2, 407–419 (1998)

    MathSciNet  MATH  Google Scholar 

  52. A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  53. A. Chambolle, V. Caselles, D. Cremers, M. Novaga, T. Pock, An introduction to total variation for image analysis, in Theoretical Foundations and Numerical Methods for Sparse Recovery (De Gruyter, Berlin, 2010)

    Google Scholar 

  54. V. Chandrasekaran, B. Recht, P.A. Parrilo, A. Willsky, The convex geometry of linear inverse problems. Found. Comput. Math. 12(6), 805–849 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  55. C. Chaux, L. Duval, A. Benazza-Benyahia, J.-C. Pesquet, A nonlinear stein-based estimator for multichannel image denoising. IEEE Trans. Signal Process. 56(8), 3855–3870 (2008)

    Article  MathSciNet  Google Scholar 

  56. G. Chen, M. Teboulle, A proximal-based decomposition method for convex minimization problems. Math. Program. 64(1–3), 81–101 (1994)

    Article  MathSciNet  MATH  Google Scholar 

  57. J. Chen, X. Huo, Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process. 54(12), 4634–4643 (2006)

    Article  Google Scholar 

  58. S.S. Chen, D.L. Donoho, M.A. Saunders, Atomic decomposition by Basis Pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1999)

    Article  MathSciNet  MATH  Google Scholar 

  59. Y. Chen, T. Pock, H. Bischof, Learning 1-based analysis and synthesis sparsity priors using bi-level optimization, in NIPS (2012)

    Google Scholar 

  60. R. Ciak, B. Shafei, G. Steidl, Homogeneous penalizers and constraints in convex image restoration. J. Math. Imaging Vis. 47, 210–230 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  61. J.F. Claerbout, F. Muir, Robust modeling with erratic data. Geophysics 38(5), 826–844 (1973)

    Article  Google Scholar 

  62. K.L. Clarkson, Coresets, sparse greedy approximation, and the frank-wolfe algorithm, in 19th ACM-SIAM Symposium on Discrete Algorithms (2008), pp. 922–931

    Google Scholar 

  63. P.L. Combettes, J.-C. Pesquet, A proximal decomposition method for solving convex variational inverse problems. Inverse Prob. 24(6), 065014 (2008)

    Google Scholar 

  64. P.L. Combettes, J.-C. Pesquet, Proximal splitting methods in signal processing, in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, ed. by H.H. Bauschke, R.S. Burachik, P.L. Combettes, V. Elser, D.R. Luke, H. Wolkowicz (Springer, New York, 2011), pp. 185–212

    Chapter  Google Scholar 

  65. P.L. Combettes, J.C. Pesquet, Primal–dual splitting algorithm for solving inclusions with mixtures of composite, lipschitzian, and parallel-sum type monotone operators. Set-Valued Var. Anal. 20(2), 307–330 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  66. P.L. Combettes, V.R. Wajs, Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

    Article  MathSciNet  MATH  Google Scholar 

  67. L. Condat, A primal–dual splitting method for convex optimization involving lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158, 1–20 (2012)

    MathSciNet  Google Scholar 

  68. M Coste, An introduction to o-minimal geometry. Pisa: Istituti editoriali e poligrafici internazionali (2000)

    Google Scholar 

  69. S.F. Cotter, B.D. Rao, J. Engan, K. Kreutz-Delgado, Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 53(7), 2477–2488 (2005)

    Article  MathSciNet  Google Scholar 

  70. A. Daniilidis, D. Drusvyatskiy, A.S. Lewis, Orthogonal invariance and identifiability. Technical report (2013) [arXiv 1304.1198]

    Google Scholar 

  71. A. Daniilidis, J. Malick, H.S. Sendov, Spectral (isotropic) manifolds and their dimension, to appear in Journal d’Analyse Mathematique, 25 (2014)

    Google Scholar 

  72. I. Daubechies, R. DeVore, M. Fornasier, C.S. Gunturk, Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63(1), 1–38 (2010)

    Article  MathSciNet  MATH  Google Scholar 

  73. G. Davis, S.G. Mallat, Z. Zhang, Adaptive time-frequency approximations with matching pursuits. Technical report, Courant Institute of Mathematical Sciences (1994)

    Book  Google Scholar 

  74. Y. de Castro, F. Gamboa, Exact reconstruction using beurling minimal extrapolation. J. Math. Anal. Appl. 395(1), 336–354 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  75. C-A. Deledalle, V. Duval, J. Salmon, Non-local Methods with Shape-Adaptive Patches (NLM-SAP). J. Math. Imaging Vis. 43, 1–18 (2011)

    MathSciNet  Google Scholar 

  76. C. Deledalle, S. Vaiter, G. Peyré, M.J. Fadili, C. Dossal, Proximal splitting derivatives for risk estimation, in 2nd International Workshop on New Computational Methods for Inverse Problems (NCMIP), Paris (2012)

    Google Scholar 

  77. C.-A. Deledalle, S. Vaiter, G. Peyré, M.J. Fadili, C. Dossal, Risk estimation for matrix recovery with spectral regularization, in ICML’12 Workshops (2012) [arXiv:1205.1482v1]

    Google Scholar 

  78. D.L. Donoho, X. Huo, Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001)

    Article  MathSciNet  MATH  Google Scholar 

  79. D.L. Donoho, Johnstone, I.M.: Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 90(432), 1200–1224 (1995)

    Google Scholar 

  80. D.L. Donoho, B.F. Logan, Signal recovery and the large sieve. SIAM J. Appl. Math. 52(2), 577–591 (1992)

    Article  MathSciNet  MATH  Google Scholar 

  81. D.L. Donoho, P.B. Stark, Uncertainty principles and signal recovery. SIAM J. Appl. Math. 49(3), 906–931 (1989)

    Article  MathSciNet  MATH  Google Scholar 

  82. D.L. Donoho, Y. Tsaig, Fast solution of 1-norm minimization problems when the solution may be sparse. IEEE Trans. Inf. Theory 54(11), 4789–4812 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  83. C. Dossal, S. Mallat, Sparse spike deconvolution with minimum scale, in Proc. SPARS 2005 (2005)

    Google Scholar 

  84. C. Dossal, M.-L. Chabanol, G. Peyré, J.M. Fadili, Sharp support recovery from noisy random measurements by 1-minimization. Appl. Comput. Harmon. Anal. 33(1), 24–43 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  85. C. Dossal, M. Kachour, M.J. Fadili, G. Peyré, C. Chesneau, The degrees of freedom of the Lasso for general design matrix. Stat. Sin. 23, 809–828 (2013)

    MATH  Google Scholar 

  86. J. Douglas, H.H. Rachford, On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)

    Article  MathSciNet  MATH  Google Scholar 

  87. M. Dudík, Z. Harchaoui, J. Malick, Lifted coordinate descent for learning with trace-norm regularization, in Proc. AISTATS, ed. by N.D. Lawrence, M. Girolami. JMLR Proceedings, vol. 22, JMLR.org (2012), pp. 327–336

    Google Scholar 

  88. J.C. Dunn, S. Harshbarger, Conditional gradient algorithms with open loop step size rules. J. Math. Anal. Appl. 62(2), 432–444 (1978)

    Article  MathSciNet  MATH  Google Scholar 

  89. V. Duval, G. Peyré, Exact support recovery for sparse spikes deconvolution. Technical report, Preprint hal-00839635 (2013)

    Google Scholar 

  90. V. Duval, J.-F. Aujol, Y. Gousseau, A bias-variance approach for the non-local means. SIAM J. Imaging Sci. 4(2), 760–788 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  91. R.L. Dykstra, An algorithm for restricted least squares regression. J. Am. Stat. 78, 839–842 (1983)

    Google Scholar 

  92. J. Eckstein, Parallel alternating direction multiplier decomposition of convex programs. J. Optim. Theory Appl. 80(1), 39–62 (1994)

    Article  MathSciNet  MATH  Google Scholar 

  93. J. Eckstein, D.P. Bertsekas, On the douglas–rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)

    Article  MathSciNet  MATH  Google Scholar 

  94. J. Eckstein, B.F. Svaiter, General projective splitting methods for sums of maximal monotone operators. SIAM J. Control Optim. 48(2), 787–811 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  95. B. Efron, How biased is the apparent error rate of a prediction rule? J. Am. Stat. Assoc. 81(394), 461–470 (1986)

    Article  MathSciNet  MATH  Google Scholar 

  96. B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, Least angle regression. Ann. Stat. 32(2), 407–451 (2004)

    Article  MathSciNet  MATH  Google Scholar 

  97. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Iimage Processing (Springer, New York, 2010)

    Book  Google Scholar 

  98. M. Elad, J.-L. Starck, P. Querre, D.L. Donoho, Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Appl. Comput. Harmon. Anal. 19(3), 340–358 (2005)

    Article  MathSciNet  MATH  Google Scholar 

  99. M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Prob. 23(3), 947 (2007)

    Google Scholar 

  100. Y.C. Eldar, Generalized SURE for exponential families: applications to regularization. IEEE Trans. Signal Process. 57(2), 471–481 (2009)

    Article  MathSciNet  Google Scholar 

  101. M.J. Fadili, G. Peyré, S. Vaiter, C.-A. Deledalle, J. Salmon, Stable recovery with analysis decomposable priors, in Proc. SampTA (2013)

    Google Scholar 

  102. M. Fazel, Matrix rank minimization with applications. Ph.D. thesis, Stanford University, 2002

    Google Scholar 

  103. M. Fazel, H. Hindi, S.P. Boyd, A rank minimization heuristic with application to minimum order system approximation, in Proceedings of the 2001 American Control Conference, vol. 6 (IEEE, Arlington, 2001), pp. 4734–4739

    Google Scholar 

  104. M. Fortin, R. Glowinski, Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems (Elsevier, Amsterdam, 2000)

    Google Scholar 

  105. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing. Birkhäuser Series in Applied and Numerical Harmonic Analysis (Birkhäuser, Basel, 2013)

    Google Scholar 

  106. M. Frank, P. Wolfe, An algorithm for quadratic programming. Nav. Res. Logist. Q. 3(1–2), 95–110 (1956)

    Article  MathSciNet  Google Scholar 

  107. J.-J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Trans. Inf. Theory 50(6), 1341–1344 (2004)

    Article  MATH  Google Scholar 

  108. J.-J. Fuchs, Spread representations, in Signals, Systems and Computers (ASILOMAR) (IEEE, Pacific Grove, 2011), pp. 814–817

    Google Scholar 

  109. D. Gabay, Applications of the method of multipliers to variational inequalities, in Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-value Problems, ed. by M. Fortin, R. Glowinski (North-Holland, Amsterdam, 1983)

    Google Scholar 

  110. D. Gabay, B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976)

    Article  MATH  Google Scholar 

  111. A Girard, A fast Monte-Carlo cross-validation procedure for large least squares problems with noisy data. Numer. Math. 56(1), 1–23 (1989)

    Article  MathSciNet  MATH  Google Scholar 

  112. R. Giryes, M. Elad, Y.C. Eldar, The projected GSURE for automatic parameter tuning in iterative shrinkage methods. Appl. Comput. Harmon. Anal. 30(3), 407–422 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  113. R. Glowinski, P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9 (SIAM, Philadelphia, 1989)

    Book  MATH  Google Scholar 

  114. M. Golbabaee, P. Vandergheynst, Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery, in 2012 IEEE International Conference On Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Kyoto, 2012), pp. 2741–2744

    Book  Google Scholar 

  115. G.H. Golub, M. Heath, G. Wahba, Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21(2), 215–223 (1979)

    Article  MathSciNet  MATH  Google Scholar 

  116. M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals. Inverse Prob. 27(7), 075014 (2011)

    Google Scholar 

  117. M. Grasmair, O. Scherzer, M. Haltmeier, Necessary and sufficient conditions for linear convergence of l1-regularization. Commun. Pure Appl. Math. 64(2), 161–182 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  118. E. Grave, G. Obozinski, F. Bach, Trace Lasso: a trace norm regularization for correlated designs, in Neural Information Processing Systems (NIPS), Spain (2012)

    Google Scholar 

  119. R. Gribonval, Should penalized least squares regression be interpreted as maximum a posteriori estimation? IEEE Trans. Signal Process. 59(5), 2405–2410 (2011)

    Article  MathSciNet  Google Scholar 

  120. R. Gribonval, M. Nielsen, Beyond sparsity: recovering structured representations by 1-minimization and greedy algorithms. Adv. Comput. Math. 28(1), 23–41 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  121. R. Gribonval, K. Schnass, Dictionary identification - sparse matrix factorization via 1-minimization. IEEE Trans. Inf. Theory 56(7), 3523–3539 (2010)

    Article  MathSciNet  Google Scholar 

  122. R. Gribonval, H. Rauhut, K. Schnass, P. Vandergheynst, Atoms of all channels, unite! average case analysis of multi-channel sparse recovery using greedy algorithms. J. Four. Anal. Appl. 14(5–6), 655–687 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  123. D. Gross, Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57(3), 1548–1566 (2011)

    Article  Google Scholar 

  124. E. Hale, W. Yin, Y. Zhang, Fixed-point continuation for 1-minimization: methodology and convergence. SIAM J. Optim. 19(3), 1107–1130 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  125. P. Hall, S. Penev, G. Kerkyacharian, D. Picard, Numerical performance of block thresholded wavelet estimators. Stat. Comput. 7(2), 115–124 (1997)

    Article  Google Scholar 

  126. P. Hall, G. Kerkyacharian, D. Picard, On the minimax optimality of block thresholded wavelet estimators. Stat. Sin. 9(1), 33–49 (1999)

    MathSciNet  MATH  Google Scholar 

  127. N.R. Hansen, A. Sokol, Degrees of freedom for nonlinear least squares estimation. Technical report (2014) [arXiv 1402.2997]

    Google Scholar 

  128. E. Harchaoui, A. Juditsky, A. Nemirovski, Conditional gradient algorithms for norm-regularized smooth convex optimization. Math. Program. 152(1–2), 75–112 (2014)

    MathSciNet  Google Scholar 

  129. W.L. Hare, Identifying active manifolds in regularization problems, Chapter 13, in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, ed. by H.H. Bauschke, R.S., Burachik, P.L. Combettes, V. Elser, D.R. Luke, H. Wolkowicz. Springer Optimization and Its Applications, vol. 49 (Springer, New York, 2011)

    Google Scholar 

  130. W.L. Hare, A.S. Lewis, Identifying active constraints via partial smoothness and prox- regularity. J. Convex Anal. 11(2), 251–266 (2004)

    MathSciNet  MATH  Google Scholar 

  131. W. Hare, A.S. Lewis, Identifying active manifolds. Algorithmic Oper. Res. 2(2) (2007)

    Google Scholar 

  132. J.-B. Hiriart-Urruty, H.Y. Le, Convexifying the set of matrices of bounded rank: applications to the quasiconvexification and convexification of the rank function. Optim. Lett. 6(5), 841–849 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  133. B. Hofmann, B. Kaltenbacher, C. Poeschl, O. Scherzer, A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Prob. 23(3), 987 (2007)

    Google Scholar 

  134. H.M. Hudson, A natural identity for exponential families with applications in multiparameter estimation. Ann. Stat. 6(3), 473–484 (1978)

    Article  MathSciNet  MATH  Google Scholar 

  135. M. Jaggi, M. Sulovsky, A simple algorithm for nuclear norm regularized problems, in ICML (2010)

    Google Scholar 

  136. H. Jégou, M. Douze, C. Schmid, Improving bag-of-features for large scale image search. Int. J. Comput. Vis. 87(3), 316–336 (2010)

    Article  Google Scholar 

  137. H. Jégou, T. Furon, J.-J. Fuchs, Anti-sparse coding for approximate nearest neighbor search, in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Kyoto, 2012), pp. 2029–2032

    Book  Google Scholar 

  138. R. Jenatton, J.Y. Audibert, F. Bach, Structured variable selection with sparsity-inducing norms. J. Mach. Learn. Res. 12, 2777–2824 (2011)

    MathSciNet  MATH  Google Scholar 

  139. R. Jenatton, R. Gribonval, F. Bach, Local stability and robustness of sparse dictionary learning in the presence of noise (2012) [arxiv:1210.0685]

    Google Scholar 

  140. J. Jia, B. Yu, On model selection consistency of the elastic net when p ≫ n. Stat. Sin. 20, 595–611 (2010)

    Google Scholar 

  141. K. Kato, On the degrees of freedom in shrinkage estimation. J. Multivariate Anal. 100(7), 1338–1352 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  142. K. Knight, W. Fu, Asymptotics for Lasso-Type Estimators. Ann. Stat. 28(5), 1356–1378 (2000)

    MathSciNet  MATH  Google Scholar 

  143. J.M. Lee, Smooth Manifolds (Springer, New York, 2003)

    Book  Google Scholar 

  144. C. Lemaréchal, F. Oustry, C. Sagastizábal, The \(\mathcal{U}\)-lagrangian of a convex function. Trans. Am. Math. Soc. 352(2), 711–729 (2000)

    Article  MATH  Google Scholar 

  145. A.S. Lewis, Active sets, nonsmoothness, and sensitivity. SIAM J. Optim. 13(3), 702–725 (2002)

    Article  MathSciNet  MATH  Google Scholar 

  146. A.S. Lewis, The mathematics of eigenvalue optimization. Math. Program. 97(1–2), 155–176 (2003)

    MathSciNet  MATH  Google Scholar 

  147. A.S. Lewis, J. Malick, Alternating projections on manifolds. Math. Oper. Res. 33(1), 216–234 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  148. A.S. Lewis, S. Zhang, Partial smoothness, tilt stability, and generalized hessians. SIAM J. Optim. 23(1), 74–94 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  149. K.-C. Li. From Stein’s unbiased risk estimates to the method of generalized cross validation. Ann. Stat. 13(4), 1352–1377 (1985)

    Article  MATH  Google Scholar 

  150. J. Liang, M.J Fadili, G. Peyré, Local linear convergence of forward–backward under partial smoothness. Technical report (2014) [arxiv preprint arXiv:1407.5611]

    Google Scholar 

  151. S.G. Lingala, Y. Hu, E.V.R. Di Bella, M. Jacob, Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans. Med. Imaging 30(5), 1042–1054 (2011)

    Article  Google Scholar 

  152. P.L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

    Article  MathSciNet  MATH  Google Scholar 

  153. D.A. Lorenz, Convergence rates and source conditions for Tikhonov regularization with sparsity constraints. J. Inverse Ill-Posed Prob. 16(5), 463–478 (2008)

    MathSciNet  MATH  Google Scholar 

  154. D. Lorenz, N. Worliczek, Necessary conditions for variational regularization schemes. Inverse Prob. 29(7), 075016 (2013)

    Google Scholar 

  155. F. Luisier, T. Blu, M. Unser, Sure-let for orthonormal wavelet-domain video denoising. IEEE Trans. Circuits Syst. Video Technol. 20(6), 913–919 (2010)

    Article  Google Scholar 

  156. Y. Lyubarskii, R. Vershynin, Uncertainty principles and vector quantization. IEEE Trans. Inf. Theory 56(7), 3491–3501 (2010)

    Article  MathSciNet  Google Scholar 

  157. J. Mairal, B. Yu, Complexity analysis of the lasso regularization path, in ICML’12 (2012)

    Google Scholar 

  158. S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 11(7), 674–693 (1989)

  159. S.G. Mallat, A Wavelet Tour of Signal Processing, 3rd edn. (Elsevier/Academic, Amsterdam, 2009)

  160. S.G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)

  161. C.L. Mallows, Some comments on \(C_{p}\). Technometrics 15(4), 661–675 (1973)

  162. B. Mercier, Topics in finite element solution of elliptic problems. Lect. Math. 63 (1979)

  163. M. Meyer, M. Woodroofe, On the degrees of freedom in shape-restricted regression. Ann. Stat. 28(4), 1083–1104 (2000)

  164. A. Montanari, Graphical models concepts in compressed sensing, in Compressed Sensing, ed. by Y. Eldar, G. Kutyniok (Cambridge University Press, Cambridge, 2012)

  165. B.S. Mordukhovich, Sensitivity analysis in nonsmooth optimization, in Theoretical Aspects of Industrial Design, vol. 58, ed. by D.A. Field, V. Komkov (SIAM, Philadelphia, 1992), pp. 32–46

  166. S. Nam, M.E. Davies, M. Elad, R. Gribonval, The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 34(1), 30–56 (2013)

  167. B.K. Natarajan, Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  168. D. Needell, J. Tropp, R. Vershynin, Greedy signal recovery review, in Conference on Signals, Systems and Computers (IEEE, Pacific Grove, 2008), pp. 1048–1050

  169. S.N. Negahban, M.J. Wainwright, Simultaneous support recovery in high dimensions: Benefits and perils of block-regularization. IEEE Trans. Inf. Theory 57(6), 3841–3863 (2011)

  170. S. Negahban, P. Ravikumar, M.J. Wainwright, B. Yu, A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Stat. Sci. 27(4), 538–557 (2012)

  171. Y. Nesterov, Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)

  172. Y. Nesterov, Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE) (2007)

  173. Y. Nesterov, A. Nemirovskii, Y. Ye, Interior-Point Polynomial Algorithms in Convex Programming, vol. 13 (SIAM, Philadelphia, 1994)

  174. G. Obozinski, B. Taskar, M.I. Jordan, Joint covariate selection and joint subspace selection for multiple classification problems. Stat. Comput. 20(2), 231–252 (2010)

  175. M.R. Osborne, B. Presnell, B.A. Turlach, A new approach to variable selection in least squares problems. IMA J. Numer. Anal. 20(3), 389–403 (2000)

  176. S. Oymak, A. Jalali, M. Fazel, Y.C. Eldar, B. Hassibi, Simultaneously structured models with application to sparse and low-rank matrices. (2012) [arXiv preprint arXiv:1212.3753]

  177. N. Parikh, S.P. Boyd, Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2013)

  178. G.B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)

  179. Y.C. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal Matching Pursuit: recursive function approximation with applications to wavelet decomposition, in Conference on Signals, Systems and Computers (IEEE, Pacific Grove, 1993), pp. 40–44

  180. J-C. Pesquet, A. Benazza-Benyahia, C. Chaux, A SURE approach for digital signal/image deconvolution problems. IEEE Trans. Signal Process. 57(12), 4616–4632 (2009)

  181. G. Peyré, M.J. Fadili, J.-L. Starck, Learning the morphological diversity. SIAM J. Imaging Sci. 3(3), 646–669 (2010)

  182. G. Peyré, J. Fadili, C. Chesneau, Adaptive structured block sparsity via dyadic partitioning, in Proc. EUSIPCO 2011 (2011), pp. 1455–1459

  183. H. Raguet, J. Fadili, G. Peyré, Generalized forward–backward splitting. SIAM J. Imaging Sci. 6(3), 1199–1226 (2013)

  184. S. Ramani, T. Blu, M. Unser, Monte-Carlo SURE: a black-box optimization of regularization parameters for general denoising algorithms. IEEE Trans. Image Process. 17(9), 1540–1554 (2008)

  185. S. Ramani, Z. Liu, J. Rosen, J.-F. Nielsen, J.A. Fessler, Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Trans. Image Process. 21(8), 3659–3672 (2012)

  186. S. Ramani, J. Rosen, Z. Liu, J.A. Fessler, Iterative weighted risk estimation for nonlinear image restoration with analysis priors, in Computational Imaging X, vol. 8296 (2012), p. 82960N

  187. B.D. Rao, K. Kreutz-Delgado, An affine scaling methodology for best basis selection. IEEE Trans. Signal Process. 47(1), 187–200 (1999)

  188. B. Recht, M. Fazel, P.A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)

  189. R. Refregier, F. Goudail, Statistical Image Processing Techniques for Noisy Images - An Application Oriented Approach (Kluwer, New York, 2004)

  190. E. Resmerita, Regularization of ill-posed problems in Banach spaces: convergence rates. Inverse Prob. 21(4), 1303 (2005)

  191. E. Richard, F. Bach, J.-P. Vert, Intersecting singularities for multi-structured estimation, in International Conference on Machine Learning, Atlanta, USA (2013)

  192. R.T. Rockafellar, R. Wets, Variational Analysis, vol. 317 (Springer, Berlin, 1998)

  193. M. Rudelson, R. Vershynin, On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61(8), 1025–1045 (2008)

  194. L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1), 259–268 (1992)

  195. F. Santosa, W.W. Symes, Linear inversion of band-limited reflection seismograms. SIAM J. Sci. Stat. Comput. 7(4), 1307–1330 (1986)

  196. O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, F. Lenzen, Variational Methods in Imaging, vol. 167 (Springer, New York, 2009)

  197. I.W. Selesnick, M.A.T. Figueiredo, Signal restoration with overcomplete wavelet transforms: comparison of analysis and synthesis priors, in Proceedings of SPIE, vol. 7446 (2009), p. 74460D

  198. S. Shalev-Shwartz, A. Gonen, O. Shamir, Large-scale convex minimization with a low-rank constraint, in ICML (2011)

  199. X. Shen, J. Ye, Adaptive model selection. J. Am. Stat. Assoc. 97(457), 210–221 (2002)

  200. V. Solo, M. Ulfarsson, Threshold selection for group sparsity, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, Dallas, 2010), pp. 3754–3757

  201. M.V. Solodov, A class of decomposition methods for convex optimization and monotone variational inclusions via the hybrid inexact proximal point framework. Optim. Methods Softw. 19(5), 557–575 (2004)

  202. D.A. Spielman, H. Wang, J. Wright, Exact recovery of sparsely-used dictionaries. J. Mach. Learn. Res. 23, 1–35 (2012)

  203. N. Srebro, Learning with matrix factorizations. Ph.D. thesis, MIT, 2004

  204. J.-L. Starck, M. Elad, D.L. Donoho, Image decomposition via the combination of sparse representations and a variational approach. IEEE Trans. Image Process. 14(10), 1570–1582 (2005)

  205. J.-L. Starck, F. Murtagh, J.M. Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity (Cambridge University Press, Cambridge, 2010)

  206. G. Steidl, J. Weickert, T. Brox, P. Mrázek, M. Welk, On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and sides. SIAM J. Numer. Anal. 42(2), 686–713 (2004)

  207. C.M. Stein, Estimation of the mean of a multivariate normal distribution. Ann. Stat. 9(6), 1135–1151 (1981)

  208. T. Strohmer, R.W. Heath Jr., Grassmannian frames with applications to coding and communication. Appl. Comput. Harmon. Anal. 14(3), 257–275 (2003)

  209. C. Studer, W. Yin, R.G. Baraniuk, Signal representations with minimum \(\ell_{\infty }\)-norm, in Proc. 50th Ann. Allerton Conf. on Communication, Control, and Computing (2012)

  210. B.F. Svaiter, H. Attouch, J. Bolte, Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. Ser. A 137(1–2), 91–129 (2013)

  211. H.L. Taylor, S.C. Banks, J.F. McCoy, Deconvolution with the \(\ell_{1}\) norm. Geophysics 44(1), 39–52 (1979)

  212. R. Tibshirani, Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B. Methodol. 58(1), 267–288 (1996)

  213. R.J. Tibshirani, J. Taylor, The solution path of the generalized Lasso. Ann. Stat. 39(3), 1335–1371 (2011)

  214. R.J. Tibshirani, J. Taylor, Degrees of freedom in Lasso problems. Ann. Stat. 40(2), 1198–1232 (2012)

  215. R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, K. Knight, Sparsity and smoothness via the fused Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 67(1), 91–108 (2005)

  216. A.N. Tikhonov, Regularization of incorrectly posed problems. Soviet Math. Dokl. 4, 1624–1627 (1963)

  217. A.N. Tikhonov, Solution of incorrectly formulated problems and the regularization methods. Soviet Math. Dokl. 4, 1035–1038 (1963)

  218. A.N. Tikhonov, V. Arsenin, Solutions of Ill-Posed Problems. (V. H. Winston and Sons, Washington, 1977)

  219. A.M. Tillmann, M.E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60(2), 1248–1259 (2014)

  220. J. Tropp, Convex recovery of a structured signal from independent random linear measurements, in Sampling Theory, a Renaissance (Birkhäuser, Basel, 2014)

  221. J.A. Tropp, Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans. Inf. Theory 52(3), 1030–1051 (2006)

  222. P. Tseng, Alternating projection-proximal methods for convex programming and variational inequalities. SIAM J. Optim. 7(4), 951–965 (1997)

  223. P. Tseng, S. Yun, A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. Ser. B 117 (2009)

  224. B.A. Turlach, W.N. Venables, S.J. Wright, Simultaneous variable selection. Technometrics 47(3), 349–363 (2005)

  225. S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal, Degrees of freedom of the group Lasso, in ICML’12 Workshops (2012), pp. 89–92

  226. S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal, The degrees of freedom of partly smooth regularizers. Technical report, Preprint Hal-00768896 (2013)

  227. S. Vaiter, C.-A. Deledalle, G. Peyré, C. Dossal, J. Fadili, Local behavior of sparse analysis regularization: applications to risk estimation. Appl. Comput. Harmon. Anal. 35(3), 433–451 (2013)

  228. S. Vaiter, M. Golbabaee, M.J. Fadili, G. Peyré, Model selection with low complexity priors. Technical report (2013) [arXiv preprint arXiv:1307.2342]

  229. S. Vaiter, G. Peyré, C. Dossal, M.J. Fadili, Robust sparse analysis regularization. IEEE Trans. Inf. Theory 59(4), 2001–2016 (2013)

  230. S. Vaiter, G. Peyré, J.M. Fadili, C.-A. Deledalle, C. Dossal, The degrees of freedom of the group Lasso for a general design, in Proc. SPARS’13 (2013)

  231. S. Vaiter, G. Peyré, M.J. Fadili, Robust polyhedral regularization, in Proc. SampTA (2013)

  232. S. Vaiter, G. Peyré, J. Fadili, Model consistency of partly smooth regularizers. Technical report, Preprint Hal-00987293 (2014)

  233. D. Van De Ville, M. Kocher, SURE-based Non-Local Means. IEEE Signal Process. Lett. 16(11), 973–976 (2009)

  234. D. Van De Ville, M. Kocher, Non-local means with dimensionality reduction and SURE-based parameter selection. IEEE Trans. Image Process. 9(20), 2683–2690 (2011)

  235. L. van den Dries, C. Miller, Geometric categories and o-minimal structures. Duke Math. J. 84(2), 497–540 (1996)

  236. J.E. Vogt, V. Roth, A complete analysis of the \(\ell_{1,p}\) group-Lasso, in International Conference on Machine Learning (2012)

  237. C. Vonesch, S. Ramani, M. Unser, Recursive risk estimation for non-linear image deconvolution with a wavelet-domain sparsity constraint, in International Conference on Image Processing (IEEE, San Diego, 2008), pp. 665–668

  238. B.C. Vũ, A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv. Comput. Math. 38, 1–15 (2011)

  239. M.J. Wainwright, Sharp thresholds for noisy and high-dimensional recovery of sparsity using \(\ell_{1}\)-constrained quadratic programming (Lasso). IEEE Trans. Inf. Theory 55(5), 2183–2202 (2009)

  240. S.J. Wright, Identifiable surfaces in constrained optimization. SIAM J. Control Optim. 31(4), 1063–1079 (1993)

  241. J. Ye, On measuring and correcting the effects of data mining and model selection. J. Am. Stat. Assoc. 93, 120–131 (1998)

  242. M. Yuan, Y. Lin, Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 68(1), 49–67 (2005)

  243. P. Zhao, B. Yu, On model selection consistency of Lasso. J. Mach. Learn. Res. 7, 2541–2563 (2006)

  244. P. Zhao, G. Rocha, B. Yu, The composite absolute penalties family for grouped and hierarchical variable selection. Ann. Stat. 37(6A), 3468–3497 (2009)

  245. H. Zou, T. Hastie, R. Tibshirani, On the “degrees of freedom” of the Lasso. Ann. Stat. 35(5), 2173–2192 (2007)

Acknowledgements

This work was supported by the European Research Council (ERC project SIGMA-Vision). We would like to thank our collaborators Charles Deledalle, Charles Dossal, Mohammad Golbabaee, and Vincent Duval who have helped to build this unified view of the field.

Corresponding author

Correspondence to Samuel Vaiter.

Copyright information

© 2015 Springer International Publishing Switzerland

Cite this chapter

Vaiter, S., Peyré, G., Fadili, J. (2015). Low Complexity Regularization of Linear Inverse Problems. In: Pfander, G. (eds) Sampling Theory, a Renaissance. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-19749-4_3
