Sparsity-Constrained Optimization

Chapter in: Algorithms for Sparsity-Constrained Optimization

Part of the book series: Springer Theses (volume 261)

Abstract

Theoretical and application aspects of sparse estimation in linear models have been studied extensively in areas such as signal processing, machine learning, and statistics. Sparse linear regression and compressed sensing (CS) algorithms estimate a sparse vector whose consistency with the acquired data is usually measured by the squared error. While this measure of discrepancy is often desirable in signal processing applications, it is not the appropriate choice for a variety of other applications.
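
To make the contrast concrete, below is a minimal sketch of sparsity-constrained minimization with a non-quadratic data-fidelity term: a projected-gradient loop in the style of iterative hard thresholding, applied to a logistic loss. The loss, step size, problem dimensions, and synthetic data are illustrative assumptions chosen for this sketch; it is not the specific algorithm developed in this chapter.

import numpy as np

def hard_threshold(x, s):
    # Projection onto the sparsity constraint: keep the s largest-magnitude
    # entries of x and zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def sparse_projected_gradient(grad_f, x0, s, step=0.5, n_iter=300):
    # Generic projected-gradient loop for min f(x) subject to ||x||_0 <= s,
    # where f is any smooth cost (squared error, logistic loss, ...).
    x = hard_threshold(x0, s)
    for _ in range(n_iter):
        x = hard_threshold(x - step * grad_f(x), s)
    return x

# Toy instance with a logistic loss, where the squared error would be a poor
# measure of data fidelity (synthetic data; dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:s] = 3.0 * rng.standard_normal(s)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-A @ x_true))).astype(float)

def grad_logistic(x):
    # Gradient of the average logistic negative log-likelihood.
    return A.T @ (1.0 / (1.0 + np.exp(-A @ x)) - y) / n

x_hat = sparse_projected_gradient(grad_logistic, np.zeros(p), s)
print("estimated support:", np.flatnonzero(x_hat))
print("true support:     ", np.flatnonzero(x_true))

Replacing grad_logistic with the gradient of a squared-error cost recovers the familiar sparse linear regression / CS setting; other losses keep the same constraint set but change only the measure of consistency with the data.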

Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Bahmani, S. (2014). Sparsity-Constrained Optimization. In: Algorithms for Sparsity-Constrained Optimization. Springer Theses, vol 261. Springer, Cham. https://doi.org/10.1007/978-3-319-01881-2_3

  • DOI: https://doi.org/10.1007/978-3-319-01881-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-01880-5

  • Online ISBN: 978-3-319-01881-2

  • eBook Packages: Engineering (R0)
