Norm Approximation and Regularization

Part of the book series: Springer Optimization and Its Applications (SOIA, volume 103)

Abstract

In this chapter, we study norm approximation and regularization. We revisit several examples studied in Chaps. 2 and 3, viewing them from a new perspective.
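
As a concrete starting point, the following NumPy sketch (not taken from the chapter; the data, dimensions, and weight lam are illustrative assumptions) contrasts the basic norm-approximation problem, minimize ||Ax - b||_2, with its Tikhonov-regularized (ridge) variant in the spirit of [2, 3, 5].

```python
# Minimal sketch (illustrative, not from the chapter): least-squares norm
# approximation versus its Tikhonov-regularized (ridge) variant,
#   minimize ||A x - b||_2^2 + lam * ||x||_2^2,
# whose minimizer solves the normal equations (A^T A + lam I) x = A^T b.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))   # assumed over-determined data matrix
b = rng.standard_normal(30)        # assumed observation vector
lam = 0.1                          # assumed regularization weight

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]                            # plain norm approximation
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)   # Tikhonov / ridge solution

# The penalty shrinks the solution: ||x_tik||_2 is typically smaller than ||x_ls||_2.
print(np.linalg.norm(x_ls), np.linalg.norm(x_tik))
```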

Notes

  1. Some researchers have argued that the LS-SVM cannot be viewed as an SVM, since all data points are active in generating the solution of the optimization problem and no sparse set of support vectors exists [9]. This is why we introduce the LS-SVM in Chap. 4.
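
To see concretely why no sparsity arises, here is a hedged NumPy sketch of an LS-SVM classifier in the spirit of Suykens and Vandewalle [7]: training reduces to a single linear KKT system, so every multiplier alpha_i is generically nonzero and all data points remain active. The toy data, the RBF kernel choice, and the parameters gam and sigma are illustrative assumptions, not taken from the chapter.

```python
# Hedged sketch (assumed toy data, not from the chapter): the LS-SVM classifier
# is trained by solving one linear KKT system,
#   [ 0        y^T          ] [ b ]   [ 0 ]
#   [ y   Omega + I / gam   ] [ a ] = [ 1 ],   Omega_ij = y_i y_j K(x_i, x_j).
# Every alpha_i is generically nonzero, so all points stay "active" -- the
# missing sparsity discussed in the note above.
import numpy as np

def lssvm_train(X, y, gam=1.0, sigma=1.0):
    """Return (alpha, b) for an RBF-kernel LS-SVM; gam and sigma are assumed defaults."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-sq / (2.0 * sigma ** 2))                   # RBF kernel matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gam
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                                 # alpha, bias

# Tiny illustrative dataset: all returned alpha_i are nonzero (no sparsity).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_train(X, y)
print("alpha =", alpha, " b =", b)
```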

References

  1. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35(1), 73–101 (1964)

  2. Tychonoff, A.N.: On the stability of inverse problems. Dokl. Akad. Nauk SSSR 39(5), 195–198 (1943)

  3. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Winston & Sons, Washington, DC (1977)

  4. Hoerl, A.E.: Application of ridge analysis to regression problems. Chem. Eng. Prog. 58, 54–59 (1962)

  5. Hoerl, A.E., Kennard, R.W.: Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1), 55–67 (1970)

  6. Bickel, P.J., Li, B., Tsybakov, A.B., van de Geer, S.A., Yu, B., Valdés, T., Rivero, C., Fan, J., van der Vaart, A.: Regularization in statistics. Test 15(2), 271–344 (2006)

  7. Suykens, J.A.K., Vandewalle, J.: Least squares support vector machine classifiers. Neural Process. Lett. 9(3), 293–300 (1999)

  8. Suykens, J.A.K., Van Gestel, T., De Brabanter, J., De Moor, B., Vandewalle, J.: Least Squares Support Vector Machines. World Scientific, River Edge (2002)

  9. Rifkin, R.M.: Everything old is new again: a fresh look at historical approaches in machine learning. Ph.D. thesis, MIT (2002)

  10. Pelckmans, K., Suykens, J.A.K., De Moor, B.: Morozov, Ivanov and Tikhonov regularization based LS-SVMs. In: Pal, N.R., Kasabov, N., Mudi, R.K., Pal, S., Parui, S.K. (eds.) Neural Information Processing. Lecture Notes in Computer Science, vol. 3316, pp. 1216–1222. Springer, Berlin/Heidelberg (2004)

  11. Morozov, V.A.: Methods for Solving Incorrectly Posed Problems. Springer, New York (1984)

  12. Ivanov, V.V.: The Theory of Approximate Methods and Their Application to the Numerical Solution of Singular Integral Equations. Noordhoff International, Leyden (1976)

  13. Smola, A.J., Schölkopf, B., Müller, K.-R.: The connection between regularization operators and support vector kernels. Neural Netw. 11(4), 637–649 (1998)

  14. Donoho, D.L., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003)

  15. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Methodol. 58(1), 267–288 (1996)

  16. Tibshirani, R.: Regression shrinkage and selection via the lasso: a retrospective. J. R. Stat. Soc. Ser. B Stat. Methodol. 73(3), 273–282 (2011)

  17. Vidaurre, D., Bielza, C., Larrañaga, P.: A survey of L1 regression. Int. Stat. Rev. 81(3), 361–387 (2013)

  18. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)

  19. Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, New York (2010)

  20. Eldar, Y.C., Kutyniok, G.: Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge/New York (2012)

  21. Bach, F., Jenatton, R., Mairal, J., Obozinski, G.: Convex optimization with sparsity-inducing norms. In: Sra, S., Nowozin, S., Wright, S.J. (eds.) Optimization for Machine Learning, pp. 19–53. MIT, Cambridge (2012)

  22. Bolstad, W.M.: Introduction to Bayesian Statistics, 2nd edn. Wiley, Hoboken (2007)

  23. MacKay, D.J.C.: Bayesian interpolation. Neural Comput. 4(3), 415–447 (1992)

  24. Tipping, M.E.: Bayesian inference: an introduction to principles and practice in machine learning. In: Bousquet, O., von Luxburg, U., Rätsch, G. (eds.) Advanced Lectures on Machine Learning, pp. 41–62. Springer, Berlin/Heidelberg (2002)

  25. Williams, P.: Bayesian regularization and pruning using a Laplace prior. Neural Comput. 7, 117–143 (1995)

  26. Tipping, M.: The relevance vector machine. In: Solla, S.A., Leen, T.K., Müller, K.-R. (eds.) Advances in Neural Information Processing Systems NIPS 12, pp. 652–658. MIT, Cambridge/London (2000)

  27. Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 1, 211–244 (2001)

  28. Candès, E., Tao, T.: The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35(6), 2313–2351 (2007)

Copyright information

© 2015 Tsinghua University Press, Beijing and Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Li, L. (2015). Norm Approximation and Regularization. In: Selected Applications of Convex Optimization. Springer Optimization and Its Applications, vol 103. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-46356-7_4
