
CG-Lasso Estimator for Multivariate Adaptive Regression Spline

  • Pakize Taylan
  • Gerhard Wilhelm Weber
Chapter
Part of the Nonlinear Systems and Complexity book series (NSCH, volume 24)

Abstract

Multivariate adaptive regression spline (MARS) denotes a modern methodology from statistical learning that is important in both classification and regression. It is very useful for high-dimensional problems and shows great promise for fitting nonlinear multivariate functions through its ability to estimate the contributions of the basis functions, so that both the additive and the interactive effects of the predictors are allowed to determine the response variable. The MARS algorithm for estimating the model function consists of two sub-algorithms. In this paper, we propose not to use the second sub-algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a higher-order Tikhonov regularization problem, also known as ridge regression, which shrinks the coefficients and makes them more stable. However, ridge regression cannot perform variable selection and, hence, does not give an easily interpretable model (especially if the number of variables p is large). For this reason, we replace the Tikhonov penalty with the generalized Lasso penalty when solving the PRSS problem, thereby gaining the advantage of feature selection. We treat this problem using continuous optimization techniques, which we consider an important complementary technology and model-based alternative to the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming (CQP), and we call the resulting estimator CG-Lasso. Here, we benefit from an area of convex optimization whose programs are very well structured, resembling linear programming and, hence, permitting the use of powerful interior point methods (IPMs).
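For concreteness, the generalized Lasso form of the PRSS described above can be written, in the standard notation of Tibshirani and Taylor, as

\min_{\theta}\; \tfrac{1}{2}\,\lVert y - B\theta \rVert_2^2 + \lambda \lVert D\theta \rVert_1,

where B collects the evaluations of the MARS basis functions, D is a structured penalty matrix (D = I recovers the ordinary Lasso), and λ > 0 controls the amount of shrinkage. The following Python sketch is a minimal illustration of this kind of estimator, not the authors' implementation; the data, the matrix D, and the value of λ (lam) are synthetic placeholders, and cvxpy with the ECOS solver stands in for a generic conic quadratic (second-order cone) solver driven by an interior point method.

    import numpy as np
    import cvxpy as cp

    # Synthetic stand-ins: B would hold the MARS basis-function evaluations,
    # y the observed responses, D the generalized-Lasso penalty matrix.
    rng = np.random.default_rng(0)
    n, p = 100, 20
    B = rng.standard_normal((n, p))
    theta_true = np.zeros(p)
    theta_true[:3] = [2.0, -1.5, 1.0]   # a sparse "true" coefficient vector
    y = B @ theta_true + 0.1 * rng.standard_normal(n)

    D = np.eye(p)    # identity D reduces the penalty to the ordinary Lasso
    lam = 0.5        # illustrative shrinkage parameter

    theta = cp.Variable(p)
    # PRSS with an L1 penalty: the solver reformulates this objective as a
    # second-order cone (conic quadratic) program and solves it with an IPM.
    prss = 0.5 * cp.sum_squares(y - B @ theta) + lam * cp.norm(D @ theta, 1)
    cp.Problem(cp.Minimize(prss)).solve(solver=cp.ECOS)
    print(np.round(theta.value, 3))   # small coefficients are shrunk toward zero

Because the L1 penalty drives many coefficients exactly to zero, the fitted model retains only a subset of the basis functions, which is the feature-selection behavior that motivates replacing the Tikhonov penalty.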

Keywords

Multivariate adaptive regression spline · Ridge regression · Continuous optimization · Conic quadratic programming · Interior point methods · Inverse problems · Exterior point methods · Regularization · Singular value decomposition · Convex problems


Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Science Faculty, Dicle University, Diyarbakır, Turkey
  2. Faculty of Engineering Management, Poznan University of Technology, Poznań, Poland
