Application of a new accelerated algorithm to regression problems

  • Avinash Dixit
  • D. R. Sahu
  • Amit Kumar Singh
  • T. Som
Methodologies and Application

Abstract

Many iterative algorithms, such as the Picard, Mann, and Ishikawa iterations, are useful for solving fixed point problems of nonlinear operators in real Hilbert spaces. A recent trend is to accelerate their convergence by incorporating inertial terms. The purpose of this paper is to investigate a new inertial iterative algorithm for finding fixed points of nonexpansive operators in the framework of Hilbert spaces. We study the weak convergence of the proposed algorithm under mild assumptions. We then apply the algorithm to design a new accelerated proximal gradient method, which in turn is applied to regression problems. Numerical experiments were conducted on several publicly available high-dimensional regression datasets, comparing the proposed algorithm with existing algorithms in terms of accuracy and objective function values. The results show that the proposed algorithm outperforms the others while keeping the iteration parameters unchanged.
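To make the setting concrete, the following is a minimal sketch of a generic inertial (FISTA-style) proximal gradient method applied to the lasso regression problem min_x 0.5·||Ax − b||² + λ·||x||₁. This is an illustration of the general technique the abstract refers to, not the authors' proposed algorithm; the function names and parameter choices are assumptions for the example.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (applied componentwise)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def inertial_proximal_gradient(A, b, lam, n_iter=200):
    """Generic inertial (FISTA-style) proximal gradient sketch for the
    lasso problem: min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                           # extrapolated (inertial) point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)   # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Inertial extrapolation: push the iterate along the last displacement
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The inertial term `(t - 1) / t_new * (x_new - x)` is what distinguishes accelerated schemes of this kind from the plain proximal gradient iteration, improving the worst-case objective convergence rate from O(1/k) to O(1/k²).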

Keywords

Nonexpansive mapping · S-iteration method · Regression · Composite minimization problems

Notes

Acknowledgements

The first author acknowledges the financial support of the Indian Institute of Technology (Banaras Hindu University) in the form of a teaching assistantship. The third author thankfully acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, India, through the University Grants Commission (UGC), for providing financial assistance in the form of a Junior Research Fellowship through grant (Ref. No. 19/06/2016 (i) EU-V).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematical Sciences, Indian Institute of Technology (BHU), Varanasi, India
  2. Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi, India
