A Subgradient-Like Algorithm for Solving Vector Convex Inequalities
In this paper, we propose a strongly convergent variant of Robinson’s subgradient algorithm for solving a system of vector convex inequalities in Hilbert spaces. The advantage of the proposed method is that, whenever the problem has solutions, it converges strongly under mild assumptions. The algorithm also enjoys the following desirable property: the generated sequence remains entirely in the intersection of three balls whose radius is less than the initial distance to the solution set, and it converges to the solution of the problem that lies closest to the starting point.
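To fix ideas, the classical building block behind such methods is the Polyak subgradient step for a single convex inequality f(x) ≤ 0: move from x in the direction of a negative subgradient, with step length f(x)/‖g‖². The following is a minimal finite-dimensional sketch of that step, assuming f is the maximum of affine functions; it is only a stand-in for the scalar case, not the paper's method, which works in Hilbert space with vector-valued inequalities and adds extra projections (onto intersections of balls) to force strong convergence.

```python
# Hedged sketch: Polyak subgradient step for one convex inequality
# f(x) <= 0 in R^n. The paper's actual algorithm (Robinson-type, with
# projection steps ensuring strong convergence) is more involved.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def subgradient_step(x, f_val, g):
    """One Polyak step: x - (f(x)_+ / ||g||^2) * g, where g is in the
    subdifferential of f at x. Leaves x unchanged if already feasible."""
    if f_val <= 0:
        return x
    scale = f_val / dot(g, g)
    return [xi - scale * gi for xi, gi in zip(x, g)]

# Illustrative problem (assumed data): f(x) = max_i (a_i . x - b_i),
# whose 0-sublevel set is the polyhedron {x : A x <= b}.
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [1.0, 1.0, 0.0]

def f_and_subgrad(x):
    vals = [dot(a, x) - bi for a, bi in zip(A, b)]
    i = max(range(len(vals)), key=lambda j: vals[j])
    return vals[i], A[i]  # an active row a_i is a subgradient of the max

x = [5.0, 5.0]
for _ in range(200):
    fv, g = f_and_subgrad(x)
    x = subgradient_step(x, fv, g)

fv, _ = f_and_subgrad(x)
print(fv <= 1e-6)  # -> True: the iterate is (approximately) feasible
```

In infinite-dimensional Hilbert spaces this basic iteration yields only weak convergence in general; the projections onto the three balls mentioned in the abstract are precisely what upgrades this to strong convergence toward the solution nearest the starting point.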
Keywords: Projection methods · Strong convergence · Subgradient algorithm · Vector convex functions
The authors were partially supported by Project PROCAD-nf-UFG/UnB/IMPA, by Project PRONEX-CNPq-FAPERJ and by Project CAPES-MES-CUBA 226/2012 “Modelos de Otimização e Aplicações”.
The authors would like to thank the anonymous referees, whose suggestions helped improve the presentation of this paper.