Subgradient-Type Methods with Space Dilation
The analysis of subgradient methods has shown that improvements to the stepsize rules alone cannot, in general, significantly accelerate convergence as long as each iteration moves in the direction opposite to the subgradient. Slow convergence is due to the fact that the subgradient is often almost perpendicular to the direction towards the minimum. In such circumstances the reduction in the distance to the minimum is much smaller than the stepsize, and therefore the stepsizes cannot decrease too rapidly if convergence to a minimum is to be guaranteed.
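The phenomenon described above can be illustrated numerically. The sketch below is not from the text: it assumes a simple ill-conditioned objective f(x) = |x1| + 10|x2| and a divergent-series stepsize rule h_k = 1/k with normalized subgradient directions. Near the "ravine" x2 = 0, the subgradient is almost perpendicular to the direction towards the minimizer (the origin), so after many iterations the iterate is still far from it even though the total steplength is large.

```python
import numpy as np

# Hypothetical ill-conditioned objective (not from the text):
# f(x) = |x1| + 10*|x2|, minimized at the origin.
def f(x):
    return abs(x[0]) + 10.0 * abs(x[1])

def subgradient(x):
    # One valid subgradient of f at x (with sign(0) = 0).
    return np.array([np.sign(x[0]), 10.0 * np.sign(x[1])])

def subgradient_method(x0, num_iters=2000):
    x = np.array(x0, dtype=float)
    for k in range(1, num_iters + 1):
        g = subgradient(x)
        norm_g = np.linalg.norm(g)
        if norm_g == 0.0:          # exact minimizer reached
            break
        # Divergent-series stepsize rule h_k = 1/k: guarantees
        # convergence, but only slowly, matching the discussion above.
        x = x - (1.0 / k) * g / norm_g
    return x

x = subgradient_method([5.0, 1.0])
# The x2-coordinate collapses quickly, but progress along x1 is only
# about (1/sqrt(101)) * sum(1/k), so the iterate stays far from 0.
print(np.linalg.norm(x))
```

The point of the sketch is that the distance to the minimizer shrinks by far less than the accumulated steplength (here roughly ln(num_iters)), which is exactly why stepsize tuning alone cannot cure slow convergence and why space-dilation transformations of the ravine geometry are considered instead.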
Keywords: Minimum Point, Optimal Point, Geometrical Progression, Subgradient Method, Convex Programming Problem