
Fixed Point Theory and Applications, 2009, Article ID 957407

Super-Relaxed $(\eta)$-Proximal Point Algorithms, Relaxed $(\eta)$-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

  • Ravi P. Agarwal
  • Ram U. Verma
Open Access
Review Article

Abstract

We survey recent advances in the general theory of maximal (set-valued) monotone mappings and their role in convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed $(\eta)$-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal $(\eta)$-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well, notably the generalization of the proximal point algorithm of Rockafellar (1976) to the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the over-relaxed (or super-relaxed) $(\eta)$-proximal point algorithm, the fundamental model of Rockafellar's case does the job. Furthermore, we explore possibilities of generalizing the Yosida regularization/approximation in light of maximal $(\eta)$-monotonicity, with applications to first-order evolution equations/inclusions.

Keywords

Iterative procedure, maximal monotone mapping, real Hilbert space, resolvent operator, proximal point algorithm

1. Introduction and Preliminaries

We begin with a real Hilbert space $\mathcal{H}$ with the norm $\|\cdot\|$ and the inner product $\langle \cdot, \cdot \rangle$. We consider the general variational inclusion problem of the following form: find a solution $x \in \mathcal{H}$ to

$$0 \in M(x), \tag{1.1}$$

where $M : \mathcal{H} \to 2^{\mathcal{H}}$ is a set-valued mapping on $\mathcal{H}$.

In the first part, Rockafellar [1] introduced the proximal point algorithm and examined its general convergence and rate of convergence while solving (1.1): when $M$ is maximal monotone, the sequence $\{x^k\}$ generated for an initial point $x^0$ by

$$x^{k+1} \approx J_{c_k}(x^k), \qquad J_{c_k} = (I + c_k M)^{-1}, \tag{1.2}$$

converges weakly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where $\{c_k\}$ is a sequence of positive real numbers bounded away from zero; in the second part, using the first part and further amending the proximal point algorithm, he succeeded in achieving linear convergence. It follows from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem

$$0 \in M(x) + c_k^{-1}(x - x^k). \tag{1.3}$$

As a matter of fact, Rockafellar demonstrated the weak convergence and the strong convergence separately in two theorems; for the strong convergence, however, a further imposition of the Lipschitz continuity of $M^{-1}$ at 0 plays the crucial part. Let us recall these results.
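
To make the iteration (1.2) concrete, here is a minimal numerical sketch, assuming the special case $M = \partial f$ with $f(x) = |x|$ on $\mathcal{H} = \mathbb{R}$, where the resolvent $(I + c\,\partial f)^{-1}$ is the classical soft-thresholding map; the initial point and step sizes below are illustrative choices, not data from [1].

    # Proximal point algorithm (1.2) for M = subdifferential of f(x) = |x|;
    # the resolvent (I + cM)^{-1} is soft-thresholding (illustrative sketch).
    def soft_threshold(x: float, c: float) -> float:
        """Resolvent J_c(x): shrink x toward 0 by c."""
        if x > c:
            return x - c
        if x < -c:
            return x + c
        return 0.0

    x = 5.0          # initial point x^0
    c_k = 1.0        # step sizes bounded away from zero
    for k in range(10):
        x = soft_threshold(x, c_k)   # exact step x^{k+1} = J_{c_k}(x^k)
        print(k, x)
    # The iterates reach the unique zero x* = 0 of M after five steps.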

Theorem 1.1 (see [1]).

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} \approx J_{c_k}(x^k)$$

such that

$$\|x^{k+1} - J_{c_k}(x^k)\| \le \epsilon_k, \qquad \sum_{k=0}^{\infty}\epsilon_k < \infty,$$

where $J_{c_k} = (I + c_k M)^{-1}$, $\epsilon_k \ge 0$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. Then the sequence $\{x^k\}$ converges weakly to a solution of (1.1).

Remark 1.2.

Note that Rockafellar [1], in connection with Theorem 1.1, pointed out by a counterexample that the condition

$$\sum_{k=0}^{\infty}\epsilon_k < \infty$$

is crucial; otherwise we may end up with a nonconvergent sequence even with $\mathcal{H}$ one dimensional and $c_k \equiv 1$. Consider any maximal monotone mapping $M$ such that the set $M^{-1}(0)$, which is known always to be convex, contains more than one element. Then it turns out that there is a nonconvergent sequence $\{x^k\}$ such that

$$\|x^{k+1} - J_{c_k}(x^k)\| \to 0.$$

This situation arises, in particular, when $M = \partial f$ if the convex function $f$ attains its minimum nonuniquely.

Next, unlike Theorem 1.1, we look at [1, Theorem 2], in which Rockafellar achieved linear convergence of the sequence by imposing the Lipschitz continuity of $M^{-1}$ at 0 instead.

Theorem 1.3 (see [1]).

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} \approx J_{c_k}(x^k)$$

such that

$$\|x^{k+1} - J_{c_k}(x^k)\| \le \delta_k\,\|x^{k+1} - x^k\|, \qquad \sum_{k=0}^{\infty}\delta_k < \infty,$$

where $J_{c_k} = (I + c_k M)^{-1}$, $\delta_k \ge 0$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. In addition, let $M^{-1}$ be Lipschitz continuous at 0 with modulus $a \ge 0$. Then the sequence $\{x^k\}$ converges strongly, indeed linearly, to $x^*$, the unique solution to (1.1).

Later on, Rockafellar [1] applied Theorem 1.1 to the minimization of a function $f$, where $f : \mathcal{H} \to (-\infty, +\infty]$ is lower semicontinuous, convex, and proper, by taking $M = \partial f$, the subdifferential of $f$. It is well known that in this situation $\partial f$ is maximal monotone, and further

$$0 \in \partial f(x) \iff f(x) = \min_{z \in \mathcal{H}} f(z).$$

As a specialization, we have

$$x^{k+1} \approx \operatorname*{arg\,min}_{z \in \mathcal{H}}\left\{ f(z) + \frac{1}{2c_k}\,\|z - x^k\|^2 \right\}.$$

That means the proximal point algorithm for $M = \partial f$ is a minimizing method for $f$.
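
For instance (an illustrative computation, not an example from [1]), taking $f(z) = \frac{1}{2}\|z\|^2$, the proximal step can be evaluated in closed form:

$$J_{c_k}(x^k) = \operatorname*{arg\,min}_{z \in \mathcal{H}}\left\{ \tfrac{1}{2}\|z\|^2 + \tfrac{1}{2c_k}\|z - x^k\|^2 \right\} = \frac{x^k}{1 + c_k},$$

so the exact iteration $x^{k+1} = x^k/(1 + c_k)$ decreases $f$ geometrically toward its unique minimizer $0$.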

There is an abundance of literature on proximal point algorithms with applications, mostly following the work of Rockafellar [1], but here we focus on the work of Eckstein and Bertsekas [2], who relaxed the proximal point algorithm in the following form and applied it to the Douglas-Rachford splitting method. Let us now have a look at the relaxed proximal point algorithm introduced and studied in [2].

Algorithm 1.4.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J_{c_k}(x^k)\| \le \epsilon_k,$$

where $J_{c_k} = (I + c_k M)^{-1}$, and $\{\epsilon_k\}, \{\alpha_k\}, \{c_k\} \subseteq [0, \infty)$ are scalar sequences.

As a matter of fact, Eckstein and Bertsekas [2] applied Algorithm 1.4 to approximate a weak solution to (1.1); in other words, they established Theorem 1.1 using the relaxed proximal point algorithm instead.

Theorem 1.5 (see [2, Theorem  3]).

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by Algorithm 1.4. If the scalar sequences $\{\epsilon_k\}$, $\{\alpha_k\}$, and $\{c_k\}$ satisfy

$$\sum_{k=0}^{\infty}\epsilon_k < \infty, \qquad 0 < \inf_k \alpha_k \le \sup_k \alpha_k < 2, \qquad \inf_k c_k > 0,$$

then the sequence $\{x^k\}$ converges weakly to a zero of $M$.
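
As a quick illustration of Theorem 1.5 (a sketch with assumed data, not an example from [2]), consider the maximal monotone skew operator $M(x) = Ax$ on $\mathbb{R}^2$, which is not a subdifferential; the resolvent is a linear solve, and any fixed relaxation factor with $0 < \alpha_k < 2$ drives the iterates to the unique zero.

    import numpy as np

    # Relaxed proximal point algorithm (Algorithm 1.4), exact resolvent steps,
    # for the maximal monotone skew operator M(x) = A x; illustrative data only.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    I = np.eye(2)

    x = np.array([1.0, 1.0])    # initial point x^0
    c_k, alpha_k = 1.0, 1.5     # inf c_k > 0 and 0 < alpha_k < 2 (Theorem 1.5)
    for k in range(60):
        y = np.linalg.solve(I + c_k * A, x)   # y^k = (I + c_k M)^{-1} x^k
        x = (1 - alpha_k) * x + alpha_k * y   # relaxation step
    print(np.linalg.norm(x))    # near 0, the unique zero of M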

Convergence analysis for Algorithm 1.4 is achieved using the firm nonexpansiveness of the resolvent operator $J_c = (I + cM)^{-1}$. They did not, however, consider applying Algorithm 1.4 to Theorem 1.3, the case of linear convergence. The mere nonexpansiveness of the resolvent operator poses the prime difficulty for algorithmic convergence, and perhaps this was the real steering for Rockafellar toward the Lipschitz continuity of $M^{-1}$ instead. That is why the Yosida approximation turned out to be more effective in this scenario: the Yosida approximation

$$M_c = c^{-1}\bigl(I - (I + cM)^{-1}\bigr)$$

takes care of the Lipschitz continuity issue.
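
To see this effect concretely (a hedged sketch for the illustrative choice $M = \partial f$ with $f(x) = |x|$), $M$ is set valued at $0$, yet its Yosida approximation is the single-valued saturation map $M_c(x) = \max(-1, \min(1, x/c))$, which is Lipschitz continuous with constant $1/c$:

    # Yosida approximation M_c = (I - (I + cM)^{-1}) / c for M = subdifferential(|.|).
    def resolvent(x: float, c: float) -> float:
        """(I + cM)^{-1}: soft-thresholding."""
        return max(x - c, 0.0) if x >= 0 else min(x + c, 0.0)

    def yosida(x: float, c: float) -> float:
        """M_c(x) = (x - resolvent(x, c)) / c, i.e., clipping x/c to [-1, 1]."""
        return (x - resolvent(x, c)) / c

    c = 0.5
    for x in [-2.0, -0.25, 0.0, 0.25, 2.0]:
        print(x, yosida(x, c))   # saturates at -1 and +1; single valued and Lipschitz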

As we look back into the literature, general maximal monotonicity has played a central role in the study of convex programming as well as variational inequalities/inclusions. It later turned out that one of the most fundamental algorithms applied to solve these problems is the proximal point algorithm. In [2], Eckstein and Bertsekas have shown that much of the theory of the relaxed proximal point algorithm and related algorithms carries over to the Douglas-Rachford splitting method and its specializations, for instance, the alternating direction method of multipliers.

Just recently, Verma [3] generalized the relaxed proximal point algorithm and applied it to the approximation solvability of variational inclusion problems of the form (1.1). A great deal of recent research on the solvability of inclusion problems is carried out using resolvent operator techniques, which have applications to other problems, such as equilibrium problems in economics, optimization and control theory, operations research, and mathematical programming.

In this survey, we first discuss in detail the history of proximal point algorithms and their applications to general nonlinear variational inclusion problems, and then we recall some significant developments, especially the relaxation of proximal point algorithms with applications to the Douglas-Rachford splitting method. At the second stage, we turn our attention to over-relaxed proximal point algorithms and their contribution to linear convergence. We start with some introductory material on the over-relaxed $(\eta)$-proximal point algorithm based on the notion of maximal $(\eta)$-monotonicity, and recall some investigations on the approximation solvability of a general class of nonlinear inclusion problems involving maximal $(\eta)$-monotone mappings in a Hilbert space setting. As a matter of fact, we examine the convergence analysis of the over-relaxed $(\eta)$-proximal point algorithm for solving a class of nonlinear inclusions. Also, several results on generalized firm nonexpansiveness and the generalized resolvent mapping are given. Furthermore, we explore the real impact of recently obtained results on the celebrated work of Rockafellar, most importantly in the case of over-relaxed (or super-relaxed) proximal point algorithms. For more details, we refer the reader to [1–55].

We note that the solution set for (1.1) turns out to be the same as that of the Yosida inclusion

$$0 = M_\rho(x),$$

where $M_\rho = \rho^{-1}\bigl(I - (I + \rho M)^{-1}\bigr)$ is the Yosida regularization of $M$, while there is an equivalent form $\bigl(M^{-1} + \rho I\bigr)^{-1}$, which is characterized as the Yosida approximation of $M$ with parameter $\rho > 0$. It seems in certain ways that it is easier to solve the Yosida inclusion than (1.1). In other words, $M_\rho$ provides better solvability conditions under the right choice of $\rho$ than $M$ itself. To substantiate this assertion, let us recall the following existence theorem.

Theorem 1.6.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal monotone mapping on $\mathcal{H}$. Then the following statements are equivalent.

(i) An element $u \in \mathcal{H}$ is a solution to $0 \in M(u)$.

(ii) $M_\rho(u) = 0$.

Assume that $u$ is a solution to $0 \in M(u)$. Then we have

$$u = (I + \rho M)^{-1}(u), \qquad \text{and hence} \qquad M_\rho(u) = \rho^{-1}\bigl(u - (I + \rho M)^{-1}(u)\bigr) = 0.$$

On the other hand, $M_\rho$ has also been applied to first-order evolution equations/inclusions in Hilbert space as well as in Banach space settings. As in our present situation the resolvent operator is empowered by maximal $(\eta)$-monotonicity, the Yosida approximation can be generalized to the context of solving first-order evolution equations/inclusions. In Zeidler [52, Lemma 31.7] it is shown that the Yosida approximation $M_\rho$ is $(2/\rho)$-Lipschitz continuous, that is,

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{2}{\rho}\,\|u - v\| \quad \forall u, v \in \mathcal{H},$$

where this inequality is based on the nonexpansiveness of the resolvent operator $J_\rho = (I + \rho M)^{-1}$; the result, however, does not seem to be much application oriented. If instead we apply the firm nonexpansiveness of the resolvent operator, we can achieve, as applied in [5], the more application-oriented result

$$\langle M_\rho(u) - M_\rho(v), u - v \rangle \ge \rho\,\|M_\rho(u) - M_\rho(v)\|^2 \quad \forall u, v \in \mathcal{H},$$

where the Lipschitz constant is $1/\rho$.

Proof.

Based on the equality $M_\rho(u) = \rho^{-1}\bigl(u - J_\rho(u)\bigr)$ and the firm nonexpansiveness of $J_\rho$, we derive

$$\rho^2\,\|M_\rho(u) - M_\rho(v)\|^2 = \|u - v\|^2 - 2\,\langle J_\rho(u) - J_\rho(v), u - v \rangle + \|J_\rho(u) - J_\rho(v)\|^2 \le \|u - v\|^2 - \langle J_\rho(u) - J_\rho(v), u - v \rangle = \rho\,\langle M_\rho(u) - M_\rho(v), u - v \rangle.$$

Thus, we have

$$\langle M_\rho(u) - M_\rho(v), u - v \rangle \ge \rho\,\|M_\rho(u) - M_\rho(v)\|^2,$$

and, by the Cauchy-Schwarz inequality, $\|M_\rho(u) - M_\rho(v)\| \le \rho^{-1}\,\|u - v\|$.

This completes the proof.

We note that, from the applications' point of view, the cocoercive form

$$\langle M_\rho(u) - M_\rho(v), u - v \rangle \ge \rho\,\|M_\rho(u) - M_\rho(v)\|^2,$$

that is, $M_\rho$ is $(\rho)$-cocoercive, is relatively more useful than the nonexpansive (Lipschitzian) form

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{2}{\rho}\,\|u - v\|.$$

It is well known that when $M$ is maximal monotone, the resolvent operator $J_\rho = (I + \rho M)^{-1}$ is single valued and globally Lipschitz continuous with the best constant 1, that is, nonexpansive. Furthermore, the inverse resolvent identity is satisfied:

$$J^M_\rho(u) + \rho\, J^{M^{-1}}_{\rho^{-1}}(u/\rho) = u \quad \forall u \in \mathcal{H}.$$

Indeed, the Yosida approximation $M_\rho = \rho^{-1}\bigl(I - J^M_\rho\bigr)$ and its equivalent form $\bigl(M^{-1} + \rho I\bigr)^{-1}$ are related to this identity. Let us consider

$$M_\rho(u) = \rho^{-1}\bigl(u - J^M_\rho(u)\bigr) = \bigl(M^{-1} + \rho I\bigr)^{-1}(u).$$

Suppose that $w = \bigl(M^{-1} + \rho I\bigr)^{-1}(u)$; then we have

$$u \in M^{-1}(w) + \rho w \iff u - \rho w \in M^{-1}(w) \iff w \in M(u - \rho w) \iff u \in (I + \rho M)(u - \rho w),$$

so that $u - \rho w = J^M_\rho(u)$ and $w = \rho^{-1}\bigl(u - J^M_\rho(u)\bigr) = M_\rho(u)$.

On the other hand, we have the inverse resolvent identity that lays the foundation of the Yosida approximation.

Lemma 1.7 (see [26, Lemma 12.14]).

For a maximal monotone mapping $M$ on $\mathcal{H}$ and $\rho > 0$, one has

$$(I + \rho M)^{-1}(u) + \rho\,\bigl(\rho I + M^{-1}\bigr)^{-1}(u) = u \quad \forall u \in \mathcal{H}.$$

Proof.

We include the proof, though it is similar to that of the above identity. Assume that $w = \bigl(\rho I + M^{-1}\bigr)^{-1}(u)$; then we have $u \in \rho w + M^{-1}(w)$, that is, $u - \rho w \in M^{-1}(w)$, so $w \in M(u - \rho w)$ and hence $u \in (I + \rho M)(u - \rho w)$. Therefore $u - \rho w = (I + \rho M)^{-1}(u)$, that is,

$$(I + \rho M)^{-1}(u) + \rho\,\bigl(\rho I + M^{-1}\bigr)^{-1}(u) = u,$$

which is the required assertion.

Note that when $M$ is maximal monotone, the mappings

$$(I + \rho M)^{-1} \qquad \text{and} \qquad \bigl(I + \rho M^{-1}\bigr)^{-1}$$

are single valued, in fact maximal monotone and nonexpansive.
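
The identity of Lemma 1.7 is easy to verify numerically; the following sketch checks it for the illustrative linear operator $M(x) = \lambda x$ with $\lambda > 0$, whose resolvents are explicit.

    # Check (I + rho M)^{-1}(u) + rho (rho I + M^{-1})^{-1}(u) = u for M(x) = lam * x.
    lam, rho = 3.0, 0.7    # illustrative constants
    for u in [-2.0, 0.5, 4.0]:
        j_m = u / (1.0 + rho * lam)      # (I + rho M)^{-1}(u)
        j_inv = u / (rho + 1.0 / lam)    # (rho I + M^{-1})^{-1}(u), since M^{-1}(y) = y / lam
        assert abs(j_m + rho * j_inv - u) < 1e-12
    print("inverse resolvent identity verified")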

The contents of the paper are organized as follows. Section 1 deals with the general historical development of the relaxed proximal point algorithm and its variants in conjunction with maximal $(\eta)$-monotonicity, and with the approximation solvability of a class of nonlinear inclusion problems using the convergence analysis for the proximal point algorithm as well as for the relaxed proximal point algorithm. Section 2 introduces and derives some results on unifying maximal $(\eta)$-monotonicity and the generalized firm nonexpansiveness of the generalized resolvent operator. In Section 3, the role of the over-relaxed $(\eta)$-proximal point algorithm is examined in detail in terms of its applications to approximating the solution of the inclusion problem (1.1). Finally, Section 4 deals with some important specializations that connect the results to general maximal monotonicity, especially to several aspects of linear convergence.

2. General Maximal η-Monotonicity

In this section we discuss some results based on basic properties of maximal $(\eta)$-monotonicity, and then we derive some results involving $(\eta)$-monotonicity and generalized firm nonexpansiveness. Let $\mathcal{H}$ denote a real Hilbert space with the norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a multivalued mapping on $\mathcal{H}$. We will denote both the map $M$ and its graph by $M$, that is, the set $\{(x, y) : y \in M(x)\}$. This is equivalent to stating that a mapping is any subset $M$ of $\mathcal{H} \times \mathcal{H}$, with $M(x) = \{y : (x, y) \in M\}$. If $M$ is single valued, we will still use $M(x)$ to represent the unique $y$ such that $(x, y) \in M$ rather than the singleton set $\{y\}$. This interpretation will much depend on the context. The domain of a map $M$ is defined (as its projection onto the first argument) by

$$\operatorname{dom}(M) = \{x \in \mathcal{H} : \exists\, y \in \mathcal{H} \text{ such that } (x, y) \in M\} = \{x \in \mathcal{H} : M(x) \ne \emptyset\}.$$

$\operatorname{dom}(M) = \mathcal{H}$ will denote the full domain of $M$, and the range of $M$ is defined by

$$\operatorname{range}(M) = \{y \in \mathcal{H} : \exists\, x \in \mathcal{H} \text{ such that } (x, y) \in M\}.$$

Definition 2.1.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a multivalued mapping on $\mathcal{H}$. The map $M$ is said to be

(i) monotone if

$$\langle u^* - v^*, u - v \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(ii) $(r)$-strongly monotone if there exists a positive constant $r$ such that

$$\langle u^* - v^*, u - v \rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(iii) strongly monotone if

$$\langle u^* - v^*, u - v \rangle \ge \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(iv) $(r)$-strongly pseudomonotone if

$$\langle v^*, u - v \rangle \ge 0 \implies \langle u^*, u - v \rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(v) pseudomonotone if

$$\langle v^*, u - v \rangle \ge 0 \implies \langle u^*, u - v \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(vi) $(m)$-relaxed monotone if there exists a positive constant $m$ such that

$$\langle u^* - v^*, u - v \rangle \ge -m\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(vii) cocoercive if

$$\langle u^* - v^*, u - v \rangle \ge \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(viii) $(c)$-cocoercive if there is a positive constant $c$ such that

$$\langle u^* - v^*, u - v \rangle \ge c\,\|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M).$$

Definition 2.2.

Let $T : \mathcal{H} \to \mathcal{H}$ be a mapping on $\mathcal{H}$. The map $T$ is said to be

(i) nonexpansive if

$$\|T(u) - T(v)\| \le \|u - v\| \quad \forall u, v \in \mathcal{H};$$

(ii) firmly nonexpansive if

$$\|T(u) - T(v)\|^2 \le \langle T(u) - T(v), u - v \rangle \quad \forall u, v \in \mathcal{H};$$

(iii) $(c)$-firmly nonexpansive if there exists a constant $c > 0$ such that

$$\|T(u) - T(v)\|^2 \le c\,\langle T(u) - T(v), u - v \rangle \quad \forall u, v \in \mathcal{H}.$$

In light of Definitions 2.1(vii) and 2.2(ii), the notions of cocoercivity and firm nonexpansiveness coincide, but they differ in applications much depending on the context.
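
Indeed, firm nonexpansiveness implies nonexpansiveness by the Cauchy-Schwarz inequality:

$$\|T(u) - T(v)\|^2 \le \langle T(u) - T(v), u - v \rangle \le \|T(u) - T(v)\|\,\|u - v\|,$$

so that $\|T(u) - T(v)\| \le \|u - v\|$. The converse fails in general; for instance, a rotation of the plane by a right angle is nonexpansive but not firmly nonexpansive, since $\langle T(u) - T(v), u - v \rangle = 0$ for $u \ne v$.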

Definition 2.3.

A map $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ is said to be

(i) monotone if

$$\langle u - v, \eta(u, v) \rangle \ge 0 \quad \forall u, v \in \mathcal{H};$$

(ii) $(t)$-strongly monotone if there exists a positive constant $t$ such that

$$\langle u - v, \eta(u, v) \rangle \ge t\,\|u - v\|^2 \quad \forall u, v \in \mathcal{H};$$

(iii) strongly monotone if

$$\langle u - v, \eta(u, v) \rangle \ge \|u - v\|^2 \quad \forall u, v \in \mathcal{H};$$

(iv) $(\tau)$-Lipschitz continuous if there exists a positive constant $\tau$ such that

$$\|\eta(u, v)\| \le \tau\,\|u - v\| \quad \forall u, v \in \mathcal{H}.$$

Definition 2.4.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a multivalued mapping on $\mathcal{H}$, and let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be another mapping. The map $M$ is said to be

(i) $(\eta)$-monotone if

$$\langle u^* - v^*, \eta(u, v) \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(ii) $(r, \eta)$-strongly monotone if there exists a positive constant $r$ such that

$$\langle u^* - v^*, \eta(u, v) \rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(iii) $(\eta)$-strongly monotone if

$$\langle u^* - v^*, \eta(u, v) \rangle \ge \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(iv) $(r, \eta)$-strongly pseudomonotone if

$$\langle v^*, \eta(u, v) \rangle \ge 0 \implies \langle u^*, \eta(u, v) \rangle \ge r\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(v) $(\eta)$-pseudomonotone if

$$\langle v^*, \eta(u, v) \rangle \ge 0 \implies \langle u^*, \eta(u, v) \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(vi) $(m, \eta)$-relaxed monotone if there exists a positive constant $m$ such that

$$\langle u^* - v^*, \eta(u, v) \rangle \ge -m\,\|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M);$$

(vii) $(c, \eta)$-cocoercive if there is a positive constant $c$ such that

$$\langle u^* - v^*, \eta(u, v) \rangle \ge c\,\|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M).$$

Definition 2.5.

A map $M : \mathcal{H} \to 2^{\mathcal{H}}$ is said to be maximal $(\eta)$-monotone if

(1) $M$ is $(\eta)$-monotone,

(2) $R(I + \rho M) = \mathcal{H}$ for $\rho > 0$.

Proposition 2.6.

Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be a $(t)$-strongly monotone mapping, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a maximal $(\eta)$-monotone mapping. Then $I + \rho M$ is maximal $(\eta)$-monotone for $\rho > 0$, where $I$ is the identity mapping.

Proof.

The proof follows on applying Definition 2.5.

Proposition 2.7 (see [4]).

Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone. Then the generalized resolvent operator $(I + \rho M)^{-1}$ is single valued, where $I$ is the identity mapping.

Proof.

Assume that, for some $x \in \mathcal{H}$, there are $u, v \in (I + \rho M)^{-1}(x)$. Then $\rho^{-1}(x - u) \in M(u)$ and $\rho^{-1}(x - v) \in M(v)$. Now using the $(\eta)$-monotonicity of $M$, it follows that

$$\langle (x - u) - (x - v), \eta(u, v) \rangle = -\langle u - v, \eta(u, v) \rangle \ge 0.$$

Since $\eta$ is $(t)$-strongly monotone, $\langle u - v, \eta(u, v) \rangle \ge t\,\|u - v\|^2$, which implies $u = v$. Thus, $(I + \rho M)^{-1}$ is single valued.

Definition 2.8.

Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone. Then the generalized resolvent operator $J^M_\rho : \mathcal{H} \to \mathcal{H}$ is defined by

$$J^M_\rho(u) = (I + \rho M)^{-1}(u).$$

Proposition 2.9 (see [4]).

Let $\mathcal{H}$ be a real Hilbert space, let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone. Then the resolvent operator associated with $M$ and defined by

$$J^M_\rho(u) = (I + \rho M)^{-1}(u)$$

satisfies the following:

$$\langle u - v, \eta(J^M_\rho(u), J^M_\rho(v)) \rangle \ge t\,\|J^M_\rho(u) - J^M_\rho(v)\|^2 \quad \forall u, v \in \mathcal{H}.$$

Proof.

For any $u, v \in \mathcal{H}$, it follows from the definition of the resolvent operator $J^M_\rho$ that

$$\rho^{-1}\bigl(u - J^M_\rho(u)\bigr) \in M(J^M_\rho(u)), \qquad \rho^{-1}\bigl(v - J^M_\rho(v)\bigr) \in M(J^M_\rho(v)). \tag{2.36}$$

In light of (2.36) and the $(\eta)$-monotonicity of $M$, we have

$$\langle (u - J^M_\rho(u)) - (v - J^M_\rho(v)), \eta(J^M_\rho(u), J^M_\rho(v)) \rangle \ge 0,$$

and hence, by the $(t)$-strong monotonicity of $\eta$,

$$\langle u - v, \eta(J^M_\rho(u), J^M_\rho(v)) \rangle \ge \langle J^M_\rho(u) - J^M_\rho(v), \eta(J^M_\rho(u), J^M_\rho(v)) \rangle \ge t\,\|J^M_\rho(u) - J^M_\rho(v)\|^2.$$

Proposition 2.10 (see [4]).

Let $\mathcal{H}$ be a real Hilbert space, let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone.

If, in addition, (for $\tau > 0$) $\eta$ is $(\tau)$-Lipschitz continuous, then

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \frac{\tau}{t}\,\|u - v\| \quad \forall u, v \in \mathcal{H}. \tag{2.39}$$

Proof.

We include the proof for the sake of completeness. To prove (2.39), we apply the $(\tau)$-Lipschitz continuity of $\eta$ to Proposition 2.9, and we get

$$t\,\|J^M_\rho(u) - J^M_\rho(v)\|^2 \le \langle u - v, \eta(J^M_\rho(u), J^M_\rho(v)) \rangle \le \tau\,\|u - v\|\,\|J^M_\rho(u) - J^M_\rho(v)\|.$$

It further follows that

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \frac{\tau}{t}\,\|u - v\|.$$

When $\tau = t$ in Proposition 2.10, we have the following.

Proposition 2.11.

Let $\mathcal{H}$ be a real Hilbert space, let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone.

If, in addition, one supposes that $\eta$ is $(t)$-Lipschitz continuous, then

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \|u - v\| \quad \forall u, v \in \mathcal{H},$$

that is, $J^M_\rho$ is nonexpansive.

For $t = 1$ and $\tau > 0$ in Proposition 2.10, we find a result of interest as follows.

Proposition 2.12.

Let $\mathcal{H}$ be a real Hilbert space, let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $\eta$ be strongly monotone.

If, in addition, one supposes (for $\tau > 0$) that $\eta$ is $(\tau)$-Lipschitz continuous, then

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \tau\,\|u - v\| \quad \forall u, v \in \mathcal{H}.$$

For $\tau = t = 1$ in Proposition 2.10, we have the following result.

Proposition 2.13.

Let $\mathcal{H}$ be a real Hilbert space, let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $\eta$ be strongly monotone.

If, in addition, one assumes that $\eta$ is $(1)$-Lipschitz continuous, then

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \|u - v\| \quad \forall u, v \in \mathcal{H},$$

that is, $J^M_\rho$ is nonexpansive.

3. The Over-Relaxed (η)-Proximal Point Algorithm

This section deals with the over-relaxed $(\eta)$-proximal point algorithm and its application to the approximation solvability of the inclusion problem (1.1) based on maximal $(\eta)$-monotonicity. Furthermore, some results connecting $(\eta)$-monotonicity and the corresponding resolvent operator are established that generalize the results on firm nonexpansiveness [2], while auxiliary results on maximal $(\eta)$-monotonicity and general maximal monotonicity are obtained.

Theorem 3.1.

Let $\mathcal{H}$ be a real Hilbert space, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone. Then the following statements are mutually equivalent.

(i) An element $u \in \mathcal{H}$ is a solution to (1.1).

(ii) $u = J^M_\rho(u)$ for $\rho > 0$, where $J^M_\rho = (I + \rho M)^{-1}$.

Proof.

It follows from the definition of the generalized resolvent operator corresponding to $M$: $u = J^M_\rho(u) \iff u \in u + \rho M(u) \iff 0 \in M(u)$.

Note that Theorem 3.1 generalizes [2, Lemma 2] to the case of a maximal $(\eta)$-monotone mapping.

Next, we present a generalization of the relaxed proximal point algorithm [3] based on maximal $(\eta)$-monotonicity.

Algorithm 3.2 (see [4]).

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal $(\eta)$-monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, and $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\} \subseteq [0, \infty)$ are scalar sequences such that $\sum_{k=0}^{\infty}\delta_k < \infty$.

Algorithm 3.3.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal $(\eta)$-monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \epsilon_k,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, and $\{\epsilon_k\}, \{\alpha_k\}, \{\rho_k\} \subseteq [0, \infty)$ are scalar sequences such that $\sum_{k=0}^{\infty}\epsilon_k < \infty$.

For $\delta_k \equiv 0$ in Algorithm 3.2, we have the following.

Algorithm 3.4.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal $(\eta)$-monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,J^M_{\rho_k}(x^k),$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, and $\{\alpha_k\}, \{\rho_k\} \subseteq [0, \infty)$ are scalar sequences.
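
The following sketch (with illustrative data, not the general setting of [4]) indicates why over-relaxation factors $\alpha_k > 2$, which Theorem 1.5 excludes, can still converge when $M$ carries strong monotonicity: for the scalar operator $M(x) = \mu x$, the exact iteration of Algorithm 3.4 contracts precisely when $\alpha_k < 2 + 2/(\rho_k \mu)$.

    # Super-relaxed exact proximal point step (shape of Algorithm 3.4) for the
    # strongly monotone scalar operator M(x) = mu * x; all constants illustrative.
    mu, rho_k, alpha_k = 1.0, 1.0, 2.5    # note alpha_k > 2
    x = 1.0                               # initial point x^0
    for k in range(25):
        y = x / (1.0 + rho_k * mu)        # J_{rho_k}(x^k) = (I + rho_k M)^{-1} x^k
        x = (1 - alpha_k) * x + alpha_k * y
    print(abs(x))   # contraction factor |1 - alpha_k * rho_k * mu / (1 + rho_k * mu)| = 0.25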

In the following result [4], we observe that Theorems 1.1 and 1.3 are unified and generalized to the case of maximal $(\eta)$-monotonicity and the super-relaxed proximal point algorithm. We also notice that this result, in certain respects, demonstrates the importance of firm nonexpansiveness rather than mere nonexpansiveness.

Theorem 3.5 (see [4]).

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $x^*$ be a zero of $M$. Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone. Furthermore, assume (for $\tau > 0$) that $\eta$ is $(\tau)$-Lipschitz continuous, so that, by Proposition 2.10,

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \frac{\tau}{t}\,\|u - v\| \quad \forall u, v \in \mathcal{H}.$$

Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, $\delta_k, \alpha_k, \rho_k \ge 0$, $\sum_{k=0}^{\infty}\delta_k < \infty$, and $\{\rho_k\}$ is bounded away from zero.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\}$ are scalar sequences such that $\delta_k \to 0$ and $\alpha_k \ge 1$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the factors $\theta_k$ are determined by $t$, $\tau$, $\gamma$, and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, which satisfy $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$; the explicit expression for $\theta$ in terms of these constants is derived in [4].
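
Here linear convergence is meant in the usual sense: there are factors $\theta_k$ with

$$\|x^{k+1} - x^*\| \le \theta_k\,\|x^k - x^*\|, \qquad \limsup_{k \to \infty}\theta_k = \theta < 1,$$

so that $\|x^k - x^*\| = O\bigl((\theta')^k\bigr)$ for every $\theta' \in (\theta, 1)$.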

Proof.

Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

$$T_k = (1 - \alpha_k)I + \alpha_k J^M_{\rho_k}.$$

Therefore, $T_k(x^*) = x^*$. Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of $T_k$, and hence a zero of $M$.

Next, the proof of (3.17) follows from a regular manipulation and the following equality:

$$x^{k+1} - x^* = (1 - \alpha_k)(x^k - x^*) + \alpha_k(y^k - x^*).$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $x^{k+1}$ in light of Algorithm 3.2 as

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|.$$

Now we begin verifying the boundedness of the sequence $\{x^k\}$, leading to $\|x^k - x^*\| \to 0$.

Next, we estimate using Proposition 2.10 (for $\tau > 0$)

$$\|J^M_{\rho_k}(x^k) - x^*\| = \|J^M_{\rho_k}(x^k) - J^M_{\rho_k}(x^*)\| \le \frac{\tau}{t}\,\|x^k - x^*\|.$$

Since, under the assumptions, $\|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|$, it follows that

$$\|y^k - x^*\| \le \|J^M_{\rho_k}(x^k) - x^*\| + \delta_k\,\|y^k - x^k\|,$$

where $\delta_k \to 0$.

Moreover,

$$\|x^{k+1} - x^*\| \le |1 - \alpha_k|\,\|x^k - x^*\| + \alpha_k\,\|y^k - x^*\|.$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$,

$$\|x^{k+1} - x^*\| \le \left(|1 - \alpha_k| + \alpha_k\,\frac{\tau}{t}\right)\|x^k - x^*\| + \alpha_k\delta_k\,\|y^k - x^k\|.$$

Thus, the sequence $\{x^k\}$ is bounded.

We further examine the estimate based on the generalized firm nonexpansiveness of $J^M_{\rho_k}$ (Proposition 2.9), which yields

$$\|x^k - J^M_{\rho_k}(x^k)\| \to 0,$$

that is, $w^k := \rho_k^{-1}\bigl(x^k - J^M_{\rho_k}(x^k)\bigr) \to 0$.

Now we turn our attention (using the previous argument) to linear convergence of the sequence $\{x^k\}$. Since $w^k \to 0$, it implies for $k$ large that $\|w^k\| \le \delta$. Moreover, $w^k \in M(J^M_{\rho_k}(x^k))$ for $J^M_{\rho_k} = (I + \rho_k M)^{-1}$ and $\rho_k > 0$. Therefore, in light of (3.19), by taking $x = J^M_{\rho_k}(x^k)$ and $w = w^k$ in the Lipschitz continuity of $M^{-1}$ at 0, we have

$$\|J^M_{\rho_k}(x^k) - x^*\| \le \gamma\,\|w^k\| = \frac{\gamma}{\rho_k}\,\|x^k - J^M_{\rho_k}(x^k)\|.$$

Applying (3.17), we arrive at

$$\|x^{k+1} - x^*\| \le \theta_k\,\|x^k - x^*\|,$$

where $\limsup_{k \to \infty}\theta_k = \theta < 1$.

Since $\sum_{k=0}^{\infty}\delta_k < \infty$, we estimate using (3.32) and $\delta_k \to 0$ that the inexactness does not degrade the rate, that is,

$$\limsup_{k \to \infty}\frac{\|x^{k+1} - x^*\|}{\|x^k - x^*\|} \le \theta,$$

where $\theta < 1$.

Hence, we have linear convergence for $k$ large and $\theta < 1$. Since Algorithm 3.2 ensures

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k,$$

it follows that the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$, for setting $\theta = \limsup_{k \to \infty}\theta_k$.

Theorem 3.6.

Let $\mathcal{H}$ be a real Hilbert space, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone. Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone. For an arbitrarily chosen initial point $x^0$, let the sequence $\{x^k\}$ be bounded (in the sense that there exists at least one solution to $0 \in M(x)$) and generated by Algorithm 3.3 as

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \epsilon_k,$$

where the scalar sequences $\{\epsilon_k\}$, $\{\alpha_k\}$, and $\{\rho_k\}$ satisfy $\sum_{k=0}^{\infty}\epsilon_k < \infty$, $\inf_k \alpha_k > 0$, $\sup_k \alpha_k < 2$, and $\inf_k \rho_k > 0$.

In addition, one assumes (for $\tau > 0$) that $\eta$ is $(\tau)$-Lipschitz continuous.

Then the sequence $\{x^k\}$ converges weakly to a solution of (1.1).

Proof.

The proof is similar to that of the first part of Theorem 3.5 on applying the generalized representation lemma.

Theorem 3.7.

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $x^*$ be a zero of $M$. Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,J^M_{\rho_k}(x^k),$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, $\alpha_k, \rho_k \ge 0$, and $\{\rho_k\}$ is bounded away from zero.

Furthermore, assume (for $\tau > 0$) that $\eta$ is $(\tau)$-Lipschitz continuous.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

In addition, assume that the sequence $\{x^k\}$ is generated by Algorithm 3.4 as well, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\alpha_k\}$ and $\{\rho_k\}$ are scalar sequences such that $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the $\theta_k$ are determined by $t$, $\tau$, $\gamma$, and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, as in Theorem 3.5.

Proof.

The proof is similar to that of Theorem 3.5.

4. Some Specializations

Finally, we examine some significant specializations of Theorem 3.5 in this section. Let us start with the case $\tau = t$, that is, $\eta$ is $(t)$-strongly monotone and $(t)$-Lipschitz continuous, applying Proposition 2.11.

Theorem 4.1.

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $x^*$ be a zero of $M$. Let $\eta : \mathcal{H} \times \mathcal{H} \to \mathcal{H}$ be $(t)$-strongly monotone. Furthermore, assume that $\eta$ is $(t)$-Lipschitz continuous, so that, by Proposition 2.11,

$$\|J^M_\rho(u) - J^M_\rho(v)\| \le \|u - v\| \quad \forall u, v \in \mathcal{H}. \tag{4.1}$$

Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, $\delta_k, \alpha_k, \rho_k \ge 0$, $\sum_{k=0}^{\infty}\delta_k < \infty$, and $\{\rho_k\}$ is bounded away from zero.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\}$ are scalar sequences such that $\delta_k \to 0$ and $\alpha_k \ge 1$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the $\theta_k$ are determined by $\gamma$ and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, which satisfy $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$.

Proof.

We need to include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

$$T_k = (1 - \alpha_k)I + \alpha_k J^M_{\rho_k}.$$

Therefore, $T_k(x^*) = x^*$. Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of $T_k$, and hence a zero of $M$.

Next, the proof of (4.4) follows from a regular manipulation and the following equality:

$$x^{k+1} - x^* = (1 - \alpha_k)(x^k - x^*) + \alpha_k(y^k - x^*).$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $x^{k+1}$ in light of Algorithm 3.2 as

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|.$$

Now we begin verifying the boundedness of the sequence $\{x^k\}$, leading to $\|x^k - x^*\| \to 0$.

Next, we estimate using Proposition 2.10 (for $\tau = t$)

$$\|J^M_{\rho_k}(x^k) - x^*\| = \|J^M_{\rho_k}(x^k) - J^M_{\rho_k}(x^*)\| \le \|x^k - x^*\|.$$

Since, under the assumptions, $\|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|$, it follows that

$$\|y^k - x^*\| \le \|J^M_{\rho_k}(x^k) - x^*\| + \delta_k\,\|y^k - x^k\|,$$

where $\delta_k \to 0$.

Moreover,

$$\|x^{k+1} - x^*\| \le |1 - \alpha_k|\,\|x^k - x^*\| + \alpha_k\,\|y^k - x^*\|.$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$,

$$\|x^{k+1} - x^*\| \le (|1 - \alpha_k| + \alpha_k)\,\|x^k - x^*\| + \alpha_k\delta_k\,\|y^k - x^k\|.$$

Thus, the sequence $\{x^k\}$ is bounded.

We further examine the estimate based on the generalized firm nonexpansiveness of $J^M_{\rho_k}$ (Proposition 2.9), which yields

$$\|x^k - J^M_{\rho_k}(x^k)\| \to 0,$$

that is, $w^k := \rho_k^{-1}\bigl(x^k - J^M_{\rho_k}(x^k)\bigr) \to 0$.

Now we turn our attention (using the previous argument) to linear convergence of the sequence $\{x^k\}$. Since $w^k \to 0$, it implies for $k$ large that $\|w^k\| \le \delta$. Moreover, $w^k \in M(J^M_{\rho_k}(x^k))$ for $J^M_{\rho_k} = (I + \rho_k M)^{-1}$ and $\rho_k > 0$. Therefore, in light of (4.6), by taking $x = J^M_{\rho_k}(x^k)$ and $w = w^k$, we have

$$\|J^M_{\rho_k}(x^k) - x^*\| \le \gamma\,\|w^k\| = \frac{\gamma}{\rho_k}\,\|x^k - J^M_{\rho_k}(x^k)\|.$$

Applying (4.4), we arrive at

$$\|x^{k+1} - x^*\| \le \theta_k\,\|x^k - x^*\|,$$

where $\limsup_{k \to \infty}\theta_k = \theta < 1$.

Since $\sum_{k=0}^{\infty}\delta_k < \infty$, we estimate using (4.1) and $\delta_k \to 0$ that the inexactness does not degrade the rate, that is,

$$\limsup_{k \to \infty}\frac{\|x^{k+1} - x^*\|}{\|x^k - x^*\|} \le \theta,$$

where $\theta < 1$.

Hence, we have linear convergence for $k$ large and $\theta < 1$. Since Algorithm 3.2 ensures

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k,$$

it follows that the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$, for setting $\theta = \limsup_{k \to \infty}\theta_k$.

Second, we examine Theorem 3.5 when $\eta$ is strongly monotone and $(\tau)$-Lipschitz continuous, the setting of Proposition 2.12; in this case there is no need to include the proof.

Theorem 4.2.

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $x^*$ be a zero of $M$. Let $\eta$ be strongly monotone. Furthermore, assume (for $\tau > 0$) that $\eta$ is $(\tau)$-Lipschitz continuous.

Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, $\delta_k, \alpha_k, \rho_k \ge 0$, $\sum_{k=0}^{\infty}\delta_k < \infty$, and $\{\rho_k\}$ is bounded away from zero.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then, by Proposition 2.12, one has

$$\|J^M_{\rho_k}(u) - J^M_{\rho_k}(v)\| \le \tau\,\|u - v\| \quad \forall u, v \in \mathcal{H}.$$

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\}$ are scalar sequences such that $\delta_k \to 0$ and $\alpha_k \ge 1$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the $\theta_k$ are determined by $\tau$, $\gamma$, and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, which satisfy $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$.

Finally, we consider the case of Theorem 3.5 when $\eta$ is strongly monotone and $(1)$-Lipschitz continuous, especially using Proposition 2.13. In this situation, the inclusion of the complete proof seems to be appropriate.

Theorem 4.3.

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal $(\eta)$-monotone, and let $x^*$ be a zero of $M$. Let $\eta$ be strongly monotone. Furthermore, assume that $\eta$ is $(1)$-Lipschitz continuous, so that Proposition 2.13 applies.

Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J^M_{\rho_k} = (I + \rho_k M)^{-1}$, $\delta_k, \alpha_k, \rho_k \ge 0$, $\sum_{k=0}^{\infty}\delta_k < \infty$, and $\{\rho_k\}$ is bounded away from zero.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

$$\|J^M_{\rho_k}(u) - J^M_{\rho_k}(v)\| \le \|u - v\| \quad \forall u, v \in \mathcal{H}. \tag{4.35}$$

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\}$ are scalar sequences such that $\delta_k \to 0$ and $\alpha_k \ge 1$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the $\theta_k$ are determined by $\gamma$ and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, which satisfy $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$.

Proof.

We need to include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

$$T_k = (1 - \alpha_k)I + \alpha_k J^M_{\rho_k}.$$

Therefore, $T_k(x^*) = x^*$. Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of $T_k$, and hence a zero of $M$.

Next, the proof of (4.38) follows from a regular manipulation and the following equality:

$$x^{k+1} - x^* = (1 - \alpha_k)(x^k - x^*) + \alpha_k(y^k - x^*).$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $x^{k+1}$ in light of Algorithm 3.2 as

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|.$$

Now we begin examining the boundedness of the sequence $\{x^k\}$, leading to $\|x^k - x^*\| \to 0$.

Next, we estimate using Proposition 2.13 that

$$\|J^M_{\rho_k}(x^k) - x^*\| = \|J^M_{\rho_k}(x^k) - J^M_{\rho_k}(x^*)\| \le \|x^k - x^*\|.$$

Since, under the assumptions, $\|y^k - J^M_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|$, it follows that

$$\|y^k - x^*\| \le \|J^M_{\rho_k}(x^k) - x^*\| + \delta_k\,\|y^k - x^k\|.$$

Moreover,

$$\|x^{k+1} - x^*\| \le |1 - \alpha_k|\,\|x^k - x^*\| + \alpha_k\,\|y^k - x^*\|.$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$,

$$\|x^{k+1} - x^*\| \le (|1 - \alpha_k| + \alpha_k)\,\|x^k - x^*\| + \alpha_k\delta_k\,\|y^k - x^k\|.$$

Thus, the sequence $\{x^k\}$ is bounded.

We further examine the estimate

$$\|y^k - x^*\| \le \|x^k - x^*\| + \delta_k\,\|y^k - x^k\|,$$

where $\delta_k \to 0$; that is, the sequence $\{y^k\}$ is bounded as well.

We further examine the estimate based on the generalized firm nonexpansiveness of $J^M_{\rho_k}$, which yields

$$\|x^k - J^M_{\rho_k}(x^k)\| \to 0,$$

that is, $w^k := \rho_k^{-1}\bigl(x^k - J^M_{\rho_k}(x^k)\bigr) \to 0$.

Now we turn our attention (using the previous argument) to linear convergence of the sequence $\{x^k\}$. Since $w^k \to 0$, it implies for $k$ large that $\|w^k\| \le \delta$. Moreover, $w^k \in M(J^M_{\rho_k}(x^k))$ for $J^M_{\rho_k} = (I + \rho_k M)^{-1}$ and $\rho_k > 0$. Therefore, in light of (4.40), by taking $x = J^M_{\rho_k}(x^k)$ and $w = w^k$, we have

$$\|J^M_{\rho_k}(x^k) - x^*\| \le \gamma\,\|w^k\| = \frac{\gamma}{\rho_k}\,\|x^k - J^M_{\rho_k}(x^k)\|.$$

Applying (4.38), we arrive at

$$\|x^{k+1} - x^*\| \le \theta_k\,\|x^k - x^*\|,$$

where $\limsup_{k \to \infty}\theta_k = \theta < 1$.

Since $\sum_{k=0}^{\infty}\delta_k < \infty$, we estimate using (4.35) and $\delta_k \to 0$ that the inexactness does not degrade the rate. Hence, we have

$$\limsup_{k \to \infty}\frac{\|x^{k+1} - x^*\|}{\|x^k - x^*\|} \le \theta,$$

for $k$ large and $\theta < 1$.

Since Algorithm 3.2 ensures

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k,$$

it follows that the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$, for setting $\theta = \limsup_{k \to \infty}\theta_k$.

Note that if we set $\eta(u, v) = u - v$ in Theorem 4.3, we get a result connecting [2] to the case of a linear convergence setting, but the algorithm remains over-relaxed (or super-relaxed). In this context, we state the following results before we start examining Theorem 4.7, the main result on linear convergence in the maximal monotone setting. Note that, based on Proposition 4.6, the notions of cocoercivity and firm nonexpansiveness coincide, though it is well known that they may differ in usage much depending on the context.

Theorem 4.4.

Let $\mathcal{H}$ be a real Hilbert space, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal monotone. Then the following statements are mutually equivalent.

(i) An element $u \in \mathcal{H}$ is a solution to (1.1).

(ii) $u = J_\rho(u)$ for $\rho > 0$, where $J_\rho = (I + \rho M)^{-1}$.

Proof.

It follows from the definition of the resolvent operator corresponding to $M$: $u = J_\rho(u) \iff u \in u + \rho M(u) \iff 0 \in M(u)$.

Next, we present the super-relaxed proximal point algorithm based on maximal monotonicity.

Algorithm 4.5.

Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal monotone mapping on $\mathcal{H}$ with $0 \in \operatorname{range}(M)$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J_{\rho_k} = (I + \rho_k M)^{-1}$, and $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\} \subseteq [0, \infty)$ are scalar sequences such that $\sum_{k=0}^{\infty}\delta_k < \infty$.

Proposition 4.6.

Let $\mathcal{H}$ be a real Hilbert space, and let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal monotone. Then, for $\rho > 0$, one has

$$\|J_\rho(u) - J_\rho(v)\|^2 \le \langle J_\rho(u) - J_\rho(v), u - v \rangle \quad \forall u, v \in \mathcal{H},$$

that is, the resolvent $J_\rho = (I + \rho M)^{-1}$ is firmly nonexpansive.
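
The estimate of Proposition 4.6 is the classical firm nonexpansiveness of the resolvent, and it follows in one line from the monotonicity of $M$: since $u - J_\rho(u) \in \rho M(J_\rho(u))$ and $v - J_\rho(v) \in \rho M(J_\rho(v))$,

$$\langle J_\rho(u) - J_\rho(v),\, (u - J_\rho(u)) - (v - J_\rho(v)) \rangle \ge 0,$$

which rearranges to $\|J_\rho(u) - J_\rho(v)\|^2 \le \langle J_\rho(u) - J_\rho(v), u - v \rangle$.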

Theorem 4.7.

Let $\mathcal{H}$ be a real Hilbert space. Let $M : \mathcal{H} \to 2^{\mathcal{H}}$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|,$$

where $J_{\rho_k} = (I + \rho_k M)^{-1}$, $\delta_k, \alpha_k, \rho_k \ge 0$, $\sum_{k=0}^{\infty}\delta_k < \infty$, and $\{\rho_k\}$ is bounded away from zero.

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

$$\|J_{\rho_k}(u) - J_{\rho_k}(v)\|^2 \le \langle J_{\rho_k}(u) - J_{\rho_k}(v), u - v \rangle \quad \forall u, v \in \mathcal{H}.$$

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 4.5, and that $M^{-1}$ is $(\gamma)$-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $0 \in M(x^*)$) and, for constants $\gamma \ge 0$ and $\delta > 0$, one has

$$\|x - x^*\| \le \gamma\,\|w\| \quad \text{whenever } w \in M(x),\ \|w\| \le \delta,$$

where $\{\delta_k\}, \{\alpha_k\}, \{\rho_k\}$ are scalar sequences such that $\delta_k \to 0$ and $\alpha_k \ge 1$.

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

$$\theta = \limsup_{k \to \infty}\theta_k < 1,$$

where the $\theta_k$ are determined by $\gamma$ and the sequences $\{\alpha_k\}$ and $\{\rho_k\}$, which satisfy $\alpha_k \to \alpha$ and $\rho_k \to \rho \le \infty$.

Proof.

We need to include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

$$T_k = (1 - \alpha_k)I + \alpha_k J_{\rho_k}.$$

Therefore, $T_k(x^*) = x^*$. Then, in light of Theorem 4.4, any solution to (1.1) is a fixed point of $T_k$, and hence a zero of $M$.

Next, the proof of (4.74) follows from applying the regular manipulation and the following equality:

$$x^{k+1} - x^* = (1 - \alpha_k)(x^k - x^*) + \alpha_k(y^k - x^*).$$

Before we start establishing linear convergence of the sequence $\{x^k\}$, we express $x^{k+1}$ in light of Algorithm 4.5 as

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k, \qquad \|y^k - J_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|.$$

Now we begin examining the boundedness of the sequence $\{x^k\}$, leading to $\|x^k - x^*\| \to 0$.

Next, we estimate using Proposition 4.6 that

$$\|J_{\rho_k}(x^k) - x^*\| = \|J_{\rho_k}(x^k) - J_{\rho_k}(x^*)\| \le \|x^k - x^*\|.$$

Since, under the assumptions, $\|y^k - J_{\rho_k}(x^k)\| \le \delta_k\,\|y^k - x^k\|$, it follows that

$$\|y^k - x^*\| \le \|J_{\rho_k}(x^k) - x^*\| + \delta_k\,\|y^k - x^k\|.$$

Moreover,

$$\|x^{k+1} - x^*\| \le |1 - \alpha_k|\,\|x^k - x^*\| + \alpha_k\,\|y^k - x^*\|.$$

Now we find the estimate leading to the boundedness of the sequence $\{x^k\}$,

$$\|x^{k+1} - x^*\| \le (|1 - \alpha_k| + \alpha_k)\,\|x^k - x^*\| + \alpha_k\delta_k\,\|y^k - x^k\|.$$

Therefore, the sequence $\{x^k\}$ is bounded.

We further examine the estimate

$$\|y^k - x^*\| \le \|x^k - x^*\| + \delta_k\,\|y^k - x^k\|,$$

where $\delta_k \to 0$; that is, the sequence $\{y^k\}$ is bounded as well.

We further examine the estimate based on the firm nonexpansiveness of $J_{\rho_k}$ (Proposition 4.6), which yields

$$\|x^k - J_{\rho_k}(x^k)\| \to 0,$$

that is, $w^k := \rho_k^{-1}\bigl(x^k - J_{\rho_k}(x^k)\bigr) \to 0$.

Now we turn our attention (using the previous argument) to linear convergence of the sequence $\{x^k\}$. Since $w^k \to 0$, it implies for $k$ large that $\|w^k\| \le \delta$. Moreover, $w^k \in M(J_{\rho_k}(x^k))$ for $J_{\rho_k} = (I + \rho_k M)^{-1}$ and $\rho_k > 0$. Therefore, in light of (4.76), by taking $x = J_{\rho_k}(x^k)$ and $w = w^k$, we have

$$\|J_{\rho_k}(x^k) - x^*\| \le \gamma\,\|w^k\| = \frac{\gamma}{\rho_k}\,\|x^k - J_{\rho_k}(x^k)\|.$$

Applying (4.74), we arrive at

$$\|x^{k+1} - x^*\| \le \theta_k\,\|x^k - x^*\|,$$

where $\limsup_{k \to \infty}\theta_k = \theta < 1$.

Hence, we have

$$\limsup_{k \to \infty}\frac{\|x^{k+1} - x^*\|}{\|x^k - x^*\|} \le \theta,$$

for $k$ large and $\theta < 1$.

Since Algorithm 4.5 ensures

$$x^{k+1} = (1 - \alpha_k)\,x^k + \alpha_k\,y^k,$$

it follows that the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$, for setting $\theta = \limsup_{k \to \infty}\theta_k$.

References

1. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization 1976, 14(5): 877–898. doi:10.1137/0314056
2. Eckstein J, Bertsekas DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 1992, 55(3): 293–318. doi:10.1007/BF01581204
3. Verma RU: On the generalized proximal point algorithm with applications to inclusion problems. Journal of Industrial and Management Optimization 2009, 5(2): 381–390.
4. Agarwal RP, Verma RU: The over-relaxed proximal point algorithm and nonlinear variational inclusion problems. Nonlinear Functional Analysis and Applications 2009, 14(4).
5. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff, Leyden, The Netherlands; 1976: 352 pp.
6. Boikanyo OA, Morosanu G: Modified Rockafellar's algorithms. Mathematical Sciences Research Journal, in press.
7. Bertsekas DP: Necessary and sufficient condition for a penalty method to be exact. Mathematical Programming 1975, 9(1): 87–99. doi:10.1007/BF01681332
8. Bertsekas DP: Constrained Optimization and Lagrange Multiplier Methods, Computer Science and Applied Mathematics. Academic Press, New York, NY, USA; 1982: xiii+395 pp.
9. Douglas J Jr., Rachford HH Jr.: On the numerical solution of heat conduction problems in two and three space variables. Transactions of the American Mathematical Society 1956, 82: 421–439. doi:10.1090/S0002-9947-1956-0084194-4
10. Eckstein J: Splitting methods for monotone operators with applications to parallel optimization. Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge, Mass, USA; 1989.
11. Eckstein J: Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming. Mathematics of Operations Research 1993, 18(1): 202–226. doi:10.1287/moor.18.1.202
12. Eckstein J: Approximate iterations in Bregman-function-based proximal algorithms. Mathematical Programming 1998, 83(1): 113–123. doi:10.1007/BF02680553
13. Eckstein J, Ferris MC: Smooth methods of multipliers for complementarity problems. Mathematical Programming 1999, 86(1): 65–90. doi:10.1007/s101070050080
14. Ferris MC: Finite termination of the proximal point algorithm. Mathematical Programming 1991, 50(3): 359–366. doi:10.1007/BF01594944
15. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization 1991, 29(2): 403–419. doi:10.1137/0329022
16. Martinet B: Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle, Série Rouge 1970, 4(3): 154–158.
17. Minty GJ: Monotone (nonlinear) operators in Hilbert space. Duke Mathematical Journal 1962, 29: 341–346. doi:10.1215/S0012-7094-62-02933-2
18. Moroşanu G: Nonlinear Evolution Equations and Applications, Mathematics and Its Applications (East European Series), Volume 26. D. Reidel, Dordrecht, The Netherlands; 1988: xii+340 pp.
19. Moudafi A: Mixed equilibrium problems: sensitivity analysis and algorithmic aspect. Computers & Mathematics with Applications 2002, 44(8–9): 1099–1108. doi:10.1016/S0898-1221(02)00218-3
20. Moudafi A, Théra M: Finding a zero of the sum of two maximal monotone operators. Journal of Optimization Theory and Applications 1997, 94(2): 425–448. doi:10.1023/A:1022643914538
21. Pang J-S: Complementarity problems. In: Horst R, Pardalos P (eds) Handbook of Global Optimization, Nonconvex Optimization and Its Applications, Volume 2. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1995: 271–338.
22. Robinson SM: Composition duality and maximal monotonicity. Mathematical Programming 1999, 85(1): 1–13. doi:10.1007/s101070050043
23. Robinson SM: Linear convergence of epsilon-subgradient descent methods for a class of convex functions. Mathematical Programming 1999, 86: 41–50. doi:10.1007/s101070050078
24. Rockafellar RT: On the maximal monotonicity of subdifferential mappings. Pacific Journal of Mathematics 1970, 33: 209–216.
25. Rockafellar RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Mathematics of Operations Research 1976, 1(2): 97–116. doi:10.1287/moor.1.2.97
26. Rockafellar RT, Wets RJ-B: Variational Analysis. Springer, Berlin, Germany; 2004.
27. Solodov MV, Svaiter BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Mathematics of Operations Research 2000, 25(2): 214–230. doi:10.1287/moor.25.2.214.12222
28. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Mathematical Programming 2000, 87(1): 189–202.
29. Takahashi W: Approximating solutions of accretive operators by viscosity approximation methods in Banach spaces. In: Applied Functional Analysis. Yokohama Publishers, Yokohama, Japan; 2007: 225–243.
30. Tossings P: The perturbed proximal point algorithm and some of its applications. Applied Mathematics and Optimization 1994, 29(2): 125–159. doi:10.1007/BF01204180
31. Tseng P: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM Journal on Control and Optimization 1991, 29(1): 119–138. doi:10.1137/0329006
32. Tseng P: Alternating projection-proximal methods for convex programming and variational inequalities. SIAM Journal on Optimization 1997, 7(4): 951–965. doi:10.1137/S1052623495279797
33. Tseng P: A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization 2000, 38(2): 431–446. doi:10.1137/S0363012998338806
34. Verma RU: A fixed-point theorem involving Lipschitzian generalised pseudo-contractions. Proceedings of the Royal Irish Academy, Section A 1997, 97(1): 83–86.
35. Verma RU: New class of nonlinear A-monotone mixed variational inclusion problems and resolvent operator technique. Journal of Computational Analysis and Applications 2006, 8(3): 275–285.
36. Verma RU: Nonlinear A-monotone variational inclusion systems and the resolvent operator technique. Journal of Applied Functional Analysis 2006, 1(2): 183–189.
37. Verma RU: A-monotonicity and its role in nonlinear variational inclusions. Journal of Optimization Theory and Applications 2006, 129(3): 457–467. doi:10.1007/s10957-006-9079-7
38. Verma RU: A-monotone nonlinear relaxed cocoercive variational inclusions. Central European Journal of Mathematics 2007, 5(2): 386–396. doi:10.2478/s11533-007-0005-5
39. Verma RU: Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings. Journal of Mathematical Analysis and Applications 2008, 337(2): 969–975. doi:10.1016/j.jmaa.2007.01.114
40. Verma RU: Nonlinear Approximation Solvability Involving Regular and Demiregular Convergence. International Publications (USA), Orlando, Fla, USA; 1994.
41. Verma RU: General projection systems and relaxed cocoercive nonlinear variational inequalities. The ANZIAM Journal 2007, 49(2): 205–212. doi:10.1017/S1446181100012785
42. Verma RU: General proximal point algorithmic models and nonlinear variational inclusions involving RMM mappings. Journal of Informatics and Mathematical Sciences, accepted.
43. Verma RU: General proximal point algorithm involving η-maximal accretiveness framework in Banach spaces. Positivity 2009, 13(4): 771–782. doi:10.1007/s11117-008-2268-x
44. Verma RU: The generalized relaxed proximal point algorithm involving A-maximal-relaxed accretive mappings with applications to Banach spaces. Mathematical and Computer Modelling 2009, 50(7–8): 1026–1032. doi:10.1016/j.mcm.2009.04.012
45. Yosida K: Functional Analysis. Springer, Berlin, Germany; 1965.
46. Yosida K: On the differentiability and representation of one-parameter semigroups of linear operators. Journal of the Mathematical Society of Japan 1948, 1: 15–21. doi:10.2969/jmsj/00110015
47. Xu H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society 2002, 66(1): 240–256. doi:10.1112/S0024610702003332
48. Zeidler E: The Ljusternik-Schnirelman theory for indefinite and not necessarily odd nonlinear operators and its applications. Nonlinear Analysis: Theory, Methods & Applications 1980, 4(3): 451–489. doi:10.1016/0362-546X(80)90085-1
49. Zeidler E: Ljusternik-Schnirelman theory on general level sets. Mathematische Nachrichten 1986, 129: 235–259. doi:10.1002/mana.19861290121
50. Zeidler E: Nonlinear Functional Analysis and Its Applications—Part 1: Fixed-Point Theorems. Springer, New York, NY, USA; 1986: xxi+897 pp.
51. Zeidler E: Nonlinear Functional Analysis and Its Applications—Part 2A: Linear Monotone Operators. Springer, New York, NY, USA; 1990: xviii+467 pp.
52. Zeidler E: Nonlinear Functional Analysis and Its Applications—Part 2B: Nonlinear Monotone Operators. Springer, New York, NY, USA; 1990.
53. Zeidler E: Nonlinear Functional Analysis and Its Applications—Part 3: Variational Methods and Optimization. Springer, New York, NY, USA; 1985: xxii+662 pp.
54. Zolezzi T: Continuity of generalized gradients and multipliers under perturbations. Mathematics of Operations Research 1985, 10(4): 664–673. doi:10.1287/moor.10.4.664
55. Zoretti L: Un théorème de la théorie des ensembles. Bulletin de la Société Mathématique de France 1909, 37: 116–119.

Copyright information

© R. P. Agarwal and R. U. Verma. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Authors and Affiliations

1. Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, USA
2. Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
3. International Publications (USA), Orlando, USA
