# Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

## Abstract

We glance at recent advances in the general theory of maximal (set-valued) monotone mappings and their role in convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have also played a significant part, notably the generalization of the proximal point algorithm of Rockafellar (1976) to the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the over-relaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and of then applying it to first-order evolution equations/inclusions.

## Keywords

Iterative Procedure, Maximal Monotone, Real Hilbert Space, Resolvent Operator, Proximal Point Algorithm

## 1. Introduction and Preliminaries

We consider the nonlinear variational inclusion problem (1.1): find $x \in X$ such that

$$0 \in M(x),$$

where $M: X \to 2^X$ is a set-valued mapping on a real Hilbert space $X$.

As a matter of fact, Rockafellar demonstrated the weak convergence and the strong convergence separately in two theorems; for the strong convergence, the further imposition of the Lipschitz continuity of $M^{-1}$ at 0 plays the crucial part. Let us recall these results.
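For orientation, the inexact proximal point iteration studied in [1] is usually written in the following standard form (the symbols $x^k$, $c_k$, and $\epsilon_k$ are generic labels standing in for the notation of Theorem 1.1):

$$x^{k+1} \approx J^{M}_{c_k}(x^k) := (I + c_k M)^{-1}(x^k), \qquad \big\| x^{k+1} - J^{M}_{c_k}(x^k) \big\| \le \epsilon_k, \qquad \sum_{k=0}^{\infty} \epsilon_k < \infty.$$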

Theorem 1.1 (see [1]).

where $\epsilon_k \ge 0$, $\sum_{k=0}^{\infty} \epsilon_k < \infty$, and the sequence $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. Then the sequence $\{x^k\}$ converges weakly to a zero of $M$.

Remark 1.2.

The situation changes when $M = \partial f$ for a convex function $f$ that attains its minimum nonuniquely.

Next, unlike Theorem 1.1, we look at [1, Theorem 2], in which Rockafellar achieved linear convergence of the sequence by assuming, in addition, the Lipschitz continuity of $M^{-1}$ at 0.

Theorem 1.3 (see [1]).

As a specialization, when $M = \partial f$ is the subdifferential of a proper, lower semicontinuous convex function $f$, a zero of $M$ is exactly a minimizer of $f$. That means the proximal point algorithm for $M = \partial f$ is a minimizing method for $f$.

There is an abundance of literature on proximal point algorithms and their applications, much of it following the work of Rockafellar [1], but we focus particularly on the work of Eckstein and Bertsekas [2], who relaxed the proximal point algorithm in the following form and applied it to the Douglas-Rachford splitting method. Let us have a look at the relaxed proximal point algorithm introduced and studied in [2].

Algorithm 1.4.

are scalar sequences.
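In the notation most commonly used for [2] (the labels $\rho_k$, $c_k$, $y^k$, and $\epsilon_k$ below are ours), the relaxed iteration underlying Algorithm 1.4 can be sketched as

$$x^{k+1} = (1 - \rho_k)\, x^k + \rho_k\, y^k, \qquad \big\| y^k - (I + c_k M)^{-1}(x^k) \big\| \le \epsilon_k,$$

with the relaxation factors $\rho_k$ kept within $(0, 2)$ and bounded away from $0$ and $2$, the proximal parameters $c_k$ bounded away from zero, and $\sum_k \epsilon_k < \infty$.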

As a matter of fact, Eckstein and Bertsekas [2] applied Algorithm 1.4 to approximate a weak solution to (1.1); in other words, they established an analogue of Theorem 1.1 using the relaxed proximal point algorithm instead.

Theorem 1.5 (see [2, Theorem 3]).

then the sequence $\{x^k\}$ converges weakly to a zero of $M$.

takes care of the Lipschitz continuity issue.

As we look back into the literature, general maximal monotonicity has played a major role in the study of convex programming as well as variational inequalities/inclusions, and the proximal point algorithm later turned out to be one of the most fundamental algorithms applied to solve these problems. In [2], Eckstein and Bertsekas have shown that much of the theory of the relaxed proximal point algorithm and related algorithms can be carried over to the Douglas-Rachford splitting method and its specializations, for instance, the alternating direction method of multipliers.
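For completeness, one common resolvent form of the Douglas-Rachford splitting iteration for the sum problem $0 \in A(x) + B(x)$ (a standard textbook formulation, not necessarily the exact notation of [2]) is

$$z^{k+1} = z^k + J_{\lambda B}\big(2 J_{\lambda A}(z^k) - z^k\big) - J_{\lambda A}(z^k), \qquad x^k = J_{\lambda A}(z^k),$$

where $J_{\lambda A} = (I + \lambda A)^{-1}$ and $J_{\lambda B} = (I + \lambda B)^{-1}$; Eckstein and Bertsekas [2] showed that this iteration is itself an instance of the proximal point algorithm applied to a suitable splitting operator.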

Just recently, Verma [3] generalized the relaxed proximal point algorithm and applied it to the approximation solvability of variational inclusion problems of the form (1.1). A great deal of recent research on the solvability of inclusion problems is carried out using resolvent operator techniques, which have applications to other problems such as equilibrium problems in economics, optimization and control theory, operations research, and mathematical programming.

In this survey, we first discuss in detail the history of proximal point algorithms and their applications to general nonlinear variational inclusion problems, and then we recall some significant developments, especially the relaxation of proximal point algorithms with applications to the Douglas-Rachford splitting method. At the second stage, we turn our attention to over-relaxed proximal point algorithms and their contribution to linear convergence. We start with some introductory material on the over-relaxed (η)-proximal point algorithm based on the notion of maximal (η)-monotonicity, and recall some investigations on the approximation solvability of a general class of nonlinear inclusion problems involving maximal (η)-monotone mappings in a Hilbert space setting. As a matter of fact, we examine the convergence analysis of the over-relaxed (η)-proximal point algorithm for solving a class of nonlinear inclusions. Also, several results on generalized firm nonexpansiveness and the generalized resolvent mapping are given. Furthermore, we explore the real impact of recently obtained results on the celebrated work of Rockafellar, most importantly in the case of over-relaxed (or super-relaxed) proximal point algorithms. For more details, we refer the reader to [1–55].

where $M_\rho = \rho^{-1}\big(I - (I + \rho M)^{-1}\big)$ is the Yosida regularization of $M$, while there is an equivalent form $(\rho I + M^{-1})^{-1}$, which is characterized as the Yosida approximation of $M$ with parameter $\rho$. In certain ways it seems easier to solve the Yosida inclusion than (1.1); in other words, $M_\rho$ provides better solvability conditions, under the right choice of $\rho$, than $M$ itself. To substantiate this assertion, let us recall the following existence theorem.

Theorem 1.6.

Let $M: X \to 2^X$ be a set-valued maximal monotone mapping on a real Hilbert space $X$, and let $\rho > 0$. Then the following statements are equivalent.

(i) An element $u \in X$ is a solution to the inclusion $0 \in M(u)$.

(ii) $u$ is a solution to the Yosida inclusion $0 \in M_\rho(u)$.

Moreover, the Yosida regularization $M_\rho$ is single valued and Lipschitz continuous, where the Lipschitz constant is $\rho^{-1}$.

Proof.

This completes the proof.

Indeed, the Yosida approximation $M_\rho = \rho^{-1}\big(I - (I + \rho M)^{-1}\big)$ and its equivalent form $(\rho I + M^{-1})^{-1}$ are related by this identity. Let us consider

On the other hand, we have the inverse resolvent identity that lays the foundation of the Yosida approximation.

Lemma 1.7 (see [26, Lemma 12.14]).
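In standard notation, the inverse resolvent identity referred to here can be stated as follows (a standard formulation, equivalent to the one in [26, Lemma 12.14]): for any set-valued mapping $M$ on $X$,

$$(I + M)^{-1} + (I + M^{-1})^{-1} = I,$$

and, scaled by a parameter $\rho > 0$, it yields the two descriptions of the Yosida regularization, $\rho^{-1}\big(I - (I + \rho M)^{-1}\big) = (\rho I + M^{-1})^{-1}$.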

Proof.

which is the required assertion.

Note that the resolvent operators $(I + \rho M)^{-1}$, $\rho > 0$, are single valued, in fact maximal monotone and nonexpansive.

The contents of the paper are organized as follows. Section 1 deals with the general historical development of the relaxed proximal point algorithm and its variants in conjunction with maximal (η)-monotonicity, and with the approximation solvability of a class of nonlinear inclusion problems using the convergence analysis for the proximal point algorithm as well as for the relaxed proximal point algorithm. Section 2 introduces and derives some results unifying maximal (η)-monotonicity and the generalized firm nonexpansiveness of the generalized resolvent operator. In Section 3, the role of the over-relaxed (η)-proximal point algorithm is examined in detail in terms of its applications to approximating the solution of the inclusion problem (1.1). Finally, Section 4 deals with some important specializations that connect the results to general maximal monotonicity, especially to several aspects of linear convergence.

## 2. General Maximal *η*-Monotonicity

Definition 2.1.

Let $M: X \to 2^X$ be a multivalued mapping on a real Hilbert space $X$. The map $M$ is said to be

Definition 2.2.

Let $T: X \to X$ be a single-valued mapping on $X$. The map $T$ is said to be

In light of Definitions 2.1(vii) and 2.2(ii), the notions of cocoerciveness and firm nonexpansiveness coincide, though their use differs considerably depending on the context.
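For the classical choice $\eta(x, y) = x - y$, both notions reduce to one and the same inequality (the standard formulation, which the (η)-versions of Definitions 2.1 and 2.2 refine): a single-valued map $T$ is firmly nonexpansive, equivalently $1$-cocoercive, if

$$\langle T(x) - T(y),\, x - y \rangle \;\ge\; \| T(x) - T(y) \|^2 \quad \text{for all } x, y \in X;$$

by the Cauchy-Schwarz inequality, such a $T$ is in particular nonexpansive.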

Definition 2.3.

A map $\eta: X \times X \to X$ is said to be

Definition 2.4.

Let $M: X \to 2^X$ be a multivalued mapping on $X$, and let $\eta: X \times X \to X$ be another mapping. The map $M$ is said to be

Definition 2.5.

A map $M: X \to 2^X$ is said to be maximal (η)-monotone if

(1) $M$ is (η)-monotone,

(2) $R(I + \rho M) = X$ for $\rho > 0$.
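For the reader's convenience, in the form commonly used in this literature (see, e.g., [37, 38]), condition (1) means that

$$\langle u - v,\, \eta(x, y) \rangle \;\ge\; 0 \quad \text{whenever } u \in M(x) \text{ and } v \in M(y),$$

and the choice $\eta(x, y) = x - y$ in (1) and (2) recovers the classical notion of maximal monotonicity.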

Proposition 2.6.

Let $\eta: X \times X \to X$ be a strongly monotone mapping, and let $M: X \to 2^X$ be a maximal (η)-monotone mapping. Then $I + \rho M$ is maximal (η)-monotone for $\rho \in (0, \infty)$, where $I$ is the identity mapping.

Proof.

The proof follows on applying Definition 2.5.

Proposition 2.7 (see [4]).

Let $\eta: X \times X \to X$ be $(r)$-strongly monotone, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the generalized resolvent operator $(I + \rho M)^{-1}$ is single valued, where $I$ is the identity mapping.

Proof.

Since $\eta$ is $(r)$-strongly monotone, it implies $x = y$ whenever $x, y \in (I + \rho M)^{-1}(u)$. Thus, $(I + \rho M)^{-1}$ is single valued.

Definition 2.8.

Let $\eta: X \times X \to X$ be $(r)$-strongly monotone, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the generalized resolvent operator $J^{M}_{\rho}: X \to X$ is defined by $J^{M}_{\rho}(u) = (I + \rho M)^{-1}(u)$.

Proposition 2.9 (see [4]).

Proof.

Proposition 2.10 (see [4]).

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be $(r)$-strongly monotone.

Proof.

When Open image in new window and Open image in new window in Proposition 2.10, we have the following.

Proposition 2.11.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be $(r)$-strongly monotone.

For Open image in new window and Open image in new window in Proposition 2.10, we find a result of interest as follows.

Proposition 2.12.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be strongly monotone.

For Open image in new window in Proposition 2.10, we have the following result.

Proposition 2.13.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be strongly monotone.

## 3. The Over-Relaxed (*η*)-Proximal Point Algorithm

This section deals with the over-relaxed (η)-proximal point algorithm and its application to the approximation solvability of the inclusion problem (1.1) based on maximal (η)-monotonicity. Furthermore, some results connecting (η)-monotonicity and the corresponding resolvent operator are established, generalizing the results on firm nonexpansiveness [2], while auxiliary results on maximal (η)-monotonicity and general maximal monotonicity are obtained.

Theorem 3.1.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the following statements are mutually equivalent.

(i) An element $u \in X$ is a solution to (1.1).

(ii) $u$ is a fixed point of the generalized resolvent operator, that is, $u = (I + \rho M)^{-1}(u)$ for $\rho > 0$.

Proof.

It follows from the definition of the generalized resolvent operator corresponding to $M$.
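Assuming the generalized resolvent $(I + \rho M)^{-1}$ is single valued (Proposition 2.7), the equivalence amounts to the one-line computation

$$0 \in M(u) \;\Longleftrightarrow\; u \in (I + \rho M)(u) \;\Longleftrightarrow\; u = (I + \rho M)^{-1}(u), \qquad \rho > 0.$$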

Note that Theorem 3.1 generalizes [2, Lemma 2] to the case of a maximal (η)-monotone mapping.

Next, we present a generalization of the relaxed proximal point algorithm [3] based on maximal (η)-monotonicity.

Algorithm 3.2 (see [4]).

are scalar sequences such that Open image in new window .

Algorithm 3.3.

are scalar sequences such that Open image in new window .

For Open image in new window in Algorithm 3.2, we have the following.

Algorithm 3.4.

are scalar sequences.
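As a purely illustrative sketch (not the algorithm of [4] itself, whose parameter restrictions are given above), the following code runs an over-relaxed proximal point iteration for the classical choice $\eta(x, y) = x - y$ and the toy operator $M = \nabla f$ with $f(x) = \tfrac{1}{2}\|x - b\|^2$, for which the resolvent is available in closed form; the function names, the step size $c$, and the relaxation factor $\rho$ are our own choices.

```python
import numpy as np

def resolvent_quadratic(x, c, b):
    # Resolvent J_{cM}(x) = (I + c M)^{-1}(x) for the toy operator M = grad f,
    # f(y) = 0.5 * ||y - b||^2: solve y + c * (y - b) = x for y.
    return (x + c * b) / (1.0 + c)

def over_relaxed_ppa(x0, b, rho=1.7, c=1.0, iters=60):
    # Over-relaxed proximal point iteration (illustrative only):
    #   x_{k+1} = (1 - rho) * x_k + rho * J_{cM}(x_k),  with rho in (0, 2).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * resolvent_quadratic(x, c, b)
    return x

if __name__ == "__main__":
    b = np.array([1.0, -2.0, 3.0])
    approx_zero = over_relaxed_ppa(np.zeros(3), b)
    # The unique zero of M is b itself, so the iterates should approach b.
    print(np.allclose(approx_zero, b, atol=1e-8))
```

Taking the relaxation factor constant and larger than 1, as above, is what "over-relaxed" (or "super-relaxed") refers to; the choice of relaxation factor 1 recovers the exact proximal point iteration.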

In the following result [4], we observe that Theorems 1.1 and 1.3 are unified and generalized to the case of maximal (η)-monotonicity and the super-relaxed proximal point algorithm. We also notice that this result, in certain respects, demonstrates the importance of firm nonexpansiveness rather than of mere nonexpansiveness.

Theorem 3.5 (see [4]).

where Open image in new window , Open image in new window , Open image in new window , Open image in new window , Open image in new window and Open image in new window .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to (1.1).

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window Open image in new window , and Open image in new window .

Proof.

Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of the generalized resolvent operator, and hence a zero of $M$.

Now we begin verifying the boundedness of the sequence $\{x^k\}$ generated by the algorithm.

where Open image in new window .

Thus, the sequence $\{x^k\}$ is bounded.

where Open image in new window .

that is, Open image in new window .

where Open image in new window .

where Open image in new window .

for Open image in new window and Open image in new window .

for setting Open image in new window .

Theorem 3.6.

satisfy Open image in new window , Open image in new window , Open image in new window , and Open image in new window .

Then the sequence $\{x^k\}$ converges weakly to a solution of (1.1).

Proof.

The proof is similar to that of the first part of Theorem 3.5 on applying the generalized representation lemma.

Theorem 3.7.

where Open image in new window , Open image in new window , Open image in new window , Open image in new window , Open image in new window Open image in new window and Open image in new window .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to (1.1).

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window and Open image in new window .

Proof.

The proof is similar to that of Theorem 3.5.

## 4. Some Specializations

Finally, we examine some significant specializations of Theorem 3.5 in this section. Let us start with the specialization obtained by applying Proposition 2.11.

Theorem 4.1.

where Open image in new window , Open image in new window , Open image in new window , Open image in new window , Open image in new window and Open image in new window .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to (1.1).

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window Open image in new window and Open image in new window .

Proof.

Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of the generalized resolvent operator, and hence a zero of $M$.

Now we begin verifying the boundedness of the sequence $\{x^k\}$ generated by the algorithm.

where Open image in new window .

Thus, the sequence $\{x^k\}$ is bounded.

where Open image in new window .

that is, Open image in new window .

where Open image in new window .

where Open image in new window .

for Open image in new window and Open image in new window .

for setting Open image in new window .

Second, we examine Theorem 3.5 under a further special choice of the underlying mappings; in this case there is no need to include the proof.

Theorem 4.2.

where Open image in new window , Open image in new window , Open image in new window , Open image in new window , Open image in new window and Open image in new window .

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window and Open image in new window .

Finally, we consider a further specialization of Theorem 3.5, especially using Proposition 2.13. In this situation, the inclusion of the complete proof seems appropriate.

Theorem 4.3.

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window Open image in new window , and Open image in new window .

Proof.

Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of the generalized resolvent operator, and hence a zero of $M$.

Now we begin examining the boundedness of the sequence $\{x^k\}$ generated by the algorithm.

Thus, the sequence $\{x^k\}$ is bounded.

where Open image in new window .

Thus, the sequence Open image in new window is bounded.

where Open image in new window .

that is, Open image in new window .

where Open image in new window .

for Open image in new window and Open image in new window .

for setting Open image in new window .

Note that if we set $\eta(x, y) = x - y$ in Theorem 4.3, we get a result connecting [2] to the case of a linear convergence setting, but the algorithm remains over-relaxed (or super-relaxed). In this context, we state the following results before we start examining Theorem 4.7, the main result on linear convergence in the maximal monotone setting. Note that, based on Proposition 4.6, the notions of cocoercivity and firm nonexpansiveness coincide, though it is well known that their usage may differ depending on the context.

Theorem 4.4.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal monotone. Then the following statements are mutually equivalent.

(i) An element $u \in X$ is a solution to (1.1).

(ii) $u = (I + \rho M)^{-1}(u)$ for $\rho > 0$.

Proof.

It follows from the definition of the generalized resolvent operator corresponding to $M$.

Next, we present the super-relaxed proximal point algorithm based on maximal monotonicity.

Algorithm 4.5.

are scalar sequences such that Open image in new window .

Proposition 4.6.

Theorem 4.7.

where Open image in new window , Open image in new window , Open image in new window , Open image in new window , Open image in new window , and Open image in new window .

are scalar sequences such that Open image in new window and Open image in new window .

where Open image in new window , Open image in new window , and sequences Open image in new window and Open image in new window satisfy Open image in new window , Open image in new window , Open image in new window , and Open image in new window .

Proof.

Then, in light of Theorem 4.4, any solution to (1.1) is a fixed point of the resolvent operator $(I + \rho M)^{-1}$, and hence a zero of $M$.

Now we begin examining the boundedness of the sequence $\{x^k\}$ generated by the algorithm.

Therefore, the sequence $\{x^k\}$ is bounded.

where Open image in new window .

Thus, the sequence Open image in new window is bounded.

where Open image in new window .

that is, Open image in new window .

where Open image in new window .

for Open image in new window and Open image in new window .

for setting Open image in new window .

## References

1. Rockafellar RT: **Monotone operators and the proximal point algorithm.** *SIAM Journal on Control and Optimization* 1976, **14**(5):877–898. doi:10.1137/0314056
2. Eckstein J, Bertsekas DP: **On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators.** *Mathematical Programming* 1992, **55**(3):293–318. doi:10.1007/BF01581204
3. Verma RU: **On the generalized proximal point algorithm with applications to inclusion problems.** *Journal of Industrial and Management Optimization* 2009, **5**(2):381–390.
4. Agarwal RP, Verma RU: **The over-relaxed proximal point algorithm and nonlinear variational inclusion problems.** *Nonlinear Functional Analysis and Applications* 2009, **14**(4).
5. Barbu V: *Nonlinear Semigroups and Differential Equations in Banach Spaces.* Noordhoff, Leyden, The Netherlands; 1976:352.
6. Boikanyo OA, Morosanu G: **Modified Rockafellar's algorithms.** *Mathematical Sciences Research Journal*, in press.
7. Bertsekas DP: **Necessary and sufficient condition for a penalty method to be exact.** *Mathematical Programming* 1975, **9**(1):87–99. doi:10.1007/BF01681332
8. Bertsekas DP: *Constrained Optimization and Lagrange Multiplier Methods, Computer Science and Applied Mathematics.* Academic Press, New York, NY, USA; 1982:xiii+395.
9. Douglas J Jr., Rachford HH Jr.: **On the numerical solution of heat conduction problems in two and three space variables.** *Transactions of the American Mathematical Society* 1956, **82**:421–439. doi:10.1090/S0002-9947-1956-0084194-4
10. Eckstein J: *Splitting Methods for Monotone Operators with Applications to Parallel Optimization.* Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge, Mass, USA; 1989.
11. Eckstein J: **Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming.** *Mathematics of Operations Research* 1993, **18**(1):202–226. doi:10.1287/moor.18.1.202
12. Eckstein J: **Approximate iterations in Bregman-function-based proximal algorithms.** *Mathematical Programming* 1998, **83**(1):113–123. doi:10.1007/BF02680553
13. Eckstein J, Ferris MC: **Smooth methods of multipliers for complementarity problems.** *Mathematical Programming* 1999, **86**(1):65–90. doi:10.1007/s101070050080
14. Ferris MC: **Finite termination of the proximal point algorithm.** *Mathematical Programming* 1991, **50**(3):359–366. doi:10.1007/BF01594944
15. Güler O: **On the convergence of the proximal point algorithm for convex minimization.** *SIAM Journal on Control and Optimization* 1991, **29**(2):403–419. doi:10.1137/0329022
16. Martinet B: **Régularisation d'inéquations variationnelles par approximations successives.** *Revue Française d'Informatique et de Recherche Opérationnelle, Série Rouge* 1970, **4**(3):154–158.
17. Minty GJ: **Monotone (nonlinear) operators in Hilbert space.** *Duke Mathematical Journal* 1962, **29**:341–346. doi:10.1215/S0012-7094-62-02933-2
18. Moroşanu G: *Nonlinear Evolution Equations and Applications, Mathematics and Its Applications (East European Series), Volume 26.* D. Reidel, Dordrecht, The Netherlands; 1988:xii+340.
19. Moudafi A: **Mixed equilibrium problems: sensitivity analysis and algorithmic aspect.** *Computers & Mathematics with Applications* 2002, **44**(8–9):1099–1108. doi:10.1016/S0898-1221(02)00218-3
20. Moudafi A, Théra M: **Finding a zero of the sum of two maximal monotone operators.** *Journal of Optimization Theory and Applications* 1997, **94**(2):425–448. doi:10.1023/A:1022643914538
21. Pang J-S: **Complementarity problems.** In *Handbook of Global Optimization, Nonconvex Optimization and Its Applications, Volume 2.* Edited by Horst R, Pardalos P. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1995:271–338.
22. Robinson SM: **Composition duality and maximal monotonicity.** *Mathematical Programming* 1999, **85**(1):1–13. doi:10.1007/s101070050043
23. Robinson SM: **Linear convergence of epsilon-subgradient descent methods for a class of convex functions.** *Mathematical Programming* 1999, **86**:41–50. doi:10.1007/s101070050078
24. Rockafellar RT: **On the maximal monotonicity of subdifferential mappings.** *Pacific Journal of Mathematics* 1970, **33**:209–216.
25. Rockafellar RT: **Augmented Lagrangians and applications of the proximal point algorithm in convex programming.** *Mathematics of Operations Research* 1976, **1**(2):97–116. doi:10.1287/moor.1.2.97
26. Rockafellar RT, Wets RJ-B: *Variational Analysis.* Springer, Berlin, Germany; 2004.
27. Solodov MV, Svaiter BF: **An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions.** *Mathematics of Operations Research* 2000, **25**(2):214–230. doi:10.1287/moor.25.2.214.12222
28. Solodov MV, Svaiter BF: **Forcing strong convergence of proximal point iterations in a Hilbert space.** *Mathematical Programming* 2000, **87**(1):189–202.
29. Takahashi W: **Approximating solutions of accretive operators by viscosity approximation methods in Banach spaces.** In *Applied Functional Analysis.* Yokohama Publishers, Yokohama, Japan; 2007:225–243.
30. Tossings P: **The perturbed proximal point algorithm and some of its applications.** *Applied Mathematics and Optimization* 1994, **29**(2):125–159. doi:10.1007/BF01204180
31. Tseng P: **Applications of a splitting algorithm to decomposition in convex programming and variational inequalities.** *SIAM Journal on Control and Optimization* 1991, **29**(1):119–138. doi:10.1137/0329006
32. Tseng P: **Alternating projection-proximal methods for convex programming and variational inequalities.** *SIAM Journal on Optimization* 1997, **7**(4):951–965. doi:10.1137/S1052623495279797
33. Tseng P: **A modified forward-backward splitting method for maximal monotone mappings.** *SIAM Journal on Control and Optimization* 2000, **38**(2):431–446. doi:10.1137/S0363012998338806
34. Verma RU: **A fixed-point theorem involving Lipschitzian generalised pseudo-contractions.** *Proceedings of the Royal Irish Academy, Section A* 1997, **97**(1):83–86.
35. Verma RU: **New class of nonlinear -monotone mixed variational inclusion problems and resolvent operator technique.** *Journal of Computational Analysis and Applications* 2006, **8**(3):275–285.
36. Verma RU: **Nonlinear -monotone variational inclusions systems and the resolvent operator technique.** *Journal of Applied Functional Analysis* 2006, **1**(2):183–189.
37. Verma RU: **A-monotonicity and its role in nonlinear variational inclusions.** *Journal of Optimization Theory and Applications* 2006, **129**(3):457–467. doi:10.1007/s10957-006-9079-7
38. Verma RU: **A-monotone nonlinear relaxed cocoercive variational inclusions.** *Central European Journal of Mathematics* 2007, **5**(2):386–396. doi:10.2478/s11533-007-0005-5
39. Verma RU: **Approximation solvability of a class of nonlinear set-valued variational inclusions involving -monotone mappings.** *Journal of Mathematical Analysis and Applications* 2008, **337**(2):969–975. doi:10.1016/j.jmaa.2007.01.114
40. Verma RU: *Nonlinear Approximation Solvability Involving Regular and Demiregular Convergence.* International Publications (USA), Orlando, Fla, USA; 1994.
41. Verma RU: **General projection systems and relaxed cocoercive nonlinear variational inequalities.** *The ANZIAM Journal* 2007, **49**(2):205–212. doi:10.1017/S1446181100012785
42. Verma RU: **General proximal point algorithmic models and nonlinear variational inclusions involving RMM mappings.** Accepted to *Journal of Informatics and Mathematical Sciences*.
43. Verma RU: **General proximal point algorithm involving -maximal accretiveness framework in Banach spaces.** *Positivity* 2009, **13**(4):771–782. doi:10.1007/s11117-008-2268-x
44. Verma RU: **The generalized relaxed proximal point algorithm involving -maximal-relaxed accretive mappings with applications to Banach spaces.** *Mathematical and Computer Modelling* 2009, **50**(7–8):1026–1032. doi:10.1016/j.mcm.2009.04.012
45. Yosida K: *Functional Analysis.* Springer, Berlin, Germany; 1965.
46. Yosida K: **On the differentiability and representation of one-parameter semigroups of linear operators.** *Journal of the Mathematical Society of Japan* 1948, **1**:15–21. doi:10.2969/jmsj/00110015
47. Xu H-K: **Iterative algorithms for nonlinear operators.** *Journal of the London Mathematical Society* 2002, **66**(1):240–256. doi:10.1112/S0024610702003332
48. Zeidler E: **The Ljusternik-Schnirelman theory for indefinite and not necessarily odd nonlinear operators and its applications.** *Nonlinear Analysis: Theory, Methods & Applications* 1980, **4**(3):451–489. doi:10.1016/0362-546X(80)90085-1
49. Zeidler E: **Ljusternik-Schnirelman theory on general level sets.** *Mathematische Nachrichten* 1986, **129**:235–259. doi:10.1002/mana.19861290121
50. Zeidler E: *Nonlinear Functional Analysis and Its Applications—Part 1: Fixed-Point Theorems.* Springer, New York, NY, USA; 1986:xxi+897.
51. Zeidler E: *Nonlinear Functional Analysis and Its Applications—Part 2 A: Linear Monotone Operators.* Springer, New York, NY, USA; 1990:xviii+467.
52. Zeidler E: *Nonlinear Functional Analysis and Its Applications—Part 2 B: Nonlinear Monotone Operators.* Springer, New York, NY, USA; 1990.
53. Zeidler E: *Nonlinear Functional Analysis and Its Applications—Part 3: Variational Methods and Optimization.* Springer, New York, NY, USA; 1985:xxii+662.
54. Zolezzi T: **Continuity of generalized gradients and multipliers under perturbations.** *Mathematics of Operations Research* 1985, **10**(4):664–673. doi:10.1287/moor.10.4.664
55. Zoretti L: **Un théorème de la théorie des ensembles.** *Bulletin de la Société Mathématique de France* 1909, **37**:116–119.

## Copyright information

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.