# Some algorithms for classes of split feasibility problems involving paramonotone equilibria and convex optimization

## Abstract

In this paper, we first introduce a new algorithm, involving only one projection per iteration, for solving a split feasibility problem with paramonotone equilibria and unconstrained convex optimization, and we establish its strong convergence. Second, we revisit this split feasibility problem and replace the unconstrained convex optimization with a constrained convex optimization. We introduce algorithms for two different types of objective function of the constrained convex optimization and prove strong convergence results for the proposed algorithms. Third, we apply our algorithms to finding an equilibrium point with minimal environmental cost for a model of electricity production. Finally, we give some numerical results to illustrate the effectiveness and advantages of the proposed algorithms.

## Keywords

Split feasibility problem · Equilibria · Constrained convex optimization · Practical model

## MSC

47H05 · 47H07 · 47H10 · 54H25

## 1 Introduction and the problem statement

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces with inner product \(\langle\cdot,\cdot\rangle\) and induced norm \(\|\cdot\|\), and let *C* and *Q* be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively.

Censor and Elfving [7] introduced the *split feasibility problem* (shortly, SFP) in Euclidean spaces, which is formulated as follows: find \(x^{*}\in C\) such that \(Ax^{*}\in Q\), where \(A:H_{1}\rightarrow H_{2}\) is a bounded linear operator. Another fundamental problem is the *equilibrium problem* (shortly, EP) in the sense of Blum and Oettli [2]: find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) for all \(y\in C\), where *f* is a bifunction. We denote the solution set of the EP by \(\operatorname{Sol}(EP)\).

Recently, Yen et al. [37] investigated the following *split feasibility problem* involving paramonotone equilibria and convex optimization (shortly, SEO):

### Problem 1.1

Find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) for all \(y\in C\) and \(g(Ax^{*})\leq g(z)\) for all \(z\in H_{2}\),

where *g* is a proper lower semi-continuous convex function on \(H_{2}\). Also, they introduced the following algorithm to solve Problem 1.1:

### Algorithm 1.1

In Algorithm 1.1, \(\operatorname{prox}_{\lambda g}\) denotes the proximal mapping of the convex function *g* with \(\lambda> 0\), and the parameters \(\{a_{k}\}\), \(\{\delta_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\) and \(\{\rho_{k}\}\) are taken as in Algorithm 3.1 (see Sect. 3 below).

Note that Algorithm 1.1 involves two exact projections onto the feasible set *C* at each iteration, which limits the applicability of the method, especially when such projections are hard to compute. It is well known that only in a few specific instances does the projection onto a convex set have an explicit formula. When the feasible set *C* is a general closed convex set, we must solve a nontrivial quadratic problem in order to compute the projection onto *C*.
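For illustration (a sketch, not the paper's code), the following contrasts two of the few convex sets whose projection has a closed form with the general case, in which one must solve the quadratic program \(\min_{z\in C}\frac{1}{2}\|z-x\|^{2}\):

```python
import numpy as np

# Two of the few convex sets whose metric projection has a closed form;
# for a general closed convex C one must solve the quadratic program
# min_z 0.5*||z - x||^2 subject to z in C.
def project_box(x, lo, hi):
    """P_C for the box C = {z : lo <= z <= hi}: componentwise clipping."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """P_C for the ball C = {z : ||z - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

x = np.array([3.0, -2.0])
print(project_box(x, 0.0, 1.0))               # [1. 0.]
print(project_ball(x, np.zeros(2), 1.0))      # a unit vector along x
```

For sets such as an intersection of half-spaces, no closed form exists in general, which is exactly why reducing the number of projections per iteration matters.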

In this paper, by expanding the domain of the function *f*, we introduce a new algorithm which involves just one projection onto *C* per iteration. Also, we revisit Problem 1.1 and replace the unconstrained convex optimization with a constrained convex optimization. Further, we introduce two iterative algorithms to solve the new model and prove some strong convergence results for the proposed algorithms.

The paper is organized as follows: Sect. 2 collects some definitions and lemmas needed for the main results. In Sect. 3, we introduce a new algorithm, which involves one projection per iteration. In Sect. 4, we introduce two algorithms and study their convergence. In Sect. 5, we provide a practical model for an electricity market and some computational results for the model.

## 2 Preliminaries

The following definitions and lemmas are useful for the validity and convergence of the algorithms.

### Definition 2.1

Let *H* be a Hilbert space, \(T:H\rightarrow H\) be a mapping and let \(K\subseteq H\).

- (i) *T* is said to be *nonexpansive* if $$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert $$ for all \(x,y\in H\).
- (ii) *T* is said to be *firmly nonexpansive* if $$ \Vert Tx-Ty \Vert ^{2}\leq\langle x-y,Tx-Ty\rangle $$ for all \(x,y\in H\) or, equivalently, $$ 0\leq\bigl\langle Tx-Ty,(I-T)x-(I-T)y\bigr\rangle $$ for all \(x,y\in H\).
- (iii) *T* is said to be *Lipschitz continuous* with Lipschitz constant *L* if $$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert $$ for all \(x,y\in H\).
- (iv) *T* is said to be *α-averaged* if $$ T=(1-\alpha)I+\alpha S, $$ where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is a nonexpansive mapping.

### Lemma 2.1

([1, Proposition 4.4])

*Let H be a Hilbert space and* \(T: H\rightarrow H\) *be a mapping*. *Then the following are equivalent*:

- (i) *T* *is firmly nonexpansive*;
- (ii) \(I-T\) *is firmly nonexpansive*.

### Lemma 2.2

*The composition of finitely many averaged mappings is averaged*. *In particular*, *if*\(T_{1}\)*is*\(\alpha_{1}\)-*averaged and*\(T_{2}\)*is*\(\alpha_{2}\)-*averaged*, *where*\(\alpha_{1},\alpha_{2}\in(0,1)\), *then the composition*\(T_{1}T_{2}\)*is**α*-*averaged*, *where*\(\alpha=\alpha _{1}+\alpha_{2}-\alpha_{1}\alpha_{2}\).

It is easy to show that firmly nonexpansive mappings are \(\frac{1}{2}\)-averaged, and averaged mappings are nonexpansive.
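As a numerical illustration (not from the paper), the following Python snippet checks the firm-nonexpansiveness inequality \(\|Tx-Ty\|^{2}\leq\langle x-y,Tx-Ty\rangle\) for the proximal mapping of \(\lambda\|\cdot\|_{1}\), i.e., componentwise soft thresholding, a standard firmly nonexpansive mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """prox of lam*||.||_1: a standard firmly nonexpansive mapping."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Check ||Tx - Ty||^2 <= <x - y, Tx - Ty> on random pairs; by the
# Cauchy-Schwarz inequality this implies nonexpansiveness.
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    Tx, Ty = soft_threshold(x, 0.3), soft_threshold(y, 0.3)
    assert np.dot(Tx - Ty, Tx - Ty) <= np.dot(x - y, Tx - Ty) + 1e-12
print("firm nonexpansiveness verified on random samples")
```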

Denote by \(\operatorname{Fix}(T)\) the set of fixed points of a mapping *T*, i.e., \(\operatorname{Fix}(T)=\{x\in H: Tx=x\}\).

Let *H* be a real Hilbert space and *K* be a nonempty convex closed subset of *H*. For each point \(x\in{H}\), there exists a unique nearest point in *K*, denoted by \(P_{K}(x)\), such that

$$ \bigl\Vert x-P_{K}(x) \bigr\Vert \leq \Vert x-y \Vert \quad\text{for all } y\in K. $$

The mapping \(P_{K}\) is called *the metric projection* of *H* onto *K*. It is well known that \(P_{K}\) is a nonexpansive mapping of *H* onto *K* and even a firmly nonexpansive mapping. So \(P_{K}\) is also \(\frac{1}{2}\)-averaged, which is captured in the following lemma:

### Lemma 2.3

*For any* \(x,y\in H\) *and* \(z\in K\), *the following hold*:

- (i) \(\Vert P_{K}(x)-P_{K}(y)\Vert\leq\Vert x-y \Vert\);
- (ii) \(\Vert P_{K}(x)-z\Vert^{2}\leq\Vert x-z\Vert ^{2}-\Vert P_{K}(x)-x\Vert^{2}\).

Some characterizations of the metric projection \(P_{K}\) are given by the two properties in the following lemma:

### Lemma 2.4

*Let* \(x\in H\) *and* \(z\in K\). *Then* \(z=P_{K}(x)\) *if and only if*

$$ \langle x-z,y-z\rangle\leq0 \quad\textit{for all } y\in K. $$

### Lemma 2.5

*Let C be a nonempty closed convex subset in a Hilbert space H and* \(P_{C}\) *be the metric projection of H onto C*. *Then we have*:

- (i) \(\langle x-y,P_{C}(x)-P_{C}(y)\rangle\geq\|P _{C}(x)-P_{C}(y)\|^{2}\) *for all* \(x,y\in H\);
- (ii) \(\|P_{C}(x)-P_{C}(y)\|^{2}\leq\|x-y\|^{2}-\|(I-P_{C})x-(I-P_{C})y\|^{2}\) *for all* \(x,y\in H\).

### Lemma 2.6

*Let* \(\{v^{k}\}\) *and* \(\{\delta_{k}\}\) *be nonnegative sequences of real numbers satisfying* \(v^{k+1}\leq v^{k}+\delta_{k}\) *with* \(\sum_{k=1}^{\infty}\delta_{k}<+\infty\). *Then the sequence* \(\{v^{k}\}\) *is convergent*.
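A quick numerical illustration (not from the paper) of Lemma 2.6: the sequence below is not monotone, but it satisfies \(v^{k+1}\leq v^{k}+\delta_{k}\) with the summable perturbation \(\delta_{k}=1/k^{2}\), and it converges:

```python
# Quasi-Fejer-type sequence: v^{k+1} <= v^k + delta_k with summable delta_k.
# The sequence is not monotone, yet Lemma 2.6 forces convergence (here to 0).
v = 1.0
for k in range(1, 100000):
    delta = 1.0 / k**2                   # summable perturbation
    v = max(v - 0.5 / k, 0.0) + delta    # satisfies v^{k+1} <= v^k + delta_k
print(v)  # close to 0
```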

### Lemma 2.7

*Let H be a real Hilbert space*, \(\{a_{k}\}\) *be a sequence of real numbers such that* \(0< a< a_{k}< b<1\) *for all* \(k\geq1\), *and let* \(\{v^{k}\}\), \(\{w^{k}\}\) *be sequences in H such that*, *for some* \(c>0\),

$$ \limsup_{k\rightarrow+\infty}\bigl\Vert v^{k}\bigr\Vert \leq c,\qquad \limsup_{k\rightarrow+\infty}\bigl\Vert w^{k}\bigr\Vert \leq c $$

*and*

$$ \lim_{k\rightarrow+\infty}\bigl\Vert a_{k}v^{k}+(1-a_{k})w^{k}\bigr\Vert =c. $$

*Then* \(\lim_{k\rightarrow+\infty}\|v^{k}-w^{k}\|=0\).

### Definition 2.2

([28])

The *normal cone* of *K* at \(v\in K\), denoted by \(N_{K}(v)\), is defined as follows:

$$ N_{K}(v)=\bigl\{ w\in H: \langle w,u-v\rangle\leq0 \text{ for all } u\in K\bigr\} . $$

### Definition 2.3

([1, Definition 16.1])

The *subdifferential set* of a convex function *c* at a point *x* is defined as follows:

$$ \partial c(x)=\bigl\{ u\in H: c(y)\geq c(x)+\langle u,y-x\rangle \text{ for all } y\in H\bigr\} . $$

Denote by \(i_{K}\) the *indicator function* of the set *K*, i.e., \(i_{K}(x)=0\) if \(x\in K\) and \(i_{K}(x)=+\infty\) otherwise. Note that \(\partial i_{K}(x)=N_{K}(x)\) for each \(x\in K\).

Let \(f:H\times H\rightarrow\mathbb{R}\) be a bifunction. We need the following assumptions on \(f(x,y)\) for our algorithms and convergence:

(A1) For each \(x\in C\), \(f(x,x)=0\) and \(f(x,\cdot)\) is lower semi-continuous and convex on *C*;

(A2) For each \(x\in C\), \(\partial_{2}^{\epsilon}f(x,x)\) is nonempty and bounded on every bounded subset of *C*, where \(\partial_{2}^{\epsilon}f(x,x)\) denotes the *ϵ*-subdifferential of the convex function \(f(x,\cdot)\) at *x*, that is,

$$ \partial_{2}^{\epsilon}f(x,x)=\bigl\{ u\in H: f(x,y)+\epsilon\geq\langle u,y-x\rangle \text{ for all } y\in C\bigr\} ; $$

(A3) *f* is pseudo-monotone on *C* with respect to every solution of the EP, that is, \(f(x,x^{*})\leq0\) for any \(x\in C\) and \(x^{*}\in\operatorname{Sol}(EP)\), and *f* satisfies the following condition, which is called the *para-monotonicity property*:

$$ x^{*}\in\operatorname{Sol}(EP),\qquad y\in C,\qquad f\bigl(x^{*},y\bigr)=f\bigl(y,x^{*}\bigr)=0 \quad\Longrightarrow\quad y\in\operatorname{Sol}(EP); $$

(A4) For all \(x \in C\), \(f(\cdot,x)\) is weakly upper semi-continuous on *C*.

## 3 A new algorithm for Problem 1.1 and its convergence analysis

In this section we give a new algorithm for Problem 1.1 and analyze its convergence.

Recall that the *proximal mapping* of the convex function *g* with \(\lambda> 0\), denoted by \(\operatorname{prox}_{\lambda g}\), is defined as the unique solution of the strongly convex programming problem:

$$ \operatorname{prox}_{\lambda g}(u)=\mathop{\operatorname{argmin}} \biggl\{ \lambda g(z)+\frac{1}{2} \Vert z-u \Vert ^{2}: z\in H_{2} \biggr\} . \quad \bigl(P(u)\bigr) $$

Define \(h(x)=\frac{1}{2}\|(I-\operatorname{prox}_{\lambda g})Ax\|^{2}\). Then \(h(x)=0\) if and only if *Ax* solves \(P(u)\) with \(u = Ax\), i.e., \(Ax\in\mathop{\operatorname{argmin}}g\). Note that, even though *g* may not be differentiable, *h* is always differentiable and \(\nabla h(x) = A^{*}(I-\operatorname{prox} _{\lambda g})Ax\) (see, for example, [28]).
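The gradient formula \(A^{*}(I-\operatorname{prox}_{\lambda g})Ax\) can be probed numerically. As a sketch (not the authors' code), take \(g=\|\cdot\|_{1}\), whose proximal mapping is componentwise soft thresholding and whose Moreau envelope \(g_{\lambda}\) is the Huber function; for the envelope-based residual \(h(x)=\lambda g_{\lambda}(Ax)\), the identity \(\nabla h(x)=A^{*}(I-\operatorname{prox}_{\lambda g})Ax\) holds exactly [28], and a finite-difference check confirms it:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 3))
lam = 0.5

def prox_l1(u, lam):
    """prox_{lam*||.||_1}(u): componentwise soft thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def moreau_env_l1(u, lam):
    """Moreau envelope g_lam of g = ||.||_1: the Huber function, summed."""
    t = np.abs(u)
    return np.sum(np.where(t <= lam, t**2 / (2 * lam), t - lam / 2))

def h(x):
    # h(x) = lam * g_lam(Ax); differentiable even though g is not.
    return lam * moreau_env_l1(A @ x, lam)

def grad_h(x):
    # nabla h(x) = A^T (I - prox_{lam g})(Ax)
    return A.T @ (A @ x - prox_l1(A @ x, lam))

x = rng.normal(size=3)
eps = 1e-6
fd = np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps) for e in np.eye(3)])
assert np.allclose(fd, grad_h(x), atol=1e-4)   # finite differences agree
```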

### Algorithm 3.1

Choose positive parameters *δ*, *ξ* and real sequences \(\{a_{k}\}\), \(\{\delta_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{\rho_{k}\}\) satisfying the following conditions for each \(k\in\mathbb{N}\):

*Step* 1. Choose \(x^{1}\in C\) and let \(k:=1\).

*Step* *k*. Given \(x^{k}\in C\), compute

### Remark 3.1

It is obvious that Algorithm 3.1 involves only one projection onto *C* per iteration. Note that the domain of the function *f* is \(H\times H\).

### Lemma 3.1

([24])

*Let S be the set of solutions of Problem* 1.1 *and* \(y \in S\). *If* \(\nabla h(x^{k})\neq0\), *then*

### Lemma 3.2

([29])

*For each*\(k\geq1\),

*the following inequalities hold*:

- (i)
\(\alpha_{k}\|\eta_{k}\|\leq\beta_{k}\);

- (ii)
\(\|z^{k}-y^{k}\|\leq\beta_{k}\).

### Lemma 3.3

*Let* \(y\in S\). *Then*, *for each* \(k\geq1\) *such that* \(\nabla h(x^{k}) \neq0\), *we have*

*and*, *for each* \(k\geq1\) *such that* \(\nabla h(x^{k})=0\), *we have*

*where* \(A_{k}=2(1-a_{k})(\alpha_{k}\epsilon_{k}+\beta_{k}^{2})\).

### Proof

Now, we consider two cases:

*Case* 1. If \(\nabla h(x^{k})\neq0\), then, thanks to Lemma 3.1, we have

*Case* 2. If \(\nabla h(x^{k})=0\), then, by the definition of \(y^{k}\), we can write \(y^{k}=x^{k}\). Now, by the same argument as in Case 1, we have

### Theorem 3.1

*Suppose that Problem*1.1*admits a solution*. *Then*, *under Assumptions* (A1)*–*(A4), *the sequence*\(\{x^{k}\}\)*generated by Algorithm*3.1*strongly converges to a solution of Problem*1.1.

### Proof

*Claim* 1. The sequence \(\{\|x^{k}-y\|^{2}\}\) is convergent for all \(y\in S\). Indeed, let \(y\in S\). Since \(y\in \operatorname{Sol}(EP)\) and *f* is pseudomonotone on *C* with respect to every solution of \((EP)\), we have

*Claim*2. \(\limsup_{k\rightarrow\infty}f(y^{k},y)=0\) for all \(y\in S\). By Lemma 3.3, for each \(k\geq1\), we have

Since *y* is a solution, by pseudomonotonicity of *f*, we have \(-f(y^{k},y)\geq0\), which together with \(0< a< a_{k}< b<1\) implies

*Claim*3. For any \(y\in S\), suppose that \(\{y^{k_{j}} \}\) is the subsequence of \(\{y^{k}\}\) such that

Since *f* is pseudomonotone, we have \(f(y^{*},y)\leq0\) and so \(f(y^{*},y)=0\). Again, by pseudomonotonicity of *f*, \(f(y,y^{*}) \leq0\) and hence \(f(y^{*},y)=f(y,y^{*})=0\). Then, by paramonotonicity (Assumption (A3)), we can conclude that \(y^{*}\) is also a solution of \((EP)\).

*Claim* 4. Every weak cluster point *x̄* of the sequence \(\{x^{k}\}\) satisfies \(\bar{x}\in K\) and \(A\bar{x}\in \mathop {\operatorname {argmin}}g\). Let *x̄* be a weak cluster point of \(\{x^{k}\}\) and \(\{x^{k_{j}}\}\) be a subsequence of \(\{x^{k}\}\) weakly converging to *x̄*. Then \(\bar{x}\in K\). From Lemma 3.3, if \(\nabla h(x^{k})\neq0\), then we have

Since ∇*h* is Lipschitz continuous with constant \(\|A\|^{2}\), we see that \(\|\nabla h(x^{k})\|^{2}\) is bounded. So \(h(x^{k})\rightarrow0\) as \(k\in N_{1}\) and \(k\rightarrow\infty\). Note that \(h(x^{k})=0\) for \(k\notin N_{1}\). Consequently, we have

By the weak lower semicontinuity of *h*, we get \(h(\bar{x})=0\), and hence *Ax̄* is a fixed point of the proximal mapping of *g*. Thus *Ax̄* is a minimizer of *g*. From (8) and the fact that \(\|\nabla h(x^{k})\|^{2}\) is bounded, it follows that

*x̄*.

*Claim* 5. \(\lim_{k\rightarrow+\infty}x^{k}= \lim_{k\rightarrow+\infty}y^{k}=\lim_{k\rightarrow+\infty}P(x^{k})=x ^{*}\), where \(x^{*}\) is a weak cluster point of the sequence satisfying (7). From Claims 3 and 4, we can deduce that \(x^{*}\) belongs to *S*. By Claim 1, we can assume that

## 4 Algorithms and convergence analysis

In [37], Yen et al. presented an application of Problem 1.1 to a model of electricity production, in which *z* denotes the quantity of the materials and \(g(z)\) is the total environmental fee that companies have to pay for environmental pollution while using materials *z* for production. So, from \(x\in C\), it follows that \(z=Ax\in\{z: z=Ax, x\in C\}\).

In practice, the quantity of materials *z* is usually required to belong to a nonempty closed convex set *Q* of \(H_{2}\). Therefore, it is necessary to replace the unconstrained convex optimization problem \(\min_{z\in H_{2}}g(z)\) with the *constrained convex optimization problem* \(\min_{z\in Q}g(z)\), which leads to the following problem:

### Problem 4.1

Find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) for all \({y}\in C\), \(Ax^{*}\in Q\) and \(g(Ax^{*})\leq g(z)\) for all \(z\in Q\).

In this section, we consider two cases, in which the objective function *g* of the constrained convex optimization is differentiable or non-differentiable, respectively. The corresponding algorithms and their convergence results are provided next.

### 4.1 The case when *g* is differentiable

We need to make the following assumption on the mapping *g*:

(B) *g* is *L*-Lipschitz differentiable with \(L>0\), i.e.,

$$ \bigl\Vert \nabla g(x)-\nabla g(y)\bigr\Vert \leq L \Vert x-y \Vert \quad\text{for all } x,y\in H_{2}. $$

It is well known that the constrained minimization problem \(\min_{z\in Q}g(z)\) is equivalent to the following *variational inequality problem*: find \(z^{*}\in Q\) such that

$$ \bigl\langle \nabla g\bigl(z^{*}\bigr),z-z^{*}\bigr\rangle \geq0 \quad\text{for all } z\in Q, $$

which is in turn equivalent to the following *fixed point problem*:

$$ z^{*}=P_{Q}(I-\nu\nabla g)z^{*}, $$

where \(\nu>0\).
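The fixed point characterization suggests the classical projected gradient iteration \(z^{k+1}=P_{Q}(z^{k}-\nu\nabla g(z^{k}))\). A minimal Python sketch under hypothetical data (\(g(z)=\frac{1}{2}\|z-c\|^{2}\), so \(L=1\), and *Q* a box):

```python
import numpy as np

# g(z) = 0.5*||z - c||^2 (Lipschitz constant L = 1), Q = box [0,1]^2.
# The constrained minimizer of g over Q is P_Q(c).
c = np.array([1.7, -0.4])
project_Q = lambda z: np.clip(z, 0.0, 1.0)
grad_g = lambda z: z - c

nu = 1.0                # any nu in (0, 2/L)
z = np.zeros(2)
for _ in range(50):
    # At a fixed point, z = P_Q(z - nu*grad_g(z)), i.e., z solves the VI.
    z = project_Q(z - nu * grad_g(z))

print(z)                # [1. 0.] = P_Q(c)
```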

### Algorithm 4.1

Take the real sequences \(\{a_{k}\}\), \(\{\delta_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\) and \(\{\rho_{k}\}\) as in Algorithm 3.1.

*Step* 1. Choose \(x^{1}\in C\) and let \(k:=1\).

*Step* *k*. Given \(x^{k}\in C\), compute

Now, we need the following lemmas to prove the convergence of Algorithm 4.1:

### Lemma 4.1

([8, Lemma 6.2])

*Assume that the function* \(g:H_{2}\rightarrow\mathbb{R}\) *satisfies Assumption* (B) *and* \(\nu\in(0,\frac{2}{L})\). *Let* \(y\in\varGamma\). *If* \(\|l(x^{k})\|\neq0\), *then it follows that*

### Proof

Since \(y\in\varGamma\), *Ay* is a fixed point of *T*. From the proof of [32, Theorem 4.1], it follows that *T* is \(\frac{2+\nu L}{4}\)-averaged and so it is nonexpansive. By (13) and Lemma 2.3(i), we have

By the fixed point property of *T* and (2), we have

### Remark 4.1

From (15), it follows that \(l(x)=0\) implies \(h(x)=0\).

Using Lemma 4.1 and following the lines of the proof of Lemma 3.3, we have the following:

### Lemma 4.2

*Let* \(y\in\varGamma\). *Then*, *for each* \(k\geq1\) *such that* \(l(x^{k}) \neq0\), *we have*

*and*, *for each* \(k\geq1\) *such that* \(l(x^{k})=0\), *we have*

*where* \(A_{k}=(1-a_{k})(\alpha_{k}\epsilon_{k}+\beta_{k}^{2})\).

Next we establish the convergence of Algorithm 4.1.

### Theorem 4.1

*Under Assumptions* (A1)*–*(A4) *and* (B), *the sequence*\(\{x^{k}\}\)*generated by Algorithm*4.1*strongly converges to a solution of Problem*4.1.

The proof of Theorem 4.1 is similar to that of Theorem 3.1, so we omit it here.

The only thing to note about the proof of Theorem 4.1 is that from \(h(\bar{x})=0 \) it follows that *Ax̄* is a fixed point of \(P_{Q}(I-\nu\nabla g)\). Thus *Ax̄* is a solution of (9).

### 4.2 The case when *g* is non-differentiable

The *Moreau–Yosida approximate* of the function *g* with the parameter *λ* is defined by

$$ g_{\lambda}(x)=\min_{y\in H_{2}} \biggl\{ g(y)+\frac{1}{2\lambda} \Vert y-x \Vert ^{2} \biggr\} . $$

Instead of \(\min_{x\in Q}g(x)\), we consider the regularized constrained problem \(\min_{x\in Q}g_{\lambda}(x)\) (16). It is easy to see that the solution of (16) converges to that of \(\min_{x\in Q}g(x)\) as \(\lambda\rightarrow\infty\).

For the problem defined in (16), we have the following result.

### Lemma 4.3

*The constrained optimization problem*

*is equivalent to the fixed point formulation*

*where*\(\nu\in(0,+\infty)\).

### Proof

Note that the subdifferential of the indicator function of *C* at a point *x* is \(N_{C}(x)\). The inclusion (20) in turn yields (18). This completes the proof. □

Similar to Algorithm 3.1, using Lemma 4.3, we introduce the following algorithm:

### Algorithm 4.2

Take the real sequences \(\{a_{k}\}\), \(\{\delta_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\) and \(\{\rho_{k}\}\) as in Algorithm 3.1. Take a positive parameter *ν*.

*Step* 1. Choose \(x^{1}\in C\) and let \(k:=1\).

*Step* *k*. Given \(x^{k}\in C\), compute

### Remark 4.2

We need the following lemmas for the proof of the convergence of Algorithm 4.2.

### Lemma 4.4

*Let* \(\nu\in(0,1]\). *Then the operator* \(P_{Q}(I-\nu(I-\operatorname{prox}_{\lambda g}))\) *is nonexpansive*.

### Proof

By the fact that \(\operatorname{prox}_{\lambda g}\) is firmly nonexpansive and Lemma 2.1, \(I-\operatorname{prox}_{\lambda g}\) and \(\nu(I-\operatorname{prox}_{\lambda g})\) are also firmly nonexpansive. So, using Lemma 2.1 again, \(I-\nu(I-\operatorname{prox}_{\lambda g})\) is firmly nonexpansive. Thus, from Lemma 2.2, it follows that \(P_{Q}(I-\nu (I-\operatorname{prox} _{\lambda g}))\) is \(\frac{3}{4}\)-averaged and hence nonexpansive. This completes the proof. □
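The nonexpansiveness established in Lemma 4.4 is easy to probe numerically. A sketch with hypothetical choices, taking \(\operatorname{prox}_{\lambda g}\) to be soft thresholding (i.e., \(g=\|\cdot\|_{1}\)) and *Q* a box:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, nu = 0.7, 1.0                     # hypothetical parameters, nu in (0,1]
prox = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # prox of lam*||.||_1
P_Q = lambda u: np.clip(u, -1.0, 1.0)                           # Q = [-1,1]^n
T = lambda u: P_Q(u - nu * (u - prox(u)))                       # the Lemma 4.4 operator

# Check ||T(x) - T(y)|| <= ||x - y|| on random pairs.
for _ in range(1000):
    x, y = 3.0 * rng.normal(size=4), 3.0 * rng.normal(size=4)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
print("nonexpansiveness verified on random samples")
```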

Using Lemma 4.4 and following the proof of Theorem 4.1, we obtain the convergence result of Algorithm 4.2.

### Theorem 4.2

*Let*\(\nu\in(0,1]\). *Then*, *under Assumptions* (A1)*–*(A4), *the sequence*\(\{x^{k}\}\)*generated by Algorithm*4.2*strongly converges to a solution of Problem*4.1.

The proof of Theorem 4.2 is similar to that of Theorem 3.1, so here we omit it.

One thing to note about the proof of Theorem 4.2 is that from \(h( \bar{x})=0 \) it follows that *Ax̄* is a fixed point of \(P_{Q}(I-\nu(I-\operatorname{prox}_{\lambda g}))\). Thus, by Lemma 4.3, *Ax̄* is a solution of (17).

## 5 Numerical examples

In this section, we provide two numerical examples to compare different algorithms. All programs are written in Matlab version 7.0 and performed on a desktop PC with Intel(R) Core(TM) i5-4200U CPU @ 2.30 GHz, RAM 4.00 GB.

### Example 5.1

First, we consider an equilibrium-optimization model which was investigated by Yen et al. [37]. This model can be regarded as an extension of a Nash–Cournot oligopolistic equilibrium model in electricity markets. The latter model has been investigated in some research papers (see, for example, [10, 27]).

Consider an electricity market with *n* companies. Let *x* denote the vector whose entry \(x_{i}\) stands for the power generated by company *i*. Following Contreras et al. [10], we suppose that the price \(p_{i}(s)\) is a decreasing affine function of *s* with \(s = \sum_{i=1}^{n} x_{i}\). Then the profit made by company *i* is given by \(f_{i}(x)=p_{i}(s)x_{i}-c_{i}(x_{i})\), where \(c_{i}(x_{i})\) is the cost of company *i* when its generation level is \(x_{i}\).

Suppose that \(C_{i}\) is the strategy set of company *i*, that is, condition \(x_{i}\in C_{i}\) must be satisfied for each *i*. Then the strategy set of the model is \(C:=C_{1}\times C_{2}\times\cdots \times C_{n}\).

Actually, each company seeks to maximize its profit by choosing its production level under the presumption that the production of the other companies is a parametric input. A commonly used approach to this model is based upon the famous Nash equilibrium concept.

A point \(x^{*}\in C\) is called an *equilibrium point* of the model if

$$ f_{i}\bigl(x^{*}\bigr)\geq f_{i}\bigl(x^{*}[x_{i}]\bigr) \quad\text{for all } x_{i}\in C_{i}, i=1,\dots,n, $$

where \(x^{*}[x_{i}]\) stands for the vector obtained from \(x^{*}\) by replacing the entry \(x^{*}_{i}\) with \(x_{i}\).

In [37], Yen et al. extended this equilibrium model by additionally assuming that the companies use some materials to produce electricity.

Let \(a_{l,i}\) denote the quantity of material *l*\((l = 1,\dots, m)\) for producing one unit of electricity by company *i*\((i=1, \dots,n)\). Let *A* be the matrix whose entries are \(a_{l,i}\). Then entry *l* of the vector *Ax* is the quantity of material *l* for producing *x*. Using materials for production may cause environmental pollution, for which companies have to pay a fee. Suppose that \(g(Ax)\) is the total environmental fee for producing *x*.

Suppose, in addition, that the quantity of materials used is required to belong to a given set *Q*. Then the problem of finding an equilibrium point can be formulated as the *split feasibility problem* of the following form:

Suppose that, for every *i*, the production cost \(c_{i}\) and the environmental fee *g* are increasing convex functions. The convexity assumption here means that both the cost and the fee for producing a unit of product increase as the quantity of the product gets larger.

Then the equilibrium model with the bifunction *f* given by (22) can be formulated as follows:

In the case when \(c_{i}\) is differentiable for each *i*, problem (23) is equivalent to the following *variational inequality problem*:

In [37], the authors denoted by \(g(z)\) the total environmental fee. This is unreasonable for two reasons. First, the total environmental fee should be included in the cost, that is, it is a part of \(c_{i}(x_{i})\). Second, the companies are supposed to behave as players in an oligopolistic market, yet at the same time they would be subordinated to a centralized planning decision minimizing the total environmental fee for the whole system. That is, the model is not concordant with the real behavior of the system.

It may be more reasonable to let \(g(z)\) describe the restriction on the emission of contaminants. To protect the environment, governments generally adopt policies that restrict the emission of contaminants.

Assume that the production of electricity generates *p* contaminants and governments require that the quantity of contaminants generated by producing one unit of electricity lie in a given region. We use a set \(K\subset\mathbb{R}^{p}\) to denote this region.

Let \(b_{k,l}\) denote the quantity of contaminant *k*\((k = 1, \dots, p)\) generated by consuming one unit of material *l*\((l = 1,\dots, m)\), and let *B* be the matrix whose entries are \(b_{k,l}\). Then entry *k* of the vector *Bz* is the quantity of contaminant *k* generated by consuming the materials *z*. So the quantity of contaminant *k*\((k = 1,\dots, p)\) generated in producing *x* is entry *k* of \(BAx\), and \(BAx\) should lie in the set *K*, i.e., \(BAx\in K\). Letting \(z=Ax\), we get \(Bz\in K\).
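The linear bookkeeping above is straightforward to implement; the following sketch (hypothetical dimensions, and a hypothetical box-type cap standing in for *K*) traces production to materials to contaminants:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 10, 15, 4                      # companies, materials, contaminants (hypothetical)
A = rng.uniform(0, 5, size=(m, n))       # a_{l,i}: material l per unit produced by company i
B = rng.uniform(0, 1, size=(p, m))       # b_{k,l}: contaminant k per unit of material l

x = rng.uniform(0, 2, size=n)            # production levels
z = A @ x                                # materials consumed, z = Ax
emissions = B @ z                        # contaminant quantities, = B A x

# Hypothetical box-type region K = {w : w <= cap}; feasibility means BAx in K.
cap = emissions.max() + 1.0
print(np.all(emissions <= cap))          # True for this cap
```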

The entries of the matrix *A* were randomly generated in the interval \([0, 5]\). In the bifunction \(f(x,y)\) defined by (23), (24) and (25), we set \(\alpha=0.5\), while \(b_{i}\), \(p_{i}\) and \(q_{i}\) for each \(i = 1,\dots, n\) were generated randomly in the intervals \((0, 1]\), \([1, 3]\) and \([1, 3]\), respectively. In the function \(g(z)\), we take \(B\in\mathbb{R}^{p\times m}\) with entries generated randomly in \((0,1)\).

Since the function \(g(z)\) is differentiable, we use Algorithm 4.1 to solve Problem 4.1 and compare it with Algorithms 1.1 and 3.1. In Algorithms 1.1 and 3.1, we substitute \(\operatorname{prox}_{\lambda g}\) with \(I-\nu\nabla g\) and do not consider the constraint set *Q*.

Figures 1 and 2 show the behavior of the algorithms at each iteration *k* in terms of error1\((k):= \|x^{k}-x ^{k-1}\|\) and error2\((k):= \|Ax^{k}-P_{Q}(Ay^{k})\|\), respectively. We solve the model with \(m=15\) materials and take \(n=10\) as the number of companies.

From Figs. 1 and 2, we have two conclusions as follows:

(a) The “error1” of Algorithm 4.1 is smaller than that of Algorithms 1.1 and 3.1 and the “error1” of Algorithm 3.1 is slightly smaller than that of Algorithm 1.1.

(b) The “error2” of Algorithm 4.1 decreases with the iteration number *k*, while the “error2” of Algorithms 1.1 and 3.1 increases with the iteration number *k*. The “error2” of Algorithm 4.1 is smaller than those of Algorithms 1.1 and 3.1.

Next we give a numerical experiment in an infinite-dimensional space and compare Algorithm 4.1 with a numerical algorithm based on the Halpern modification of [8, Algorithm 6.1], stated as follows:

### Algorithm 5.1

In Algorithm 5.1, *L* is the spectral radius of the operator \(A^{*}A\), denoted by \(\rho(A^{*}A)\), and the parameter *λ* depends on the constants of the inverse strong monotonicity of ∇*g* and *f*.

According to the condition of the convergence of Halpern-type algorithm, we assume that \(\lim_{k\rightarrow\infty} \tau_{k}=0\) and \(\sum_{k=1}^{\infty}\tau_{k}=\infty\).

### Example 5.2

As shown in [30], *F* is monotone and *L*-Lipschitz-continuous with \(L=2\). Let \(f(x(t), y(t))=\langle Fx(t),y(t)-x(t)\rangle\), \(g(x)(t)=\frac{1}{2}\|x(t)\|^{2}\) and \((Ax)(t)=3x(t)\) for all \(x\in H\).

The inverse strong monotonicity constants of ∇*g* and *f* are unknown. Take \(\tau_{k}=\frac{1}{k+1}\) and \(\gamma=\frac{0.9}{\rho(A^{*}A)}\) for Algorithm 5.1. We use \(\operatorname{error}=\frac{1}{2}\| P_{C}(x^{k})-x^{k}\|^{2}+ \frac{1}{2}\|P_{Q}(Ax^{k})-Ax^{k}\|^{2}\) to measure the error of the *k*th iteration.
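This residual is zero exactly at points feasible for both constraints. A finite-dimensional stand-in (not the paper's \(L^{2}\) setting): *C* and *Q* are taken as unit balls so that their projections are explicit, and *A* is the matrix of \(x\mapsto3x\):

```python
import numpy as np

def proj_ball(u):
    """Projection onto the closed unit ball."""
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

A = 3.0 * np.eye(2)                      # finite-dim stand-in for (Ax)(t) = 3x(t)

def error(x):
    """0.5*||P_C(x) - x||^2 + 0.5*||P_Q(Ax) - Ax||^2."""
    rC = proj_ball(x) - x
    rQ = proj_ball(A @ x) - A @ x
    return 0.5 * rC @ rC + 0.5 * rQ @ rQ

print(error(np.array([0.2, 0.1])))       # 0.0: x in C and Ax in Q
print(error(np.array([2.0, 0.0])))       # positive: both constraints violated
```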

## 6 Conclusions

We first introduce a new algorithm, which involves only one projection onto the feasible set per iteration, and show its strong convergence. We also improve the model proposed in [37] by adding a constraint to the minimization problem of the total environmental fee. Two algorithms are introduced to approximate the solution, and their strong convergence is analyzed.

## Notes

### Acknowledgements

We sincerely thank Prof. S. He for his helpful discussion and the reviewers for their valuable suggestions and useful comments that have led to the present improved version of the original manuscript.

### Availability of data and materials

Data sharing not applicable to this article as no datasets were generated during the current study.

### Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

### Funding

The first author was supported by the scientific research project of Tianjin Municipal Education Commission (No. 2018KJ253). The fifth author was supported by the Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The authors acknowledge the financial support provided by King Mongkut’s University of Technology Thonburi through the “KMUTT 55th Anniversary Commemorative Fund”. Furthermore, Poom Kumam was supported by the Thailand Research Fund (TRF) and the King Mongkut’s University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047).

### Competing interests

The authors declare that they have no competing interests.

## References

- 1. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
- 2. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. **63**, 123–145 (1994)
- 3. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. **20**, 103–120 (2004)
- 4. Ceng, L.C.: Approximation of common solutions of a split inclusion problem and a fixed-point problem. J. Appl. Numer. Optim. **1**, 1–12 (2019)
- 5. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. **51**, 2353–2365 (2006)
- 6. Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. **21**, 2071–2084 (2005)
- 7. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms **8**, 221–239 (1994)
- 8. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms **59**, 301–323 (2012)
- 9. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization **53**, 475–504 (2004)
- 10. Contreras, J., Klusch, M., Krawczyk, J.B.: Numerical solution to Nash–Cournot equilibria in coupled constraint electricity markets. IEEE Trans. Power Syst. **19**, 195–206 (2004)
- 11. Crombez, G.: A geometrical look at iterative methods for operators with fixed points. Numer. Funct. Anal. Optim. **26**, 157–175 (2005)
- 12. Crombez, G.: A hierarchical presentation of operators with fixed points on Hilbert spaces. Numer. Funct. Anal. Optim. **27**, 259–277 (2006)
- 13. Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, T.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. **70**, 687–704 (2018)
- 14. Dong, Q.L., He, S., Zhao, J.: Solving the split equality problem without prior knowledge of operator norms. Optimization **64**, 1887–1906 (2015)
- 15. Dong, Q.L., Lu, Y.Y., Yang, J.: The extragradient algorithm with inertial effects for solving the variational inequality. Optimization **65**, 2217–2226 (2016)
- 16. Dong, Q.L., Tang, Y.C., Cho, Y.J., Rassias, T.M.: “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. **71**, 341–360 (2018)
- 17. Dong, Q.L., Yao, Y., He, S.: Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces. Optim. Lett. **8**, 1031–1046 (2014)
- 18. Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. **12**, 87–102 (2018)
- 19. Fan, K.: Fixed point and minimax theorems in locally convex topological linear spaces. Proc. Natl. Acad. Sci. USA **38**, 121–126 (1952)
- 20. He, S.N., Tian, H.L.: Selective projection methods for solving a class of variational inequalities. Numer. Algorithms **80**, 617–634 (2019)
- 21. He, S.N., Tian, H.L., Xu, H.K.: The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. **19**, 1199–1215 (2018)
- 22. Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2000)
- 23. Konnov, I.V.: The method of pairwise variations with tolerances for linearly constrained optimization problems. J. Nonlinear Var. Anal. **1**, 25–41 (2017)
- 24. Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. **8**, 2099–2110 (2014)
- 25. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. **18**, 1159–1166 (1992)
- 26. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. **1**, 123–231 (2013)
- 27. Quoc, T.D., Muu, L.D.: Iterative methods for solving monotone equilibrium problems via dual gap functions. Comput. Optim. Appl. **51**, 709–728 (2012)
- 28. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
- 29. Santos, P., Scheimberg, S.: An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. **30**, 91–107 (2011)
- 30. Shehu, Y., Dong, Q.L., Jiang, D.: Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization **68**, 385–409 (2019)
- 31. Xiao, Y.B., Huang, N.J., Cho, Y.J.: A class of generalized evolution variational inequalities in Banach spaces. Appl. Math. Lett. **25**, 914–920 (2012)
- 32. Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. **150**, 360–378 (2011)
- 33. Yao, Y., Leng, L., Postolache, M., Zheng, X.: Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. **18**, 875–882 (2017)
- 34. Yao, Y., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. **10**, 843–854 (2017)
- 35. Yao, Y., Yao, J.C., Liou, Y.C., Postolache, M.: Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpath. J. Math. **34**, 459–466 (2018)
- 36. Yao, Y.H., Postolache, M., Liou, Y.C.: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. **2013**, Article ID 201 (2013)
- 37. Yen, L.H., Muu, L.D., Huyen, N.T.T.: An algorithm for a class of split feasibility problems: application to a model in electricity production. Math. Methods Oper. Res. **84**, 549–565 (2016)
- 38. Zhao, J.: Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization **64**, 2619–2630 (2015)
- 39. Zhao, J., Zong, H.: Iterative algorithms for solving the split feasibility problem in Hilbert spaces. J. Fixed Point Theory Appl. **21**, 11 (2018)

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.