Mathematical Programming, Volume 169, Issue 1, pp 95–118

# Accelerating the DC algorithm for smooth functions

• Francisco J. Aragón Artacho
• Ronan M. T. Fleming
• Phan T. Vuong
Open Access · Full Length Paper, Series B

## Abstract

We introduce two new algorithms to minimise smooth difference of convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and the number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology.

## Keywords

DC function · DC programming · DC algorithm · Łojasiewicz property · Biochemical reaction networks

## Mathematics Subject Classification

65K05 · 65K10 · 90C26 · 92C42

## 1 Introduction

Many problems arising in science and engineering applications require the development of algorithms to minimise a nonconvex function. If a nonconvex function admits a decomposition, this may be exploited to tailor specialised optimisation algorithms. Our main focus is the following optimisation problem
\begin{aligned} \underset{x\in \mathbb {R}^{m}}{\text {minimise}}\;\phi (x):=f_{1}(x)-f_{2}(x), \end{aligned}
(1)
where $$f_{1},f_{2}:\mathbb {R}^{m}\rightarrow \mathbb {R}$$ are continuously differentiable convex functions and
\begin{aligned} \inf _{x\in \mathbb {R}^{m}}\phi (x)>-\infty . \end{aligned}
(2)
In our case, as we shall see in Sect. 4, this problem arises in the study of biochemical reaction networks. In general, $$\phi$$ is a nonconvex function. The function in problem (1) belongs to two important classes of functions: the class of functions that can be decomposed as a sum of a convex function and a differentiable function (composite functions) and the class of functions that are representable as difference of convex functions (DC functions).
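For a concrete illustration (our own toy example, not one from the paper), the nonconvex function $$\phi (x)=x^{4}-x^{2}$$ admits the DC decomposition
\begin{aligned} \phi (x)=\underbrace{x^{4}}_{f_{1}(x)}-\underbrace{x^{2}}_{f_{2}(x)}, \end{aligned}
where both $$f_{1}$$ and $$f_{2}$$ are smooth and convex, while $$\phi$$ itself is nonconvex, with two global minima at $$x=\pm 1/\sqrt{2}$$ and a local maximum at $$x=0$$.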

In 1981, Fukushima and Mine [7, 17] introduced two algorithms to minimise a composite function. In both algorithms, the main idea is to linearly approximate the differentiable part of the composite function at the current point and then minimise the resulting convex function to find a new point. The difference between the new and current points provides a descent direction for the composite function evaluated at the current point. The next iteration is then obtained through a line search procedure along this descent direction. Algorithms for minimising composite functions have been extensively investigated and have found application in many problems, such as inverse covariance estimation, logistic regression, sparse least squares and feasibility problems; see e.g. [9, 14, 15, 19] and the references quoted therein.

In 1986, Pham Dinh and El Bernoussi [23] introduced an algorithm to minimise DC functions. In its simplified form, the Difference of Convex functions Algorithm (DCA) linearly approximates the concave part of the objective function ($$-f_{2}$$ in (1)) at the current point and then minimises the resulting convex approximation to the DC function to find the next iteration, without recourse to a line search. The main idea is similar to the Fukushima–Mine approach, but extended to the non-differentiable case. This algorithm has been extensively studied by Le Thi, Pham Dinh and their collaborators, see e.g. [10, 12, 13, 22]. DCA has been successfully applied in many fields, such as machine learning, financial optimisation, supply chain management and telecommunication [6, 10, 24]. Nowadays, DC programming plays an important role in nonconvex programming, and DCA is commonly used because of its key advantages: simplicity, inexpensiveness and efficiency [10]. Some results related to the convergence rate for special classes of DC programs have also been established [11, 13].

In this paper we introduce two new algorithms to find stationary points of DC programs, called Boosted Difference of Convex function Algorithms (BDCA), which accelerate DCA with a line search using an Armijo type rule. The first algorithm directly uses a backtracking technique, while the second uses a quadratic interpolation of the objective function together with backtracking. Our algorithms are based on both DCA and the proximal point algorithm approach of Fukushima–Mine. First, we compute the point generated by DCA. Then, we use this point to define the search direction. This search direction coincides with the one employed by Fukushima–Mine in [7]. The key difference between their method and ours is the starting point used for the line search: in our algorithms we use the point generated by DCA, instead of the previous iteration. This scheme works thanks to the fact that the defined search direction is not only a descent direction for the objective function at the previous iteration, as observed by Fukushima–Mine, but also a descent direction at the point generated by DCA. Unfortunately, as shown in Remark 1, this scheme cannot be extended in general to nonsmooth functions, as the defined search direction might be an ascent direction at the point generated by DCA.

Moreover, it is important to notice that the iterations of Fukushima–Mine and BDCA never coincide, as the largest step size taken in their algorithm is equal to one (which gives the DCA iteration), while BDCA only explores step sizes beyond the DCA point. In fact, for smooth functions, the iterations of Fukushima–Mine usually coincide with the ones generated by DCA, as the step size equal to one is normally accepted by their Armijo rule.

We should point out that DCA is a descent method that requires no line search, a feature usually claimed to be advantageous in the large-scale setting. Our purpose here is the opposite: we show that a line search can improve performance even for high-dimensional problems.

Further, we analyse the rate of convergence under the Łojasiewicz property [16] of the objective function. It should be mentioned that the Łojasiewicz property has recently played an important role in proving the convergence of optimisation algorithms for analytic cost functions, see e.g. [1, 3, 4, 13].

We have performed numerical experiments on functions arising in the study of biochemical reaction networks. We show that the problem of finding a steady state of these networks, which plays a crucial role in the modelling of biochemical reaction systems, can be reformulated as a minimisation problem involving DC functions. In fact, this is the main motivation and starting point of our work: when one applies DCA to find a steady state of these systems, the rate of convergence is usually quite slow. As these problems commonly involve hundreds of variables (even thousands in the most complex systems, such as Recon 2), the speed of convergence becomes crucial. In our numerical tests we have compared BDCA and DCA for finding a steady state in various biochemical network models of different sizes. On average, DCA needed five times more iterations than BDCA to achieve the same accuracy, and, what is more relevant, our implementation of BDCA was more than four times faster than DCA to achieve the same accuracy. Thus, we show both theoretically and numerically that BDCA is more advantageous than DCA. Fortunately, the objective function arising in these biochemical reaction networks is real analytic, a class of functions known to satisfy the Łojasiewicz property [16]. Therefore, the above-mentioned convergence analysis applies in this setting.

The rest of this paper is organised as follows. In Sect. 2, we recall some preliminary facts used throughout the paper and we present the main optimisation problem. Sect. 3 describes our main results, where the new algorithms (BDCA) and their convergence analysis for solving DC programs are established. A DC program arising in biochemical reaction network problems is introduced in Sect. 4. Numerical results comparing BDCA and DCA on various biochemical network models are reported in Sect. 5. Finally, conclusions are stated in the last section.

## 2 Preliminaries

Throughout this paper, the inner product of two vectors $$x,y\in \mathbb {R}^{m}$$ is denoted by $$\langle x,y\rangle$$, while $$\Vert \cdot \Vert$$ denotes the induced norm, defined by $$\Vert x\Vert =\sqrt{\langle x,x\rangle }$$. The nonnegative orthant in $$\mathbb {R}^{m}$$ is denoted by $$\mathbb {R}_{+}^{m}=[0,\infty )^{m}$$ and $$\mathbb {B}(x,r)$$ denotes the closed ball of center x and radius $$r>0$$. The gradient of a differentiable function $$f:\mathbb {R}^{m}\rightarrow \mathbb {R}^{n}$$ at some point $$x\in \mathbb {R}^{m}$$ is denoted by $$\nabla f(x)\in \mathbb {R}^{m\times n}$$.

Recall that a function $$f:\mathbb {R}^{m}\rightarrow \mathbb {R}$$ is said to be convex if
\begin{aligned} f\left( \lambda x+(1-\lambda )y\right) \le \lambda f(x) +(1-\lambda )f(y)\quad \text {for all }x,y\in \mathbb {R}^{m}\text { and }\lambda \in (0,1). \end{aligned}
Further, f is called strongly convex with modulus $$\sigma >0$$ if
\begin{aligned}&f\left( \lambda x+(1-\lambda )y\right) \le \lambda f(x)+(1-\lambda )f(y)\\&\quad -\frac{1}{2}\sigma \lambda (1-\lambda )\Vert x-y\Vert ^{2}\quad \text {for all }x,y\in \mathbb {R}^{m}\text { and }\lambda \in (0,1), \end{aligned}
or, equivalently, when $$f-\frac{\sigma }{2}\Vert \cdot \Vert ^{2}$$ is convex. The function f is said to be coercive if  $$f(x)\rightarrow +\infty$$ whenever $$\left\| x\right\| \rightarrow +\infty .$$
On the other hand, a function $$F:\mathbb {R}^{m}\rightarrow \mathbb {R}^{m}$$ is said to be monotone when
\begin{aligned} \left\langle F(x)-F(y),x-y\right\rangle \ge 0\quad \text {for all }x,y\in \mathbb {R}^{m}. \end{aligned}
Further, F is called strongly monotone with modulus $$\sigma >0$$ when
\begin{aligned} \left\langle F(x)-F(y),x-y\right\rangle \ge \sigma \Vert x-y\Vert ^{2}\quad \text {for all }x,y\in \mathbb {R}^{m}. \end{aligned}
The function F is called Lipschitz continuous if there is some constant $$L\ge 0$$ such that
\begin{aligned} \Vert F(x)-F(y)\Vert \le L\Vert x-y\Vert ,\quad \text {for all }x,y\in \mathbb {R}^{m}. \end{aligned}
F is called locally Lipschitz continuous if for every x in $$\mathbb {R}^{m}$$, there exists a neighbourhood U of x such that F restricted to U is Lipschitz continuous.

We have the following well-known result.

### Proposition 1

Let $$f:\mathbb {R}^{m}\rightarrow \mathbb {R}$$ be a differentiable function. Then f is (strongly) convex if and only if $$\nabla f$$ is (strongly) monotone.

To establish our convergence results, we will make use of the Łojasiewicz property, defined next.

### Definition 1

Let $$f:\mathbb {R}^{n}\rightarrow \mathbb {R}$$ be a differentiable function.

1. (i)
The function f is said to have the Łojasiewicz property if for any critical point $$\bar{x}$$, there exist constants $$M>0,\varepsilon >0$$ and $$\theta \in [0,1)$$ such that
\begin{aligned} |f(x)- f(\bar{x})|^{\theta }{{\le }}M\left\| \nabla f(x)\right\| ,\quad \text {for all }x\in \mathbb {B}(\bar{x},\varepsilon ), \end{aligned}
(3)
where we adopt the convention $$0^{0}=0$$. The constant $$\theta$$ is called the Łojasiewicz exponent of f at $$\bar{x}$$.

2. (ii)

The function f is said to be real analytic if for every $$x\in \mathbb {R}^{n}$$, f may be represented by a convergent power series in some neighbourhood of x.
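For instance (a standard illustration, not taken from the paper), the function $$f(x)=x^{2}$$ satisfies the Łojasiewicz inequality (3) at its unique critical point $$\bar{x}=0$$ with exponent $$\theta =\frac{1}{2}$$:
\begin{aligned} |f(x)-f(0)|^{1/2}=|x|=\tfrac{1}{2}\left\| \nabla f(x)\right\| ,\quad \text {for all }x\in \mathbb {R}, \end{aligned}
so (3) holds with $$M=\frac{1}{2}$$ and any $$\varepsilon >0$$. By contrast, the classical flat counterexample $$f(x)=e^{-1/x^{2}}$$ (with $$f(0)=0$$) fails the property at $$\bar{x}=0$$ for every $$\theta \in [0,1)$$, since $$|f(x)|^{\theta }/\Vert \nabla f(x)\Vert =\tfrac{1}{2}|x|^{3}e^{(1-\theta )/x^{2}}\rightarrow \infty$$ as $$x\rightarrow 0$$.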

### Proposition 2

[16] Every real analytic function $$f:\mathbb {R}^{n}\rightarrow \mathbb {R}$$ satisfies the Łojasiewicz property with exponent $$\theta \in \left[ 0,1\right)$$.

Problem (1) can be easily transformed into an equivalent problem involving strongly convex functions. Indeed, choose any $$\rho >0$$ and consider the functions $$g(x):=f_{1}(x)+\frac{\rho }{2}\Vert x\Vert ^{2}$$ and $$h(x):=f_{2}(x)+\frac{\rho }{2}\Vert x\Vert ^{2}$$. Then g and h are strongly convex functions with modulus $$\rho$$ and $$g(x)-h(x)=\phi (x)$$, for all $$x\in \mathbb {R}^{m}$$. In this way, we obtain the equivalent problem
\begin{aligned} \left( \mathscr {P}\right) \ {{\underset{x\in \mathbb {R}^{m}}{\text {minimise}}}}\;\phi (x)=g(x)-h(x). \end{aligned}
(4)
The key step to solve $$\left( \mathscr {P}\right)$$ with DCA is to approximate the concave part $$-h$$ of the objective function $$\phi$$ by its affine majorisation and then minimise the resulting convex function. The algorithm proceeds as follows.
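In code, one pass of this scheme can be sketched as follows. This is a minimal one-dimensional illustration with our own toy decomposition, not an implementation from the paper: we take $$g(x)=2x^{2}$$ and $$h(x)=\frac{1}{2}x^{2}+3x$$, so that the subproblem $$\nabla g(y)=\nabla h(x_{k})$$, i.e. $$4y=x_{k}+3$$, has a closed-form solution.

```python
# A minimal 1-D sketch of DCA, using the toy strongly convex decomposition
# g(x) = 2x^2 and h(x) = x^2/2 + 3x (our own illustrative choice, not data
# from the paper).  The subproblem grad g(y) = grad h(x_k) reads 4y = x_k + 3.

def dca(x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} := y_k, where y_k solves the convex subproblem (P_k)."""
    x = x0
    for k in range(max_iter):
        y = (x + 3.0) / 4.0          # unique solution of grad g(y) = grad h(x)
        if abs(y - x) <= tol:        # d_k = y_k - x_k ~ 0  =>  stationarity
            return y, k
        x = y
    return x, max_iter

x_star, iters = dca(10.0)
# phi(x) = g(x) - h(x) = 1.5 x^2 - 3x has its unique stationary point at x = 1.
```

For this quadratic toy model, DCA contracts the error by the fixed factor 1/4 per iteration.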
In [7] Fukushima and Mine adapted their original algorithm reported in [17] by adding a proximal term $$\frac{\rho }{2}\left\| x-x_{k}\right\| ^{2}$$ to the objective of the convex optimisation subproblem. As a result they obtain an optimisation subproblem that is identical to the one in Step 2 of DCA, when one transforms (1) into (4) by adding $$\frac{\rho }{2}\Vert x\Vert ^{2}$$ to each convex function. In contrast to DCA, Fukushima–Mine algorithm [7] also includes a line search along the direction $$d_{k}:=y_{k}-x_{k}$$ to find the smallest nonnegative integer $$l_{k}$$ such that the Armijo type rule
\begin{aligned} \phi \left( x_{k}+\beta ^{l_{k}}d_{k}\right) \le \phi (x_{k})-\alpha \beta ^{l_{k}}\left\| d_{k}\right\| ^{2} \end{aligned}
(5)
is satisfied, where $$\alpha >0$$ and $$0<\beta <1$$. Thus, when $$l_{k}=0$$ satisfies (5), i.e. when
\begin{aligned} \phi (y_{k})\le \phi (x_{k})-\alpha \left\| d_{k}\right\| ^{2}, \end{aligned}
one has $$x_{k+1}=y_{k}$$ and the iterations of both algorithms coincide. As we shall see in Proposition 3, this is guaranteed to happen if $$\alpha \le \rho$$.

## 3 Boosted DC Algorithms

Let us introduce our first algorithm to solve $$(\mathscr {P})$$, which we call a Boosted DC Algorithm with Backtracking. The algorithm is a combination of Algorithm 1 and the algorithm of Fukushima–Mine [7].

The next proposition shows that the solution of $$(\mathscr {P}_{k})$$, which coincides with the DCA subproblem in Algorithm 1, provides a decrease in the value of the objective function. For the sake of completeness, we include its short proof.

### Proposition 3

For all $$k\in \mathbb {N}$$, it holds that
\begin{aligned} \phi (y_{k})\le \phi (x_{k})-\rho \Vert d_{k}\Vert ^{2}. \end{aligned}
(6)

### Proof

Since $$y_{k}$$ is the unique solution of the strongly convex problem $$\left( \mathscr {P}_{k}\right)$$, we have
\begin{aligned} \nabla g(y_{k})=\nabla h(x_{k}), \end{aligned}
(7)
which, together with the strong convexity of g, implies
\begin{aligned} g(x_{k})-g(y_{k})\ge \left\langle \nabla h(x_{k}),x_{k}-y_{k}\right\rangle +\frac{\rho }{2}\Vert x_{k}-y_{k}\Vert ^{2}. \end{aligned}
On the other hand, the strong convexity of h implies
\begin{aligned} h(y_{k})-h(x_{k})\ge \left\langle \nabla h(x_{k}),y_{k}-x_{k}\right\rangle +\frac{\rho }{2}\Vert y_{k}-x_{k}\Vert ^{2}. \end{aligned}
Adding the two previous inequalities, we have
\begin{aligned} g(x_{k})-g(y_{k})+h(y_{k})-h(x_{k})\ge \rho \Vert x_{k}-y_{k}\Vert ^{2}, \end{aligned}
which implies (6). $$\square$$

If $$\lambda _{k}=0$$, the iterations of BDCA-Backtracking coincide with those of DCA, since the latter sets $$x_{k+1}:=y_{k}$$. Next we show that $$d_{k}=y_{k}-x_{k}$$ is a descent direction for $$\phi$$ at $$y_{k}$$. Thus, one can achieve a larger decrease in the value of $$\phi$$ by moving along this direction. This simple fact, which permits an improvement in the performance of DCA, constitutes the key idea of our algorithms.

### Proposition 4

For all $$k\in \mathbb {N}$$, we have
\begin{aligned} \left\langle \nabla \phi (y_{k}),d_{k}\right\rangle \le -\rho ||d_{k}||^{2}; \end{aligned}
(8)
that is, $$d_{k}$$ is a descent direction for $$\phi$$ at $$y_{k}$$.

### Proof

The function h is strongly convex with constant $$\rho$$. This implies that $$\nabla h$$ is strongly monotone with constant $$\rho$$; whence,
\begin{aligned} \langle \nabla h(x_{k})-\nabla h(y_{k}),x_{k}-y_{k}\rangle \ge \rho \Vert x_{k}-y_{k}\Vert ^{2}. \end{aligned}
Further, since $$y_{k}$$ is the unique solution of the strongly convex problem $$\left( \mathscr {P}_{k}\right)$$, we have
\begin{aligned} \nabla h(x_{k})=\nabla g(y_{k}), \end{aligned}
which implies
\begin{aligned} \left\langle \nabla \phi (y_{k}),d_{k}\right\rangle =\left\langle \nabla g(y_{k})-\nabla h(y_{k}),d_{k}\right\rangle =\left\langle \nabla h(x_{k})-\nabla h(y_{k}),d_{k}\right\rangle \le -\rho \Vert d_{k}\Vert ^{2}, \end{aligned}
which completes the proof. $$\square$$

### Remark 1

In general, Proposition 4 does not remain valid when g is not differentiable. In fact, the direction $$d_k$$ might be an ascent direction, in which case Step 4 in Algorithm 2 could become an infinite loop. For instance, consider $$g(x)=|x|+\frac{1}{2}x^2+\frac{1}{2}x$$ and $$h(x)=\frac{1}{2}x^2$$ for $$x\in \mathbb {R}$$. If $$x_0=\frac{1}{2}$$, one has
\begin{aligned} \left( \mathscr {P}_{0}\right) \underset{x\in \mathbb {R}}{\text {minimise}}\; |x|+\frac{1}{2}x^2+\frac{1}{2}x-\frac{1}{2}x, \end{aligned}
whose unique solution is $$y_0=0$$. Then, the one-sided directional derivative of $$\phi$$ at $$y_0$$ in the direction $$d_0=y_0-x_0=-\frac{1}{2}$$ is given by
\begin{aligned} \phi '(y_0;d_0)=\lim _{t\downarrow 0} \frac{\phi \left( 0+t(-1/2)\right) -\phi (0)}{t}=\frac{1}{4}. \end{aligned}
Thus, $$d_0$$ is an ascent direction for $$\phi$$ at $$y_0$$ (actually, $$y_0$$ is the global minimum of $$\phi$$).
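This failure can be checked numerically with a small illustrative script (our own, not from the paper), which approximates the one-sided directional derivative by a forward difference:

```python
# Numerical check of Remark 1: for phi(x) = g(x) - h(x) = |x| + x/2 with
# g(x) = |x| + x^2/2 + x/2 and h(x) = x^2/2, the DCA point from x0 = 1/2 is
# y0 = 0, and the direction d0 = y0 - x0 = -1/2 is an ASCENT direction at y0.

phi = lambda x: abs(x) + 0.5 * x

y0, d0 = 0.0, -0.5
t = 1e-8                                     # small forward-difference step
dir_deriv = (phi(y0 + t * d0) - phi(y0)) / t  # approximates phi'(y0; d0)
# dir_deriv is approximately 1/4 > 0, confirming the ascent direction.
```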

As a corollary, we deduce that the backtracking Step 4 of Algorithm 2 terminates finitely when $$\rho >\alpha$$.

### Corollary 1

Suppose that $$\rho >\alpha$$. Then, for all $$k\in \mathbb {N},$$ there is some $$\delta _{k}>0$$ such that
\begin{aligned} \phi \left( y_{k}+\lambda d_{k}\right) \le \phi (y_{k})-\alpha \lambda \Vert d_{k}\Vert ^{2},\quad \text {for all }\lambda \in [0,\delta _{k}]. \end{aligned}
(9)

### Proof

If $$d_{k}=0$$ there is nothing to prove. Otherwise, by the mean value theorem, there is some $${t_\lambda }\in (0,1)$$ such that
\begin{aligned} \phi \left( y_{k}+\lambda d_{k}\right) -\phi (y_{k})&=\left\langle \nabla \phi \left( y_{k}+{t_\lambda }\lambda d_{k}\right) ,\lambda d_{k}\right\rangle \\&=\lambda \left\langle \nabla \phi (y_{k}),d_{k}\right\rangle +\lambda \left\langle \nabla \phi (y_{k}+{t}_\lambda \lambda d_{k})-\nabla \phi (y_{k}),d_{k}\right\rangle \\&\le -\rho \lambda \Vert d_{k}\Vert ^{2}+\lambda \Vert \nabla \phi \left( y_{k}+{t_\lambda }\lambda d_{k}\right) -\nabla \phi (y_{k})\Vert \Vert d_{k}\Vert . \end{aligned}
As $$\nabla \phi$$ is continuous at $$y_{k}$$, there is some $$\delta >0$$ such that
\begin{aligned} \Vert \nabla \phi (z)-\nabla \phi (y_{k})\Vert \le (\rho -\alpha )\Vert d_{k}\Vert \text { whenever }\Vert z-y_{k}\Vert \le \delta . \end{aligned}
Since $$\Vert y_{k}+{t_\lambda }\lambda d_{k}-y_{k}\Vert ={t_\lambda }\lambda \Vert d_{k}\Vert \le \lambda \Vert d_{k}\Vert$$, then for all $$\lambda \in \left( 0,\frac{\delta }{\Vert d_{k}\Vert }\right)$$, we deduce
\begin{aligned} \phi (y_{k}+\lambda d_{k})-\phi (y_{k})\le -\rho \lambda \Vert d_{k}\Vert ^{2}+(\rho -\alpha )\lambda \Vert d_{k}\Vert ^{2}=-\alpha \lambda \Vert d_{k}\Vert ^{2}, \end{aligned}
and the proof is complete. $$\square$$

### Remark 2

Notice that $$y_{k}+\lambda d_{k}=x_{k}+(1+\lambda )d_{k}$$. Therefore, Algorithm 2 uses the same direction as the Fukushima–Mine algorithm [7], where $$x_{k+1}=x_{k}+\beta ^{l}d_{k}=\beta ^{l}y_{k}+\left( 1-\beta ^{l}\right) x_{k}$$ for some $$0<\beta <1$$ and some nonnegative integer l. The iterations would be the same if $$\beta ^{l}=\lambda +1$$. Nevertheless, as $$0<\beta <1$$, the step size $$\lambda =\beta ^{l}-1$$ chosen in the Fukushima–Mine algorithm [7] is always less than or equal to zero, while in Algorithm 2, only step sizes $$\lambda \in \,]0,\bar{\lambda }]$$ are explored. Moreover, observe that the Armijo type rule (5), as used in [7], searches for an $$l_{k}$$ such that $$\phi (x_{k}+\beta ^{l_{k}}d_{k})<\phi (x_{k})$$, whereas Algorithm 2 searches for a $$\lambda _{k}$$ such that $$\phi (y_{k}+\lambda _{k}d_{k})<\phi (y_{k})$$. We know from (6) and (9) that
\begin{aligned} \phi \left( y_{k}+\lambda d_{k}\right) \le \phi (y_{k})-\alpha \lambda \Vert d_{k}\Vert ^{2}\le \phi (x_{k})-(\rho +\alpha \lambda )\Vert d_{k}\Vert ^{2}; \end{aligned}
thus, Algorithm 2 results in a larger decrease in the value of $$\phi$$ at each iteration than DCA, which sets $$\lambda :=0$$ and $$x_{k+1}:=y_{k}$$. Therefore, a faster convergence of Algorithm 2 compared with DCA is expected, see Figs. 1 and 3.

### Remark 3

In a personal communication, Christian Kanzow pointed out that the assumption $$\rho >\alpha$$ can be removed if one replaces the step size rule (9) by $$\phi \left( y_{k}+\lambda d_{k}\right) \le \phi (y_{k})-\alpha \lambda ^2\Vert d_{k}\Vert ^{2}.$$ It can be easily checked that the convergence theory in the rest of the paper remains valid with some small adjustments.

The following convergence results were inspired by Attouch and Bolte [3], which in turn were adapted from the original ideas of Łojasiewicz; see also [5, Section 3.2].

### Proposition 5

For any $$x_{0}\in \mathbb {R}^{m}$$, either Algorithm 2 returns a stationary point of $$\left( \mathscr {P}\right)$$ or it generates an infinite sequence such that the following holds.
1. (i)

$$\phi (x_{k})$$ is monotonically decreasing and convergent to some $$\phi ^{*}$$.

2. (ii)

Any limit point of $$\{x_{k}\}$$ is a stationary point of $$\left( \mathscr {P}\right)$$. If, in addition, $$\phi$$ is coercive, then there exists a subsequence of $$\{x_{k}\}$$ which converges to a stationary point of $$\left( \mathscr {P}\right)$$.

3. (iii)

$$\sum _{k=0}^{\infty }\Vert d_{k}\Vert ^{2}<\infty$$ and $$\sum _{k=0}^{\infty }\Vert x_{k+1}-x_{k}\Vert ^{2}<\infty$$.

### Proof

Because of (7), if Algorithm 2 stops at Step 3 and returns $$x_{k}$$, then $$x_{k}$$ must be a stationary point of $$\left( \mathscr {P}\right)$$. Otherwise, by Proposition 3 and Step 4 of Algorithm 2, we have
\begin{aligned} \phi (x_{k+1})\le \phi (y_{k})-\alpha \lambda _{k}\Vert d_{k}\Vert ^{2}\le \phi (x_{k})-(\alpha \lambda _{k}+\rho )\Vert d_{k}\Vert ^{2}. \end{aligned}
(10)
Hence, as the sequence $$\{\phi (x_{k})\}$$ is monotonically decreasing and bounded from below by (2), it converges to some $$\phi ^{*}$$, which proves (i). Consequently, we have
\begin{aligned} \phi (x_{k+1})-\phi (x_{k})\rightarrow 0. \end{aligned}
Thus, by (10), one has $$\Vert d_{k}\Vert ^{2}=\Vert y_{k}-x_{k}\Vert ^{2}\rightarrow 0.$$
Let $$\bar{x}$$ be any limit point of $$\{x_{k}\}$$, and let $$\{x_{k_{i}}\}$$ be a subsequence of $$\{x_{k}\}$$ converging to $$\bar{x}$$. Since $$\Vert y_{k_{i}}-x_{k_{i}}\Vert \rightarrow 0$$, one has
\begin{aligned} y_{k_{i}}\rightarrow \bar{x}. \end{aligned}
Taking the limit as $$i\rightarrow \infty$$ in (7), as $$\nabla h$$ and $$\nabla g$$ are continuous, we have $$\nabla h(\bar{x})=\nabla g(\bar{x})$$.

If $$\phi$$ is coercive, since the sequence $$\{\phi (x_{k})\}$$ is convergent, the sequence $$\{x_{k}\}$$ is bounded. This implies that there exists a subsequence of $$\{x_{k}\}$$ converging to $$\bar{x}$$, a stationary point of $$\left( \mathscr {P}\right)$$, which proves (ii).

To prove (iii), observe that (10) implies that
\begin{aligned} (\alpha \lambda _{k}+\rho )\Vert d_{k}\Vert ^{2}\le \phi (x_{k})-\phi (x_{k+1}). \end{aligned}
(11)
Summing this inequality from 0 to N, we obtain
\begin{aligned} \sum _{k=0}^{N}(\alpha \lambda _{k}+\rho )\Vert d_{k}\Vert ^{2}\le \phi (x_{0})-\phi (x_{N+1})\le \phi (x_{0})-\inf _{x\in \mathbb {R}^{m}}\phi (x), \end{aligned}
(12)
whence, taking the limit when $$N\rightarrow \infty ,$$
\begin{aligned} \sum _{k=0}^{\infty }\rho \Vert d_{k}\Vert ^{2}\le \sum _{k=0}^{\infty }(\alpha \lambda _{k}+\rho )\Vert d_{k}\Vert ^{2}\le \phi (x_{0})-\inf _{x\in \mathbb {R}^{m}}\phi (x)<\infty , \end{aligned}
so we have $$\sum _{k=0}^{\infty }\Vert d_{k}\Vert ^{2}<\infty$$. Since
\begin{aligned} x_{k+1}-x_{k}=y_{k}-x_{k}+\lambda _{k}d_{k}=(1+\lambda _{k})d_{k}, \end{aligned}
we obtain
\begin{aligned} \sum _{k=0}^{\infty }\Vert x_{k+1}-x_{k}\Vert ^{2}=\sum _{k=0}^{\infty }(1+\lambda _{k})^{2}\Vert d_{k}\Vert ^{2}\le (1+\bar{\lambda })^{2}\sum _{k=0}^{\infty }\Vert d_{k}\Vert ^{2}<\infty , \end{aligned}
and the proof is complete. $$\square$$

We will employ the following useful lemma to obtain bounds on the rate of convergence of the sequences generated by Algorithm 2. This result appears within the proof of [3, Theorem 2] for specific values of $$\alpha$$ and $$\beta$$. See also [13, Theorem 3.3], or very recently, [15, Theorem 3].

### Lemma 1

Let $$\left\{ s_{k}\right\}$$ be a sequence in $$\mathbb {R}_{+}$$ and let $$\alpha ,\beta$$ be some positive constants. Suppose that $$s_{k}\rightarrow 0$$ and that the sequence satisfies
\begin{aligned} s_{k}^{\alpha }\le \beta (s_{k}-s_{k+1}),\quad \text {for all }k\text { sufficiently large.} \end{aligned}
(13)
Then
1. (i)

if $$\alpha =0$$, the sequence $$\left\{ s_{k}\right\}$$ converges to 0 in a finite number of steps;

2. (ii)

if $$\alpha \in \left( 0,1\right]$$, the sequence $$\left\{ s_{k}\right\}$$ converges linearly to 0 with rate $$1-\frac{1}{\beta }$$;

3. (iii)
if $$\alpha >1$$, there exists $$\eta >0$$ such that
\begin{aligned} s_{k}\le \eta k^{-\frac{1}{\alpha -1}},\quad \text {for all }k\text { sufficiently large.} \end{aligned}

### Proof

If $$\alpha =0$$, then (13) implies
\begin{aligned} 0\le s_{k+1}\le s_{k}-\frac{1}{\beta }, \end{aligned}
and (i) follows.
Assume that $$\alpha \in (0,1]$$. Since $$s_{k}\rightarrow 0$$, we have that $$s_{k}<1$$ for all k large enough. Thus, by (13), we have
\begin{aligned} s_{k}\le s_{k}^{\alpha }\le \beta (s_{k}-s_{k+1}). \end{aligned}
Therefore, $$s_{k+1}\le \left( 1-\frac{1}{\beta }\right) s_{k}$$; i.e., $$\left\{ s_{k}\right\}$$ converges linearly to 0 with rate $$1-\frac{1}{\beta }$$.
Suppose now that $$\alpha >1$$. If $$s_{k}=0$$ for some k, then (13) implies $$s_{k+1}=0$$. Then the sequence converges to zero in a finite number of steps, and thus (iii) trivially holds. Hence, we will assume that $$s_{k}>0$$ and that (13) holds for all $$k\ge N$$, for some positive integer N. Consider the decreasing function $$\varphi :(0,+\infty )\rightarrow \mathbb {R}$$ defined by $$\varphi (s):=s^{-\alpha }$$. By (13), for $$k\ge N$$, we have
\begin{aligned} \frac{1}{\beta }\le \left( s_{k}-s_{k+1}\right) \varphi (s_{k})\le \int _{s_{k+1}}^{s_{k}}\varphi (t)dt=\frac{s_{k+1}^{1-\alpha }-s_{k}^{1-\alpha }}{\alpha -1}. \end{aligned}
As $$\alpha -1>0$$, this implies that
\begin{aligned} s_{k+1}^{1-\alpha }-s_{k}^{1-\alpha }\ge \frac{\alpha -1}{\beta }, \end{aligned}
for all $$k\ge N$$. Thus, summing for k from N to $$j-1\ge N$$, we have
\begin{aligned} s_{j}^{1-\alpha }-s_{N}^{1-\alpha }\ge \frac{\alpha -1}{\beta }(j-N), \end{aligned}
which gives, for all $$j\ge N+1$$,
\begin{aligned} s_{j}\le \left( s_{N}^{1-\alpha }+\frac{\alpha -1}{\beta }(j-N)\right) ^{\frac{1}{1-\alpha }}. \end{aligned}
Therefore, there is some $$\eta >0$$ such that
\begin{aligned} s_{j}\le \eta j^{-\frac{1}{\alpha -1}},\quad \text {for all }j\text { sufficiently large,} \end{aligned}
which completes the proof. $$\square$$
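The sublinear regime (iii) of Lemma 1 can be sanity-checked numerically with a toy sequence of our own (not from the paper): with $$\alpha =2$$ and $$\beta =1$$, the recurrence $$s_{k+1}=s_{k}-s_{k}^{2}$$ satisfies (13) with equality, and the lemma predicts $$s_{k}=O(k^{-1/(\alpha -1)})=O(1/k)$$.

```python
# Sanity check of Lemma 1(iii): iterate s_{k+1} = s_k - s_k^2, which attains
# equality in (13) for alpha = 2, beta = 1.  The lemma then gives the rate
# s_k <= eta * k^{-1}, i.e. O(1/k) decay.

s = 0.5                # starting value in (0, 1)
n = 1000               # number of iterations
for k in range(n):
    s = s - s * s      # s_{k+1} = s_k - s_k^2

# After n steps, s is close to 1/n, matching the predicted O(1/k) rate.
```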

### Theorem 1

Suppose that $$\nabla g$$ is locally Lipschitz continuous and $$\phi$$ satisfies the Łojasiewicz property with exponent $$\theta \in \left[ 0,1\right)$$. For any $$x_{0}\in \mathbb {R}^{m}$$, consider the sequence $$\left\{ x_{k}\right\}$$ generated by Algorithm 2. If the sequence $$\{x_{k}\}$$ has a cluster point $$x^{*}$$, then the whole sequence converges to $$x^{*},$$ which is a stationary point of $$\left( \mathscr {P}\right)$$. Moreover, denoting $$\phi ^{*}:=\phi (x^{*})$$, the following estimations hold:
1. (i)

if $$\theta =0$$ then the sequences $$\{x_{k}\}$$ and $$\{\phi (x_{k})\}$$ converge in a finite number of steps to $$x^{*}$$ and $$\phi ^{*}$$, respectively;

2. (ii)

if $$\theta \in \left( 0,\frac{1}{2}\right]$$ then the sequences $$\{x_{k}\}$$ and $$\{\phi (x_{k})\}$$ converge linearly to $$x^{*}$$ and $$\phi ^{*}$$, respectively;

3. (iii)
if $$\theta \in \left( \frac{1}{2},1\right)$$ then there exist some positive constants $$\eta _{1}$$ and $$\eta _{2}$$ such that
\begin{aligned}&\Vert x_{k}-x^{*}\Vert \le \eta _{1}k^{-\frac{1-\theta }{2\theta -1}},\\&\quad \phi (x_{k})-\phi ^{*}\le \eta _{2}k^{-\frac{1}{2\theta -1}}, \end{aligned}
for all large k.

### Proof

By Proposition 5, we have $$\lim _{k\rightarrow \infty }\phi (x_{k})=\phi ^{*}$$. If $$x^{*}$$ is a cluster point of $$\left\{ x_{k}\right\}$$, then there exists a subsequence $$\{x_{k_{i}}\}$$ of $$\{x_{k}\}$$ that converges to $$x^{*}$$. By continuity of $$\phi$$, we have that
\begin{aligned} \phi (x^{*})=\lim _{i\rightarrow \infty }\phi (x_{k_{i}})=\lim _{k\rightarrow \infty }\phi (x_{k})=\phi ^{*}. \end{aligned}
Hence, $$\phi$$ is finite and has the same value $$\phi ^{*}$$ at every cluster point of $$\{x_{k}\}$$. If $$\phi (x_{k})=\phi ^{*}$$ for some $$k>1$$, then $$\phi (x_{k})=\phi (x_{k+p})$$ for any $$p\ge 0$$, since the sequence $$\phi (x_{k})$$ is decreasing. Therefore, $$x_{k}=x_{k+p}$$ for all $$p\ge 0$$ and Algorithm 2 terminates after a finite number of steps. From now on, we assume that $$\phi (x_{k})>\phi ^{*}$$ for all k.
As $$\phi$$ satisfies the Łojasiewicz property, there exist $$M>0,\varepsilon _{1}>0$$ and $$\theta \in [0,1)$$ such that
\begin{aligned} |\phi (x) -\phi (x^{*})|^{\theta } {{\le }}M\left\| \nabla \phi (x)\right\| ,\quad \forall x\in \mathbb {B}(x^{*},\varepsilon _{1}). \end{aligned}
(14)
Further, as $$\nabla g$$ is locally Lipschitz around $$x^{*}$$, there are some constants $$L\ge 0$$ and $$\varepsilon _{2}>0$$ such that
\begin{aligned} \Vert \nabla g(x)-\nabla g(y)\Vert \le L\Vert x-y\Vert ,\quad \forall x,y\in \mathbb {B}(x^{*},\varepsilon _{2}). \end{aligned}
(15)
Let $$\varepsilon :=\frac{1}{2}\min \left\{ \varepsilon _{1},\varepsilon _{2}\right\} >0$$. Since $$\lim _{i\rightarrow \infty }x_{k_{i}}=x^{*}$$ and $$\lim _{i\rightarrow \infty }\phi (x_{k_{i}})=\phi ^{*}$$, we can find an index N large enough such that
\begin{aligned} \Vert x_{N}-x^{*}\Vert +\frac{ML\left( 1+\bar{\lambda }\right) }{(1-\theta )\rho }\left( \phi (x_{N})-\phi ^{*}\right) ^{1-\theta }<\varepsilon . \end{aligned}
(16)
By Proposition 5(iii), we know that $$d_{k}=y_{k}-x_{k}\rightarrow 0$$. Then, taking a larger N if needed, we can assure that
\begin{aligned} \Vert y_{k}-x_{k}\Vert \le \varepsilon ,\quad \forall k\ge N. \end{aligned}
We now prove that, for all $$k\ge N$$, whenever $$x_{k}\in \mathbb {B}(x^{*},\varepsilon )$$, it holds that
\begin{aligned} \Vert x_{k+1}-x_{k}\Vert&\le \frac{ML\left( 1+\lambda _{k}\right) }{(1-\theta )(\alpha \lambda _{k}+\rho )}\left[ \left( \phi (x_{k})-\phi ^{*}\right) ^{1-\theta }-\left( \phi (x_{k+1})-\phi ^{*}\right) ^{1-\theta }\right] . \end{aligned}
(17)
Indeed, consider the concave function $$\gamma :(0,+\infty )\rightarrow (0,+\infty )$$ defined as $$\gamma (t):=t^{1-\theta }$$. Then, we have
\begin{aligned} \gamma (t_{1})-\gamma (t_{2})\ge \gamma '(t_{1})(t_{1}-t_{2}),\quad \forall t_{1},t_{2}>0. \end{aligned}
Substituting in this inequality $$t_{1}$$ by $$\left( \phi (x_{k})-\phi ^{*}\right)$$ and $$t_{2}$$ by $$\left( \phi (x_{k+1})-\phi ^{*}\right)$$ and using (14) and then (11), one has
\begin{aligned} \left( \phi (x_{k})-\phi ^{*}\right) ^{1-\theta }-\left( \phi (x_{k+1})-\phi ^{*}\right) ^{1-\theta }&\ge \frac{1-\theta }{\left( \phi (x_{k})-\phi ^{*}\right) ^{\theta }}\left( \phi (x_{k})-\phi (x_{k+1})\right) \nonumber \\&\ge \frac{1-\theta }{M\left\| \nabla \phi (x_{k})\right\| }\left( \alpha \lambda _{k}+\rho \right) \Vert y_{k}-x_{k}\Vert ^{2}\nonumber \\&=\frac{\left( 1-\theta \right) \left( \alpha \lambda _{k}+\rho \right) }{M\left( 1+\lambda _{k}\right) ^{2}\left\| \nabla \phi (x_{k})\right\| }\Vert x_{k+1}-x_{k}\Vert ^{2}. \end{aligned}
(18)
On the other hand, since $$\nabla g(y_{k})=\nabla h(x_{k})$$ and
\begin{aligned} \Vert y_{k}-x^{*}\Vert \le \Vert y_{k}-x_{k}\Vert +\Vert x_{k}-x^{*}\Vert \le 2\varepsilon \le \varepsilon _{2}, \end{aligned}
using (15), we obtain
\begin{aligned} \left\| \nabla \phi (x_{k})\right\|&=\left\| \nabla g(x_{k})-\nabla h(x_{k})\right\| =\left\| \nabla g(x_{k})-\nabla g(y_{k})\right\| \nonumber \\&\le L\left\| x_{k}-y_{k}\right\| =\frac{L}{(1+\lambda _{k})}\left\| x_{k+1}-x_{k}\right\| . \end{aligned}
(19)
Combining (18) and (19), we obtain (17).
From (17), as $$\lambda _{k}\in (0,\bar{\lambda }]$$, we deduce
\begin{aligned} \Vert x_{k+1}-x_{k}\Vert \le \frac{ML\left( 1+\bar{\lambda }\right) }{(1-\theta )\rho }\left[ \left( \phi (x_{k})-\phi ^{*}\right) ^{1-\theta }-\left( \phi (x_{k+1})-\phi ^{*}\right) ^{1-\theta }\right] , \end{aligned}
(20)
for all $$k\ge N$$ such that $$x_{k}\in \mathbb {B}(x^{*},\varepsilon ).$$
We prove by induction that $$x_{k}\in \mathbb {B}(x^{*},\varepsilon )$$ for all $$k\ge N$$. Indeed, from (16) the claim holds for $$k=N$$. We suppose that it also holds for $$k=N,N+1,\ldots ,N+p-1$$, with $$p\ge 1$$. Then (20) is valid for $$k=N,N+1,\ldots ,N+p-1$$. Therefore
\begin{aligned} \left\| x_{N+p}-x^{*}\right\|&\le \left\| x_{N}-x^{*}\right\| +\sum _{i=1}^{p}\left\| x_{N+i}-x_{N+i-1}\right\| \\&\le \left\| x_{N}-x^{*}\right\| \\&\quad +\frac{ML\left( 1+\bar{\lambda }\right) }{(1-\theta )\rho }\sum _{i=1}^{p}\left[ \left( \phi (x_{N+i-1})-\phi ^{*}\right) ^{1-\theta }-\left( \phi (x_{N+i})-\phi ^{*}\right) ^{1-\theta }\right] \\&\le \left\| x_{N}-x^{*}\right\| +\frac{ML\left( 1+\bar{\lambda }\right) }{(1-\theta )\rho }\left( \phi (x_{N})-\phi ^{*}\right) ^{1-\theta }<\varepsilon , \end{aligned}
where the last inequality follows from (16).
Adding (20) from $$k=N$$ to $$k=P$$, one has
\begin{aligned} \sum _{k=N}^{P}\Vert x_{k+1}-x_{k}\Vert \le \frac{ML\left( 1+\bar{\lambda }\right) }{(1-\theta )\rho }\left( \phi (x_{N})-\phi ^{*}\right) ^{1-\theta }. \end{aligned}
(21)
Taking the limit as $$P\rightarrow \infty$$, we can conclude that
\begin{aligned} \sum _{k=1}^{\infty }\Vert x_{k+1}-x_{k}\Vert <\infty . \end{aligned}
(22)
This means that $$\{x_{k}\}$$ is a Cauchy sequence. Therefore, since $$x^{*}$$ is a cluster point of $$\{x_{k}\}$$, the whole sequence $$\{x_{k}\}$$ converges to $$x^{*}$$. By Proposition 5, $$x^{*}$$ must be a stationary point of $$\left( \mathscr {P}\right)$$.
For $$k\ge N$$, it follows from (14), (15) and (11) that
\begin{aligned} (\phi (x_{k})-\phi ^{*})^{2\theta }&\le M^{2}\left\| \nabla \phi (x_{k})\right\| ^{2}\nonumber \\&\le M^{2}\left\| \nabla g(x_{k})-\nabla h(x_{k})\right\| ^{2}=M^{2}\left\| \nabla g(x_{k})-\nabla g(y_{k})\right\| ^{2}\nonumber \\&\le M^{2}L^{2}\left\| x_{k}-y_{k}\right\| ^{2}\le \frac{M^{2}L^{2}}{\alpha \lambda _{k}+\rho }\left[ \phi (x_{k})-\phi (x_{k+1})\right] \nonumber \\&\le \delta \left[ \left( \phi (x_{k})-\phi ^{*}\right) -\left( \phi (x_{k+1})-\phi ^{*}\right) \right] , \end{aligned}
(23)
where $$\delta :=\frac{M^{2}L^{2}}{\rho }>0$$. By applying Lemma 1 with $$s_{k}:=\phi (x_{k})-\phi ^{*}$$, $$\alpha :=2\theta$$ and $$\beta :=\delta$$, statements (i)–(iii) regarding the sequence $$\left\{ \phi (x_{k})\right\}$$ easily follow from (23).
We know that $$s_{i}:=\sum _{k=i}^{\infty }\Vert x_{k+1}-x_{k}\Vert$$ is finite by (22). Notice that $$\Vert x_{i}-x^{*}\Vert \le s_{i}$$ by the triangle inequality. Therefore, the rate of convergence of $$x_{i}$$ to $$x^{*}$$ can be deduced from the convergence rate of $$s_{i}$$ to 0. Adding (20) from $$k=i$$ to $$k=P$$, with $$N\le i\le P$$, we have
\begin{aligned} s_{i}=\lim _{P\rightarrow \infty }\sum _{k=i}^{P}\Vert x_{k+1}-x_{k}\Vert \le K_{1}\left( \phi (x_{i})-\phi ^{*}\right) ^{1-\theta }, \end{aligned}
where $$K_{1}:=\frac{ML\left( 1+\bar{\lambda }\right) }{\left( 1-\theta \right) \rho }>0$$. Then by (14) and (15), we get
\begin{aligned} s_{i}^{\frac{\theta }{1-\theta }}&\le MK_{1}^{\frac{\theta }{1-\theta }}\Vert \nabla \phi (x_{i})\Vert \le MLK_{1}^{\frac{\theta }{1-\theta }}\Vert x_{i}-y_{i}\Vert \\&\le \frac{MLK_{1}^{\frac{\theta }{1-\theta }}}{1+\lambda _{i}}\Vert x_{i+1}-x_{i}\Vert \le MLK_{1}^{\frac{\theta }{1-\theta }}\Vert x_{i+1}-x_{i}\Vert \\&=MLK_{1}^{\frac{\theta }{1-\theta }}\left( s_{i}-s_{i+1}\right) \end{aligned}
Hence, taking $$K_{2}:=MLK_{1}^{\frac{\theta }{1-\theta }}>0$$, for all $$i\ge N$$ we have
\begin{aligned} s_{i}^{\frac{\theta }{1-\theta }}\le K_{2}\left( s_{i}-s_{i+1}\right) . \end{aligned}
By applying Lemma 1 with $$\alpha :=\frac{\theta }{1-\theta }$$ and $$\beta :=K_{2}$$, we see that the statements in (i)–(iii) regarding the sequence $$\left\{ x_{k}\right\}$$ hold. $$\square$$

### Example 1

Consider the function $$\phi (x)=\frac{1}{4}x^{4}-\frac{1}{2}x^{2}$$. The iteration given by DCA (Algorithm 1) satisfies
\begin{aligned} x_{k+1}^{3}-x_{k}=0; \end{aligned}
that is, $$x_{k+1}=\sqrt[3]{x_{k}}$$. On the other hand, the iteration defined by Algorithm 2 is
\begin{aligned} \widetilde{x_{k+1}}=(1+\lambda _{k})\sqrt[3]{\widetilde{x_{k}}}-\lambda _{k}\widetilde{x_{k}}. \end{aligned}
If $$x_{0}=\widetilde{x_{0}}=\frac{27}{125}$$, we have $$x_{1}=\frac{3}{5}$$, while $$\widetilde{x_{1}}=\frac{3}{5}(1+\lambda _{0})-\frac{27}{125}\lambda _{0}$$. For any $$\lambda _{0}\in \left( 0,\frac{25\sqrt{41}-75}{48}\right]$$, we have $$\phi (\widetilde{x_{1}})<\phi (x_{1})$$. The optimal step size is attained at $$\lambda _{\mathrm{opt}}=\frac{25}{24}$$ with $$x_{1}=1$$, which is the global minimiser of $$\phi$$.
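The one-step comparison above can be checked numerically; the following Python sketch (illustrative only, not the paper's MATLAB code) reproduces $$x_{1}$$ and $$\widetilde{x_{1}}$$ for the optimal step size $$\lambda _{0}=\frac{25}{24}$$.

```python
# Illustrative check of Example 1: one step of DCA versus one step of the
# boosted update of Algorithm 2 for phi(x) = x^4/4 - x^2/2.

def phi(x):
    return 0.25 * x**4 - 0.5 * x**2

def dca_step(x):
    # DCA iteration: x_{k+1}^3 - x_k = 0, i.e. the real cube root of x_k
    return x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))

def bdca_step(x, lam):
    # One Algorithm 2 update for a *given* step size (step-size search not shown):
    # x_{k+1} = (1 + lambda) * cbrt(x_k) - lambda * x_k
    y = dca_step(x)
    return (1.0 + lam) * y - lam * x

x0 = 27.0 / 125.0
x1 = dca_step(x0)                      # = 3/5
x1_tilde = bdca_step(x0, 25.0 / 24.0)  # optimal step lands on the minimiser 1
print(x1, x1_tilde, phi(x1), phi(x1_tilde))
```

With $$\lambda _{0}=\frac{25}{24}$$ the boosted step lands exactly on the global minimiser, and $$\phi (\widetilde{x_{1}})=-\frac{1}{4}<\phi (x_{1})$$, matching the computation above.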
Observe in Fig. 1 that the function
\begin{aligned} \phi _{k}(\lambda ):=\phi \left( y_{k}+\lambda d_{k}\right) \end{aligned}
behaves like a quadratic function near 0. A quadratic interpolation of this function should therefore provide a good candidate for a step size close to the optimal one. Whenever $$\nabla \phi$$ is not too expensive to compute, it makes sense to construct a quadratic approximation of $$\phi$$ by interpolation using three pieces of information: $$\phi _{k}(0)=\phi (y_{k})$$, $$\phi _{k}'(0)=\nabla \phi (y_{k})^{T}d_{k}$$ and $$\phi _{k}(\bar{\lambda })$$. This gives us the quadratic function
\begin{aligned} \varphi _{k}(\lambda ):=\left( \frac{\phi _{k}(\bar{\lambda })-\phi _{k}(0)-\bar{\lambda }\phi _{k}'(0)}{\bar{\lambda }^{2}}\right) \lambda ^{2}+\phi _{k}'(0)\lambda +\phi _{k}(0), \end{aligned}
(24)
see e.g. [20, Section 3.5]. When $$\phi _{k}(\bar{\lambda })>\phi _{k}(0)+\bar{\lambda }\phi _{k}'(0)$$, the function $$\varphi _{k}$$ has a global minimiser at
\begin{aligned} \widehat{\lambda _{k}}:=-\frac{\phi _{k}'(0)\bar{\lambda }^{2}}{2\left( \phi _{k}(\bar{\lambda })-\phi _{k}(0)-\phi _{k}'(0)\bar{\lambda }\right) }. \end{aligned}
(25)
This suggests the following modification of Algorithm 2.
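The step size (25) is straightforward to implement. The sketch below (illustrative Python with a hypothetical function name, not the paper's code) computes $$\widehat{\lambda _{k}}$$ from the three interpolation data and guards against the case where the interpolant fails to be strictly convex.

```python
# Sketch of the quadratic-interpolation step size given by (24)-(25).

def quad_interp_step(phi0, dphi0, phi_bar, lam_bar):
    """Global minimiser (25) of the interpolating quadratic (24).

    Valid when phi_bar > phi0 + lam_bar * dphi0, so the leading
    coefficient of the quadratic is positive.
    """
    denom = 2.0 * (phi_bar - phi0 - dphi0 * lam_bar)
    assert denom > 0.0, "interpolant is not strictly convex"
    return -dphi0 * lam_bar**2 / denom

# For the quadratic phi_k(lam) = (lam - 2)^2 the interpolation is exact,
# so the computed step is the true minimiser lam = 2.
lam_hat = quad_interp_step(phi0=4.0, dphi0=-4.0, phi_bar=1.0, lam_bar=1.0)
print(lam_hat)  # 2.0
```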

### Corollary 2

The statements in Theorem 1 also apply to Algorithm 3.

### Proof

Just observe that the proof of Theorem 1 remains valid as long as the step sizes are positive and bounded above by some constant. Algorithm 3 uses the same directions as Algorithm 2, and the step sizes chosen by Algorithm 3 are positive and bounded above by $$\lambda _{\max }$$. $$\square$$

Another option here would be to construct a quadratic approximation $$\psi _{k}$$ using $$\phi _{k}(-1)=\phi (x_{k})$$ instead of $$\phi _{k}(\bar{\lambda })$$. This interpolation is computationally less expensive, as it does not require the computation of $$\phi _{k}(\bar{\lambda })$$. Nevertheless, our numerical tests for the functions in Sect. 4 show that this approximation usually fits the function $$\phi _{k}$$ more poorly. In particular, this situation occurs in Example 1, as shown in Fig. 2.

One could also construct a cubic function that interpolates $$\phi _{k}(-1)$$, $$\phi _{k}(0)$$, $$\phi _{k}'(0)$$ and $$\phi _{k}(\bar{\lambda })$$, see [20, Section 3.5]. However, for the functions in Sect. 4, we have observed that this cubic function usually fits the function $$\phi _{k}$$ worse than the quadratic function $$\varphi _{k}$$ in (24).

### Remark 4

Observe that Algorithm 2 and Algorithm 3 still work well if we replace Step 2 by the following proximal step as in [18]
\begin{aligned} \left( \mathscr {P}_{k}\right) \quad \underset{x\in \mathbb {R}^{m}}{\text {minimise}}\; g(x)-\langle \nabla h(x_{k}),x\rangle +\frac{1}{2c_{k}}\Vert x-x_{k}\Vert ^{2}, \end{aligned}
for some positive constants $$c_{k}$$.

### Example 2

(Finding zeroes of systems of DC functions)

Suppose that one wants to find a zero of the system of equations
\begin{aligned} p(x)=c(x),\quad x\in \mathbb {R}^{m} \end{aligned}
(26)
where $$p:\mathbb {R}^{m}\rightarrow \mathbb {R}_{+}^{m}$$ and $$c:\mathbb {R}^{m}\rightarrow \mathbb {R}_{+}^{m}$$ are twice continuously differentiable functions such that $$p_{i}:\mathbb {R}^{m}\rightarrow \mathbb {R}_{+}$$ and $$c_{i}:\mathbb {R}^{m}\rightarrow \mathbb {R}_{+}$$ are convex functions for all $$i=1,\ldots ,m$$. Then,
\begin{aligned} \Vert p(x)-c(x)\Vert ^{2}=2\left( \Vert p(x)\Vert ^{2}+\Vert c(x)\Vert ^{2}\right) -\Vert p(x)+c(x)\Vert ^{2}. \end{aligned}
Observe that all the components of p(x) and c(x) are nonnegative convex functions. Hence, both $$f_{1}(x):=2\left( \Vert p(x)\Vert ^{2}+\Vert c(x)\Vert ^{2}\right)$$ and $$f_{2}(x):=\Vert p(x)+c(x)\Vert ^{2}$$ are continuously differentiable convex functions, because they can be expressed as a finite combination of sums and products of nonnegative convex functions. Thus, we can apply either DCA or BDCA to find a solution to (26) by setting $$\phi (x):=f_{1}(x)-f_{2}(x).$$
Let $$f(x):=p(x)-c(x)$$ for $$x\in \mathbb {R}^{m}$$. Suppose that $$\bar{x}$$ is an accumulation point of the sequence $$\left\{ x_{k}\right\}$$ generated by either Algorithm 2 or Algorithm 3, and assume that $$\nabla f(\bar{x})$$ is nonsingular. Then, by Proposition 5, we must have $$\nabla f(\bar{x})f(\bar{x})=0_{m}$$, which implies that $$f(\bar{x})=0_{m}$$, as  $$\nabla f(\bar{x})$$ is nonsingular. Moreover, for all x close to $$\bar{x}$$, we have
\begin{aligned} |\phi (x)-\phi (\bar{x})|^{\frac{1}{2}}&=\phi (x)^{\frac{1}{2}}=\Vert f(x)\Vert =\left\| \left( \nabla f(x)\right) ^{-1}\nabla f(x)f(x)\right\| \\&\le \left\| \left( \nabla f(x)\right) ^{-1}\right\| \left\| \nabla f(x)f(x)\right\| =\frac{1}{2}\left\| \left( \nabla f(x)\right) ^{-1}\right\| \Vert \nabla \phi (x)\Vert \\&\le M\Vert \nabla \phi (x)\Vert , \end{aligned}
where $$\Vert \cdot \Vert$$ also denotes the induced matrix norm and M is an upper bound of $$\frac{1}{2}\Vert \left( \nabla f(x)\right) ^{-1}\Vert$$ around $$\bar{x}$$. Thus, $$\phi$$ has the Łojasiewicz property at $$\bar{x}$$ with exponent $$\theta =\frac{1}{2}$$. Finally, for all $$\rho >0$$, the function $$g(x):=f_{1}(x)+\frac{\rho }{2}\Vert x\Vert ^{2}$$ is twice continuously differentiable, which in particular implies that $$\nabla g$$ is locally Lipschitz continuous. Therefore, either Theorem 1 or Corollary 2 guarantee the linear convergence of $$\left\{ x_{k}\right\}$$ to $$\bar{x}$$.
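The decomposition above rests on the algebraic identity $$\Vert p-c\Vert ^{2}=2(\Vert p\Vert ^{2}+\Vert c\Vert ^{2})-\Vert p+c\Vert ^{2}$$, which in fact holds for arbitrary vectors; the short Python check below (illustrative, with made-up data) verifies it numerically.

```python
import numpy as np

# Numerical check of the DC decomposition used in Example 2:
# ||p - c||^2 = 2(||p||^2 + ||c||^2) - ||p + c||^2.
# For componentwise nonnegative convex p(x), c(x), both right-hand terms
# are convex, which is what makes this a DC decomposition.

rng = np.random.default_rng(0)
p = rng.random(5)          # stand-ins for p(x), c(x) at some fixed x
c = rng.random(5)

lhs = np.linalg.norm(p - c) ** 2
f1 = 2.0 * (np.linalg.norm(p) ** 2 + np.linalg.norm(c) ** 2)
f2 = np.linalg.norm(p + c) ** 2
print(abs(lhs - (f1 - f2)))  # ~0 up to rounding
```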

## 4 A DC problem in biochemistry

Consider a biochemical network with m molecular species and n reversible elementary reactions.2 Define forward and reverse stoichiometric matrices, $$F,R\in \mathbb {Z}_{\ge 0}^{m\times n}$$, respectively, where $$F_{ij}$$ denotes the stoichiometry3 of the $$i\mathrm{th}$$ molecular species in the $$j\mathrm{th}$$ forward reaction and $$R_{ij}$$ denotes the stoichiometry of the $$i\mathrm{th}$$ molecular species in the $$j\mathrm{th}$$ reverse reaction. We use the standard inner product in $$\mathbb {R}^{m}$$, i.e., $$\langle x,y\rangle =x^{T}y$$ for all $$x,y\in \mathbb {R}^{m}$$. We assume that every reaction conserves mass, that is, there exists at least one positive vector $$l\in \mathbb {R}_{>0}^{m}$$ satisfying $$(R-F)^{T}l=0_{n}$$ [8], where $$R-F$$ represents net reaction stoichiometry. We assume the cardinality4 of each row of F and R is at least one, and the cardinality of each column of $$R-F$$ is at least two, usually three. Therefore, $$R-F$$ may be viewed as the incidence matrix of a directed hypergraph. The matrices F and R are sparse, and the sparsity pattern depends on the particular biochemical network being modelled.

Let $$u\in \mathbb {R}_{>0}^{m}$$ denote a variable vector of molecular species concentrations. Assuming constant nonnegative elementary kinetic parameters $$k_{f},k_{r}\in \mathbb {R}_{\ge 0}^{n}$$, we presume elementary reaction kinetics for forward and reverse elementary reaction rates as $$s(k_{f},u):=\exp (\ln (k_{f})+F^{T}\ln (u))$$ and $$r(k_{r},u):=\exp (\ln (k_{r})+R^{T}\ln (u))$$, respectively, where $$\exp (\cdot )$$ and $$\ln (\cdot )$$ denote the respective componentwise functions. Then, the deterministic dynamical equation for time evolution of molecular species concentration is given by
\begin{aligned} \frac{du}{dt}\equiv (R-F)\left( s\left( k_{f},u\right) -r\left( k_{r},u\right) \right) \end{aligned}
(27)
\begin{aligned} =(R-F)\left( \exp \left( \ln \left( k_{f}\right) +F^{T}\ln (u)\right) -\exp \left( \ln \left( k_{r}\right) +R^{T}\ln (u)\right) \right) . \end{aligned}
(28)
Investigation of steady states plays a crucial role in the modelling of biochemical reaction systems. If one transforms (28) to logarithmic scale, by letting $$x\equiv \ln (u)\in \mathbb {R}^{m}$$, $$w\equiv [\ln (k_{f})^{T},\,\ln (k_{r})^{T}]^{T}\in \mathbb {R}^{2n}$$, then, up to a sign, the right-hand side of (28) is equal to the function
\begin{aligned} f(x):=\left( [F,\, R]-[R,\, F]\right) \exp \left( w+[F,\, R]^{T}x\right) , \end{aligned}
(29)
where $$\left[ \,\cdot ,\cdot \,\right]$$ stands for the horizontal concatenation operator. Thus, we shall focus on finding the points $$x\in \mathbb {R}^{m}$$ such that $$f(x)=0_m$$, which correspond to the steady states of the dynamical equation (27).
A point $$\bar{x}$$ will be a zero of the function f if and only if $$\Vert f(\bar{x})\Vert ^{2}=0$$. Denoting
\begin{aligned} p(x)&:=[F,R]\exp \left( w+[F,R]^{T}x\right) ,\\ c(x)&:=[R,F]\exp \left( w+[F,R]^{T}x\right) , \end{aligned}
one obtains, as in Example 2,
\begin{aligned} \Vert f(x)\Vert ^{2}=\Vert p(x)-c(x)\Vert ^{2}=2\left( \Vert p(x)\Vert ^{2}+\Vert c(x)\Vert ^{2}\right) -\Vert p(x)+c(x)\Vert ^{2}. \end{aligned}
Again, as all the components of p(x) and c(x) are positive and convex functions,5 both
\begin{aligned} f_{1}(x):=2\left( \Vert p(x)\Vert ^{2}+\Vert c(x)\Vert ^{2}\right) \quad \text {and}\quad f_{2}(x):=\Vert p(x)+c(x)\Vert ^{2} \end{aligned}
(30)
are convex functions. In addition to this, both $$f_{1}$$ and $$f_{2}$$ are smooth, having
\begin{aligned} \nabla f_{1}(x)&=4\nabla p(x)p(x)+4\nabla c(x)c(x),\\ \nabla f_{2}(x)&=2\left( \nabla p(x)+\nabla c(x)\right) \left( p(x)+c(x)\right) , \end{aligned}
see e.g. [20, pp. 245–246], with
\begin{aligned} \nabla p(x)&=[F,R]\,\text {EXP}\left( w+[F,R]^{T}x\right) [F,R]^{T},\\ \nabla c(x)&=[F,R]\,\text {EXP}\left( w+[F,R]^{T}x\right) [R,F]^{T}, \end{aligned}
where $$\text {EXP}\left( \cdot \right)$$ denotes the diagonal matrix whose entries are the elements in the vector $$\exp \left( \cdot \right)$$.
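As an illustrative sanity check (a Python sketch with tiny made-up matrices F, R and parameters w, not data from any real model), the gradient formula for $$f_{1}$$ can be compared against central finite differences:

```python
import numpy as np

# Finite-difference check of the gradient of f1 in (30), using the formulas
# grad p(x) = [F,R] EXP(w + [F,R]^T x) [F,R]^T and
# grad c(x) = [F,R] EXP(w + [F,R]^T x) [R,F]^T.

rng = np.random.default_rng(1)
m, n = 3, 4
F = rng.integers(0, 2, size=(m, n)).astype(float)   # made-up stoichiometry
R = rng.integers(0, 2, size=(m, n)).astype(float)
w = rng.uniform(-1.0, 1.0, size=2 * n)
FR, RF = np.hstack([F, R]), np.hstack([R, F])       # [F,R] and [R,F]

def p(x):       # p(x) = [F,R] exp(w + [F,R]^T x)
    return FR @ np.exp(w + FR.T @ x)

def c(x):       # c(x) = [R,F] exp(w + [F,R]^T x)
    return RF @ np.exp(w + FR.T @ x)

def grad_p(x):  # [F,R] EXP(w + [F,R]^T x) [F,R]^T
    return FR @ np.diag(np.exp(w + FR.T @ x)) @ FR.T

def grad_c(x):  # [F,R] EXP(w + [F,R]^T x) [R,F]^T
    return FR @ np.diag(np.exp(w + FR.T @ x)) @ RF.T

def f1(x):
    return 2.0 * (p(x) @ p(x) + c(x) @ c(x))

def grad_f1(x):  # 4 grad_p(x) p(x) + 4 grad_c(x) c(x)
    return 4.0 * (grad_p(x) @ p(x) + grad_c(x) @ c(x))

x = rng.uniform(-0.5, 0.5, size=m)
eps = 1e-6
fd = np.array([(f1(x + eps * e) - f1(x - eps * e)) / (2 * eps) for e in np.eye(m)])
rel_err = np.max(np.abs(fd - grad_f1(x))) / (1.0 + np.max(np.abs(grad_f1(x))))
print(rel_err)  # close to zero
```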
Setting $$\phi (x):=f_{1}(x)-f_{2}(x)$$, the problem of finding a zero of f is equivalent to the following optimisation problem:
\begin{aligned} \underset{x\in \mathbb {R}^{m}}{\text {minimise}}\;\phi (x):=f_{1}(x)-f_{2}(x). \end{aligned}
(31)
We now prove that $$\phi$$ satisfies the Łojasiewicz property. Denoting $$A:=[F,\, R]-[R,\, F]$$ and $$B:=[F,\, R]^{T}$$ we can write
\begin{aligned} \phi (x)&=f(x)^{T}f(x)=\exp \left( w+Bx\right) ^{T}A^{T}A\exp \left( w+Bx\right) \\&=\exp \left( w+Bx\right) ^{T}Q\exp \left( w+Bx\right) \\&=\sum _{j,k=1}^{2n}q_{j,k}\exp \left( w_{j}+w_{k}+\sum _{i=1}^{m}(b_{ji}+b_{ki})x_{i}\right) , \end{aligned}
where $$Q=A^{T}A.$$ Since $$b_{ij}$$ are nonnegative integers for all i and j, we conclude that the function $$\phi$$ is real analytic (see Proposition 2.2.2 and Proposition 2.2.8 in [21]). It follows from Proposition 2 that the function $$\phi$$ satisfies the Łojasiewicz property with some exponent $$\theta \in [0,1)$$.
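The identity between $$\Vert f(x)\Vert ^{2}$$ and the sum-of-exponentials form above (which is what makes $$\phi$$ real analytic) can be checked numerically; the Python sketch below uses small made-up data for F, R and w.

```python
import numpy as np

# Check that phi(x) = ||A exp(w + Bx)||^2 equals the expanded form
# sum_{j,k} q_{jk} exp(w_j + w_k + (b_j + b_k) . x), where Q = A^T A
# and b_j denotes the j-th row of B.

rng = np.random.default_rng(2)
m, n = 3, 4
F = rng.integers(0, 2, size=(m, n)).astype(float)   # made-up stoichiometry
R = rng.integers(0, 2, size=(m, n)).astype(float)
w = rng.uniform(-1.0, 1.0, size=2 * n)
A = np.hstack([F, R]) - np.hstack([R, F])           # A = [F,R] - [R,F]
B = np.hstack([F, R]).T                             # B = [F,R]^T
Q = A.T @ A

x = rng.uniform(-0.5, 0.5, size=m)
phi_direct = np.linalg.norm(A @ np.exp(w + B @ x)) ** 2
phi_sum = sum(Q[j, k] * np.exp(w[j] + w[k] + (B[j] + B[k]) @ x)
              for j in range(2 * n) for k in range(2 * n))
print(abs(phi_direct - phi_sum))  # ~0 up to rounding
```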

Finally, as in Example 2, for all $$\rho >0$$, the function $$g(x):=f_{1}(x)+\frac{\rho }{2}\Vert x\Vert ^{2}$$ is twice continuously differentiable, which implies that $$\nabla g$$ is locally Lipschitz continuous. Therefore, either Theorem 1 or Corollary 2 guarantee the convergence of the sequence generated by BDCA, as long as the sequence is bounded.

### Remark 5

In principle, one cannot guarantee the linear convergence of BDCA applied to biochemical problems for finding steady states. By the mass conservation assumption, there exists $$l\in \mathbb {R}_{>0}^{m}$$ such that $$(R-F)^{T}l=0_{n}$$. This implies that $$\nabla f(x)$$ is singular for every $$x\in \mathbb {R}^{m}$$, because
\begin{aligned} \nabla f(x)l=[F,R]\text {EXP}\left( w+[F,R]^{T}x\right) [F-R,R-F]^{T}l=0_{m}. \end{aligned}
Therefore, the reasoning in Example 2 cannot be applied. However, one can still guarantee that any stationary point of $$\Vert f(x)\Vert ^{2}$$ is actually a steady state of the considered biochemical reaction network if the function f is strictly duplomonotone [2]. A function $$f:\mathbb {R}^{m}\rightarrow \mathbb {R}^{m}$$ is called duplomonotone with constant $$\bar{\tau }>0$$ if
\begin{aligned} \left\langle f(x)-f(x-\tau f(x)),f(x)\right\rangle \ge 0\quad \text {whenever }x\in \mathbb {R}^{m},0\le \tau \le \bar{\tau }, \end{aligned}
and strictly duplomonotone if this inequality is strict whenever $$f(x)\ne 0_m$$. If f is differentiable and strictly duplomonotone, then $$\nabla f(x)f(x)=0_m$$ implies $$f(x)=0_m$$ [2]. We previously established that some stoichiometric matrices give rise to strictly duplomonotone functions [2], and our numerical experiments, described next, support the hypothesis that this is a pervasive property of many biochemical networks.
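Duplomonotonicity can be probed numerically. The Python sketch below (illustrative only; it certifies nothing about biochemical networks) samples the defining inequality for the toy map $$f(x)=Mx$$ with M symmetric positive definite, for which one can check directly that $$\left\langle f(x)-f(x-\tau f(x)),f(x)\right\rangle =\tau \, x^{T}M^{3}x\ge 0$$, so f is strictly duplomonotone.

```python
import numpy as np

# Sample the duplomonotonicity inequality
#   <f(x) - f(x - tau f(x)), f(x)> >= 0   for 0 <= tau <= tau_bar
# for the toy linear map f(x) = M x with M symmetric positive definite.

rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
M = G @ G.T + 4.0 * np.eye(4)      # symmetric positive definite

def f(x):
    return M @ x

tau_bar = 0.1
ok = True
for _ in range(100):               # random sample points x
    x = rng.standard_normal(4)
    fx = f(x)
    for tau in np.linspace(0.0, tau_bar, 11):
        if (fx - f(x - tau * fx)) @ fx < -1e-10:
            ok = False
print(ok)  # True
```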
Table 1

Performance comparison of BDCA and DCA for finding a steady state of various biochemical reaction network models

| Model name | m | n | $$\phi (x_{0})$$ (avg.) | $$\phi (x_{\mathrm{end}})$$ (avg.) | BDCA time (s) min./max./avg. | DCA iterations min./max./avg. | DCA time (s) min./max./avg. | Ratio (avg.) DCA/BDCA iter. | Ratio (avg.) DCA/BDCA time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ecoli core | 72 | 94 | 5.28e6 | 5.80 | 16.6 / 25.6 / 19.7 | 4101 / 6176 / 4861 | 68.0 / 104.7 / 86.8 | 4.9 | 4.4 |
| L lactis MG1363 | 486 | 615 | 2.00e7 | 62.73 | 2926.1 / 4029.1 / 3424.4 | 4164 / 6362 / 5241 | 14522.4 / 18212.5 / 16670.1 | 5.2 | 4.9 |
| Sc thermophilis | 349 | 444 | 1.95e7 | 84.99 | 290.8 / 552.7 / 358.4 | 4021 / 6303 / 4873 | 1302.1 / 2003.8 / 1611.1 | 4.9 | 4.5 |
| T Maritima | 434 | 554 | 3.54e7 | 114.26 | 1333.0 / 2623.3 / 1919.7 | 3536 / 5839 / 4700 | 5476.2 / 12559.2 / 8517.1 | 4.7 | 4.4 |
| iAF692 | 466 | 546 | 2.32e7 | 57.42 | 1676.8 / 2275.3 / 1967.4 | 4215 / 7069 / 5303 | 8337.0 / 11187.5 / 9466.3 | 5.3 | 4.8 |
| iAI549 | 307 | 355 | 1.10e7 | 35.90 | 177.2 / 254.4 / 209.2 | 3670 / 5498 / 4859 | 665.1 / 1078.2 / 913.4 | 4.9 | 4.4 |
| iAN840m | 549 | 840 | 2.58e7 | 105.18 | 3229.1 / 6939.3 / 4720.6 | 4254 / 5957 / 4971 | 16473.3 / 28956.7 / 21413.2 | 5.0 | 4.5 |
| iCB925 | 416 | 584 | 1.52e7 | 67.54 | 1830.7 / 2450.5 / 2133.4 | 3847 / 6204 / 5030 | 7358.2 / 11464.6 / 9886.8 | 5.0 | 4.6 |
| iIT341 | 425 | 504 | 7.23e6 | 139.71 | 1925.2 / 2883.1 / 2301.8 | 3964 / 9794 / 5712 | 9433.8 / 20310.3 / 12262.0 | 5.7 | 5.3 |
| iJR904 | 597 | 915 | 1.47e7 | 139.63 | 6363.1 / 9836.2 / 7623.0 | 4173 / 5341 / 4776 | 24988.5 / 43639.8 / 33620.6 | 4.4 | 4.8 |
| iMB745 | 528 | 652 | 2.77e7 | 305.80 | 2629.1 / 5090.7 / 4252.3 | 3986 / 7340 / 5020 | 16437.8 / 25171.6 / 20269.3 | 5.0 | 4.8 |
| iSB619 | 462 | 598 | 1.64e7 | 40.64 | 2406.7 / 5972.2 / 3323.5 | 2476 / 6064 / 4260 | 8346.1 / 25468.1 / 13966.9 | 4.3 | 4.2 |
| iTH366 | 587 | 713 | 3.42e7 | 63.37 | 3310.2 / 5707.3 / 4464.2 | 4089 / 6363 / 4965 | 13612.7 / 30044.1 / 20715.5 | 5.0 | 4.6 |
| iTZ479 v2 | 435 | 560 | 1.97e7 | 78.12 | 1211.4 / 2655.8 / 2216.4 | 3763 / 6181 / 4857 | 7368.1 / 12591.6 / 10119.8 | 4.9 | 4.6 |

For each model, we selected a random kinetic parameter $$w\in [-1,1]^{2n}$$ and randomly chose 10 initial points $$x_{0}\in [-2,2]^{m}$$. For each $$x_{0}$$, BDCA was run for 1000 iterations, while DCA was run until it reached the same value of $$\phi (x)$$ as obtained with BDCA

## 5 Numerical experiments

The codes are written in MATLAB and the experiments were performed in MATLAB version R2014b on a desktop with an Intel Core i7-4770 CPU @ 3.40 GHz and 16 GB RAM, under Windows 8.1 64-bit. The subproblems $$(\mathscr {P}_{k})$$ were approximately solved using the function fminunc with optimoptions('fminunc', 'Algorithm', 'trust-region', 'GradObj', 'on', 'Hessian', 'on', 'Display', 'off', 'TolFun', 1e-8, 'TolX', 1e-8).

In Table 1 we report the numerical results comparing DCA and BDCA with quadratic interpolation (Algorithm 3) for 14 models arising from the study of systems of biochemical reactions. The parameters used were $$\alpha =0.4$$, $$\beta =0.5$$, $$\bar{\lambda }=50$$ and $$\rho =100$$. We only provide the numerical results for Algorithm 3 because it normally gives better results than Algorithm 2 for biochemical models, as shown in Fig. 3. In Fig. 4 we show a comparison of the rate of convergence of DCA and BDCA with quadratic interpolation for two large models. In principle, a relatively large value of the parameter $$\rho$$ could slow down the convergence of DCA. This is not the case here: the behaviour of DCA is usually the same for values of $$\rho$$ between 0 and 100, see Figs. 3 and 4 (left). In fact, for large models, we observed that a value of $$\rho$$ between 50 and 100 normally accelerates the convergence of both DCA and BDCA, as shown in Fig. 4 (right). For these reasons, for the numerical results in Table 1, we applied both DCA and BDCA to the regularised version $$g(x)-h(x)$$ with $$g(x)=f_1(x)+\frac{100}{2}\Vert x\Vert ^2$$ and $$h(x)=f_2(x)+\frac{100}{2}\Vert x\Vert ^2$$, where $$f_1$$ and $$f_2$$ are given by (30).
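The MATLAB setup above is not reproduced here, but the structure of the regularised DCA loop can be sketched in plain Python (an illustrative stand-in, not the authors' code) on the toy problem of Example 1: with $$g(x)=\frac{1}{4}x^{4}+\frac{\rho }{2}x^{2}$$ and $$h(x)=\frac{1}{2}x^{2}+\frac{\rho }{2}x^{2}$$, the optimality condition of the subproblem $$(\mathscr {P}_{k})$$ is the scalar equation $$y^{3}+\rho y=(1+\rho )x_{k}$$, which we solve by Newton's method.

```python
# Regularised DCA on phi(x) = x^4/4 - x^2/2 with rho = 1 (toy stand-in for
# the fminunc subproblem solves of Section 5).

rho = 1.0

def solve_subproblem(xk, iters=50):
    # (P_k): grad g(y) = grad h(xk), i.e. y^3 + rho*y = (1 + rho)*xk,
    # a strongly monotone scalar equation solved by Newton's method.
    y = xk
    for _ in range(iters):
        y -= (y**3 + rho * y - (1.0 + rho) * xk) / (3.0 * y**2 + rho)
    return y

x = 2.0
for _ in range(100):       # plain DCA: x_{k+1} = y_k (no line search)
    x = solve_subproblem(x)
print(x)  # converges to the stationary point 1
```

BDCA would additionally take a line-search step from each $$y_{k}$$ along $$d_{k}=y_{k}-x_{k}$$; here the plain DCA iterates already converge linearly to the minimiser.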

## 6 Concluding remarks

In this paper, we introduce two new algorithms for minimising smooth DC functions, which we term Boosted Difference of Convex function Algorithms (BDCA). Our algorithms combine DCA with a line search that utilises the point generated by DCA to define a search direction. This direction is also employed by Fukushima–Mine in [7], with the difference that our algorithms start searching for the new candidate from the point generated by DCA, instead of starting from the previous iteration. Thus, our main contribution comes from the observation that this direction is not only a descent direction for the objective function at the previous iteration, as observed by Fukushima–Mine, but is also a descent direction at the point defined by DCA. Therefore, with the slight additional computational effort of a line search, one can achieve a significant decrease in the value of the objective function. This result cannot be directly generalised to nonsmooth functions, as shown in Remark 1. We prove that every cluster point of the sequences generated by our algorithms is a stationary point of the optimisation problem. Moreover, when the objective function satisfies the Łojasiewicz property, we prove global convergence of the algorithms and establish convergence rates.

We demonstrate that the important problem of finding a steady state in the dynamical modelling of systems of biochemical reactions can be formulated as an optimisation problem involving a difference of convex functions. We have performed numerical experiments, using models of systems of biochemical reactions from various species, in order to find steady states. The tests clearly show that our algorithm outperforms DCA, achieving the same decrease in the value of the DC function while employing substantially fewer iterations and less time. On average, DCA needed five times more iterations to achieve the same accuracy as BDCA. Furthermore, our implementation of BDCA was also more than four times faster than DCA. In fact, the slowest instance of BDCA was always at least three times faster than DCA. This substantial increase in performance is especially relevant because realistic biochemical network models are typically large.

## Footnotes

1. Recon 2 is the most comprehensive representation of human metabolism that is applicable to computational modelling [25]. This biochemical network model involves more than four thousand molecular species and seven thousand reversible elementary reactions.

2. An elementary reaction is a chemical reaction for which no intermediate molecular species need to be postulated in order to describe the chemical reaction on a molecular scale.

3. Reaction stoichiometry is a quantitative relationship between the relative quantities of molecular species involved in a single chemical reaction.

4. By cardinality we mean the number of nonzero components.

5. Note that p(x) is the rate of production of each molecule and c(x) is the rate of consumption of each molecule.

## Notes

### Acknowledgements

The authors wish to thank Christian Kanzow for pointing out Remark 3, and Aris Daniilidis for his helpful information on the Łojasiewicz exponent. The authors are also grateful to an anonymous referee for their pertinent and constructive comments.

## References

1. Absil, P.A., Mahony, R., Andrews, B.: Convergence of the iterates of descent methods for analytic cost functions. SIAM J. Optim. 16(2), 531–547 (2005)
2. Aragón Artacho, F.J., Fleming, R.M.T.: Globally convergent algorithms for finding zeros of duplomonotone mappings. Optim. Lett. 9(3), 569–584 (2015)
3. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116(1–2), 5–16 (2009)
4. Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17(4), 1205–1223 (2007)
5. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1–2), 459–494 (2013)
6. Collobert, R., Sinz, F., Weston, J., Bottou, L.: Trading convexity for scalability. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 201–208. ACM (2006)
7. Fukushima, M., Mine, H.: A generalized proximal point algorithm for certain non-convex minimization problems. Int. J. Syst. Sci. 12(8), 989–1000 (1981)
8. Gevorgyan, A., Poolman, M.G., Fell, D.A.: Detection of stoichiometric inconsistencies in biomolecular models. Bioinformatics 24(19), 2245–2251 (2008)
9. Huang, Y., Liu, H., Zhou, S.: A Barzilai–Borwein type method for stochastic linear complementarity problems. Numer. Algorithms 67(3), 477–489 (2014)
10. Le Thi, H.A., Pham Dinh, T.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 133(1–4), 23–46 (2005)
11. Le Thi, H.A., Pham Dinh, T.: On solving linear complementarity problems by DC programming and DCA. Comput. Optim. Appl. 50(3), 507–524 (2011)
12. Le Thi, H.A., Pham Dinh, T., Muu, L.D.: Numerical solution for optimization over the efficient set by D.C. optimization algorithms. Oper. Res. Lett. 19(3), 117–128 (1996)
13. Le Thi, H.A., Huynh, V.N., Pham Dinh, T.: Convergence analysis of DC algorithm for DC programming with subanalytic data. Ann. Oper. Res. Technical Report, LMI, INSA-Rouen (2009)
14. Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton type methods for minimizing composite functions. SIAM J. Optim. 24(3), 1420–1443 (2014)
15. Li, G., Pong, T.K.: Douglas–Rachford splitting for nonconvex optimization with application to nonconvex feasibility problems. Math. Program. 159(1), 371–401 (2016)
16. Łojasiewicz, S.: Ensembles semi-analytiques. Institut des Hautes Etudes Scientifiques, Bures-sur-Yvette (Seine-et-Oise), France (1965)
17. Mine, H., Fukushima, M.: A minimization method for the sum of a convex function and a continuously differentiable function. J. Optim. Theory Appl. 33(1), 9–23 (1981)
18. Moudafi, A., Mainge, P.: On the convergence of an approximate proximal method for DC functions. J. Comput. Math. 24(4), 475–480 (2006)
19. Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140(1), 125–161 (2013)
20. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering, 2nd edn. Springer, New York (2006)
21. Parks, H.R., Krantz, S.G.: A Primer of Real Analytic Functions. Birkhäuser, Basel (1992)
22. Pham Dinh, T., Le Thi, H.A.: A DC optimization algorithm for solving the trust-region subproblem. SIAM J. Optim. 8(2), 476–505 (1998)
23. Pham Dinh, T., Souad, E.B.: Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients. In: Hiriart-Urruty, J.-B. (ed.) FERMAT Days 85: Mathematics for Optimization, North-Holland Mathematics Studies, vol. 129, pp. 249–271. Elsevier, Amsterdam (1986)
24. Schnörr, C., Schüle, T., Weber, S.: Variational reconstruction with DC-programming. In: Herman, G.T., Kuba, A. (eds.) Advances in Discrete Tomography and its Applications, pp. 227–243. Springer, Berlin (2007)
25. Thiele, I., et al.: A community-driven global reconstruction of human metabolism. Nat. Biotechnol. 31(5), 419–425 (2013)