Abstract
For the sparse signal reconstruction problem in compressive sensing, we propose a projection-type algorithm that requires no backtracking line search, based on a new formulation of the problem. Under suitable conditions, the global convergence and linear convergence rate of the designed algorithm are established. The efficiency of the algorithm is illustrated through numerical experiments on sparse signal reconstruction problems.
1 Introduction
A basic mathematical problem in compressive sensing (CS) is to recover a sparse signal vector \(x\in R^n\) from an underdetermined linear system \(y=Ax\), where \(A\in R^{m\times n}\) (\(m\ll n\)) is the sensing matrix. A fundamental decoding model in CS is the following basis pursuit denoising problem, which can be mathematically formulated as
$$\begin{aligned} \min _{x\in R^n}\ \frac{1}{2}\Vert Ax-y\Vert ^2+\rho \Vert x\Vert _1, \end{aligned}$$(1.1)
where \(\rho >0\) is the regularization parameter and \(\Vert x\Vert _1\) is the \(\ell _1\)-norm of the vector x, i.e., \(\Vert x\Vert _1=\sum \nolimits _{i=1}^n|x_i|\). For more information, see, e.g., [6,7,8, 13, 18, 21, 23, 27, 30, 32, 33, 38, 40, 47, 50, 52,53,54,55, 57, 64, 69,70,72]. Throughout this paper, we assume that the solution set of (1.1) is nonempty.
Obviously, the function \(\Vert x\Vert _1\) is convex although it is not differentiable. For the convex optimization problem (1.1), there are some standard methods such as smooth Newton-type methods and interior-point methods [2, 15, 19, 22, 25, 28, 29, 37, 43, 44, 46, 48, 49, 51, 60, 63, 65]. Candès et al. [3] developed a novel method for sparse signal recovery via a more generic \(\ell _1\)-minimization. Yin et al. [66] proposed an efficient method for solving the \(\ell _1\)-minimization problem based on Bregman iterative regularization. Hale et al. [16] presented a framework for solving the large-scale \(\ell _1\)-regularized convex minimization problem based on operator splitting and continuation. However, these solvers are not tailored to the large-scale cases arising in CS and become inefficient as the dimension n increases. To overcome this drawback, Figueiredo et al. [14] proposed a gradient projection-type algorithm with a backtracking line search for a box-constrained quadratic programming formulation of (1.1). A similar algorithm based on the conjugate gradient technique was proposed by Xiao and Zhu [61]. For more detail, see [4, 5, 9,10,11,12, 17, 20, 24, 26, 31, 34, 36, 39, 41, 42, 56, 58, 59, 62, 68]. Due to the high computing cost of the line search procedure, in this paper we propose a new projection-type algorithm for problem (1.1) that performs no line search at any iteration, thereby reducing the computing cost of the algorithm.
The remainder of this paper is organized as follows. Some equivalent reformulations of problem (1.1) are established in Sect. 2. In Sect. 3, we propose a new projection-type algorithm without line search, and establish the global convergence of the new algorithm and its linear convergence rate. In Sect. 4, some numerical experiments on compressive sensing are given to illustrate the efficiency of the proposed method. Some concluding remarks are drawn in Sect. 5.
To end this section, we list some notation used in this paper. We use \( R_+^n\) to denote the nonnegative orthant in \( R^n\), and use \(x_{+}\) to denote the orthogonal projection of vector \(x\in R^n\) onto \( R^n_{+}\), that is, \((x_{+})_{i}:=\max \{x_{i},0\},~1\le i \le n\); the norms \(\Vert \cdot \Vert \) and \(\Vert \cdot \Vert _{1}\) denote the Euclidean 2-norm and the 1-norm, respectively.
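The projection \(x_+\) onto the nonnegative orthant is simply componentwise thresholding at zero; a minimal numerical sketch (the function name is ours):

```python
import numpy as np

def project_nonneg(x):
    """Orthogonal projection of x onto the nonnegative orthant R^n_+,
    i.e., (x_+)_i = max{x_i, 0} componentwise."""
    return np.maximum(x, 0.0)
```

For example, `project_nonneg(np.array([1.5, -2.0, 0.0]))` returns `[1.5, 0.0, 0.0]`.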
2 New formulation and algorithm
To propose a new projection-type algorithm for problem (1.1), we first establish a new equivalent reformulation. To this end, we define two nonnegative auxiliary variables \(\mu _i\) and \(\nu _i\) (\(i = 1,2,\ldots ,n\)) such that
$$\begin{aligned} \mu _i=(x_i)_+,\quad \nu _i=(-x_i)_+, \end{aligned}$$
so that \(x=\mu -\nu \) and \(|x_i|=\mu _i+\nu _i\).
Then, problem (1.1) can be reformulated as
$$\begin{aligned} \min _{\mu ,\nu \in R^n_+}\ \frac{1}{2}\Vert A(\mu -\nu )-y\Vert ^2+\rho e^\top (\mu +\nu ), \end{aligned}$$(2.1)
where \(e\in R^n\) denotes the vector with all entries being 1, i.e., \(e=(1,1,\ldots ,1)^\top \). Based on this, the problem can be simplified as
$$\begin{aligned} \min _{(\mu ;\nu )\in R^{2n}_+}\ f(\mu ;\nu ):=\frac{1}{2}(\mu ;\nu )^\top M(\mu ;\nu )-p^\top (\mu ;\nu ), \end{aligned}$$(2.2)
where \(M=(A,-A)^\top (A,-A), p=(A,-A)^\top y-\rho (e^\top , e^\top )^\top \).
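To make the reformulation concrete, the matrices M and p can be assembled as follows. This is a sketch with illustrative names; it also checks numerically that the quadratic \(f(\mu ;\nu )\) matches the original objective up to the constant \(\frac{1}{2}\Vert y\Vert ^2\):

```python
import numpy as np

def build_Mp(A, y, rho):
    """Assemble M = (A,-A)^T (A,-A) and p = (A,-A)^T y - rho*(e; e)
    from the quadratic reformulation of the l1-regularized problem."""
    B = np.hstack([A, -A])                  # the matrix (A, -A)
    M = B.T @ B                             # positive semi-definite Hessian
    p = B.T @ y - rho * np.ones(B.shape[1])
    return M, p
```

With \(\mu =x_+\), \(\nu =(-x)_+\) and \(z=(\mu ;\nu )\), one can verify that \(\frac{1}{2}\Vert Ax-y\Vert ^2+\rho \Vert x\Vert _1=\frac{1}{2}z^\top Mz-p^\top z+\frac{1}{2}\Vert y\Vert ^2\).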
Obviously, the Hessian matrix M of the quadratic function \(f(\mu ;\nu )\) is positive semi-definite. By optimization theory [1], the stationary points of (2.2) coincide with its solutions, which in turn coincide with the solution set of the following linear variational inequality problem: find \((\mu ;\nu )^*\in R_+^{2n}\) satisfying
$$\begin{aligned} \langle F((\mu ;\nu )^*),(\mu ;\nu )-(\mu ;\nu )^*\rangle \ge 0,\quad \forall \,(\mu ;\nu )\in R_+^{2n}, \end{aligned}$$(2.3)
where \(F(\mu ;\nu )=M(\mu ;\nu )-p\).
Obviously, the solution set of (2.3), denoted by \(\Omega ^*\), is nonempty provided that the solution of (1.1) is nonempty.
To proceed, we give the definition of projection operator and some related properties. For a nonempty closed convex set \(K\subset R^{n} \) and vector \(x\in R^{n}\), the orthogonal projection of x onto K, i.e., \(\arg \min \{\Vert y-x\Vert |y\in K\}\), is denoted by \(P_{K}(x)\).
Proposition 2.1
[1, 67]. Let K be a closed convex subset of \(R^{n}\). For any \(x,y\in R^{n}\) and \(z \in K\), the following statements hold.
(i) \(\langle P_{K}(x)-x,z-P_{K}(x)\rangle \ge 0\);

(ii) \(\Vert P_{K}(x)-P_{K}(y)\Vert ^{2}\le \Vert x-y\Vert ^{2}-\Vert (P_{K}(x)-x)-(P_{K}(y)-y)\Vert ^{2}\);

(iii) \(\Vert P_{K}(x)-x\Vert ^{2}\le \Vert x-z\Vert ^{2}-\Vert P_{K}(x)-z\Vert ^{2}\).
For problem (2.3) and \((\mu ;\nu )\in R^{2n}\), define the projection residue
$$\begin{aligned} r((\mu ;\nu ),\beta ):=(\mu ;\nu )-\{(\mu ;\nu )-\beta F(\mu ;\nu )\}_+, \end{aligned}$$(2.4)
where \(\beta >0\) is a constant, \(F(\mu ;\nu )=M(\mu ;\nu )-p\).
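The projection residue can be computed directly; a small sketch (names ours):

```python
import numpy as np

def residue(w, M, p, beta):
    """Projection residue r(w, beta) = w - [w - beta*F(w)]_+, with F(w) = M w - p.
    It vanishes exactly at solutions of the variational inequality (2.3)."""
    F = M @ w - p
    return w - np.maximum(w - beta * F, 0.0)
```

For instance, with \(M=I\) and \(p=(1,-1)^\top \), the point \(w^*=(1,0)^\top \) solves (2.3) and its residue is zero.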
The projection residue is intimately related to the solution of (2.3) as shown in the following conclusion [35].
Proposition 2.2
\((\mu ;\nu )^*\) is a solution of (2.3) if and only if \(r((\mu ;\nu )^*,\beta )=0\) for some \(\beta >0\).
Proposition 2.3
For \(H=\{(\mu ;\nu )\in R^{2n}~|~\alpha ^\top (\mu ;\nu )-b\le 0\}\) and any \(z\notin H\), it holds that
$$\begin{aligned} P_H(z)=z-\frac{\alpha ^\top z-b}{\Vert \alpha \Vert ^2}\,\alpha , \end{aligned}$$(2.5)
where \(z, \alpha \in R^{2n}, \alpha \ne 0, b\in R\).
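This is the standard closed-form projection onto a half space; a hedged numerical sketch (function name ours):

```python
import numpy as np

def project_halfspace(z, alpha, b):
    """Project z onto the half space H = {w : alpha^T w <= b}.
    Points already in H are fixed; otherwise the closed form
    P_H(z) = z - (alpha^T z - b)/||alpha||^2 * alpha applies."""
    viol = alpha @ z - b
    if viol <= 0.0:
        return z
    return z - (viol / (alpha @ alpha)) * alpha
```

For example, projecting \((2,0)^\top \) onto \(\{w~|~w_1\le 1\}\) gives \((1,0)^\top \).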
Based on the discussion above, we may formally state our algorithm.
Algorithm 3.1.
Step 0. Select any \(0<\beta <\frac{1}{\Vert M\Vert }, t\in [0,1], (\mu ;\nu )^0\in R^{2n}\). Let \(k:=0.\)
Step 1. Compute
$$\begin{aligned} z^{k}=\{(\mu ;\nu )^{k}-\beta F((\mu ;\nu )^{k})\}_+. \end{aligned}$$(2.6)
If \(\Vert r((\mu ;\nu )^{k}, \beta )\Vert =0\), stop. Otherwise, go to Step 2.
Step 2. Compute
$$\begin{aligned} (\mu ;\nu )^{k+1}=P_{H_k}((\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k})), \end{aligned}$$(2.7)
where
$$\begin{aligned} H_k:=\{(\mu ;\nu )\in R^{2n}~|~[r((\mu ;\nu )^k,\beta )-\beta F((\mu ;\nu )^{k})]^\top [(\mu ;\nu )-z^k]\le 0\}, \end{aligned}$$(2.8)
$$\begin{aligned} d((\mu ;\nu )^{k})=\frac{t}{\beta }[r((\mu ;\nu )^k,\beta )-\beta F((\mu ;\nu )^{k})]+F(z^k). \end{aligned}$$(2.9)
Step 3. Go to Step 1 by setting \(k:=k+1\).
In the algorithm, vector \((\mu ;\nu )^{k+1}\) is updated as follows. Write \(a^k:=r((\mu ;\nu )^k,\beta )-\beta F((\mu ;\nu )^{k})\) for the normal vector of the half space \(H_k\). If
$$\begin{aligned} (a^k)^\top [(\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k})-z^k]\le 0, \end{aligned}$$
then \((\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k})\in H_k\) and we set
$$\begin{aligned} (\mu ;\nu )^{k+1}=(\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k}); \end{aligned}$$
otherwise, \(a^k\ne 0\) and, by Proposition 2.3, we set
$$\begin{aligned} (\mu ;\nu )^{k+1}=(\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k})-\frac{(a^k)^\top [(\mu ;\nu )^{k}-\beta d((\mu ;\nu )^{k})-z^k]}{\Vert a^k\Vert ^2}\,a^k. \end{aligned}$$
For the half space \(H_k\), we claim that \(R^{2n}_+\subseteq H_k\). In fact, for any \((\mu ;\nu )\in R_+^{2n}\), applying Proposition 2.1 (i) with \(K=R^{2n}_+\), \(x=(\mu ;\nu )^{k}-\beta F((\mu ;\nu )^{k})\) and \(z=(\mu ;\nu )\), and noting that \(P_K(x)=z^k\), one has
$$\begin{aligned} \langle z^k-(\mu ;\nu )^{k}+\beta F((\mu ;\nu )^{k}),(\mu ;\nu )-z^k\rangle \ge 0, \end{aligned}$$
that is, \([r((\mu ;\nu )^k,\beta )-\beta F((\mu ;\nu )^{k})]^\top [(\mu ;\nu )-z^k]\le 0\). Thus, \((\mu ;\nu )\in H_k\).
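Putting the pieces together, Algorithm 3.1 admits a compact implementation. The following Python sketch follows Steps 0–3 under our reading of the half-space projection in Step 2; all names and default parameters are illustrative, not the authors' code:

```python
import numpy as np

def solve_l1(A, y, rho, t=0.4, max_iter=5000, tol=1e-10):
    """Sketch of Algorithm 3.1 for min 0.5*||Ax - y||^2 + rho*||x||_1."""
    n = A.shape[1]
    B = np.hstack([A, -A])                 # the matrix (A, -A)
    M = B.T @ B
    p = B.T @ y - rho * np.ones(2 * n)
    beta = 0.8 / np.linalg.norm(M, 2)      # Step 0: 0 < beta < 1/||M||
    w = np.maximum(B.T @ y, 0.0)           # initial point (mu; nu) >= 0
    for _ in range(max_iter):
        F = M @ w - p
        z = np.maximum(w - beta * F, 0.0)  # Step 1: z^k
        r = w - z                          # projection residue r(w^k, beta)
        if np.linalg.norm(r) <= tol:       # stopping test
            break
        a = r - beta * F                   # normal vector of the half space H_k
        d = (t / beta) * a + (M @ z - p)   # direction d(w^k) from (2.9)
        v = w - beta * d
        viol = a @ (v - z)                 # Step 2: project v onto H_k
        if viol > 0.0:
            v = v - (viol / (a @ a)) * a
        w = v                              # Step 3
    mu, nu = w[:n], w[n:]
    return mu - nu                         # recovered signal x = mu - nu
```

On a tiny instance with \(A=I\), the minimizer of (1.1) is the soft-thresholding of y; e.g., for \(y=(1,0.1)^\top \) and \(\rho =0.5\) the iteration recovers \(x=(0.5,0)^\top \).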
3 Convergence
To establish the convergence and convergence rate of Algorithm 3.1, we need the following conclusions.
Lemma 3.1
For \(z^{k}\) and \(d((\mu ;\nu )^{k})\) defined in Algorithm 3.1, it holds that
where \((\mu ;\nu )^*\in \Omega ^*\).
Proof
Since matrix M is positive semi-definite, one has
Combining this with (2.3) yields
Then, by Proposition 2.1 (i), a direct computation gives
\(\square \)
Lemma 3.2
Suppose that Algorithm 3.1 generates an infinite sequence \( \{(\mu ;\nu )^{k}\}\). Then, for any \((\mu ;\nu )^*\in \Omega ^*,\) it holds that
Proof
By a direct computation, one has
where the first equality follows from (2.7), the first inequality follows from Proposition 2.1, the second inequality follows from (3.1), the third inequality follows from the fact that \((\mu ;\nu )^{k+1}\in H_{k}\), and the fourth inequality uses the Cauchy–Schwarz inequality. \(\square \)
Now, we are in a position to state the main results of this section.
Theorem 3.1
Suppose that Algorithm 3.1 generates an infinite sequence \( \{(\mu ;\nu )^{k}\}\), and the solution set of (1.1) is nonempty. Then, sequence \( \{(\mu ;\nu )^{k}\}\) converges to a solution of (2.3).
Proof
From (3.3), one has
Therefore, the sequence \(\{\Vert (\mu ;\nu )^{k}-(\mu ;\nu )^{*}\Vert \}\) is non-increasing and bounded; hence, it converges. Consequently,
Thus, the sequence \(\{(\mu ;\nu )^{k}\}\) is bounded and therefore has a convergent subsequence, denoted by \(\{(\mu ;\nu )^{k_{j}}\}\), with limit \((\hat{\mu };\hat{\nu })\). Then
Hence, \((\hat{\mu };\hat{\nu })\) is a solution of (2.3).
Set \((\mu ;\nu )^* = (\hat{\mu };\hat{\nu })\) in (3.3). Then, the sequence \(\{\Vert (\mu ;\nu )^k-(\hat{\mu };\hat{\nu })\Vert \}\) converges. Since \((\hat{\mu };\hat{\nu })\) is a limit point of subsequence \(\{(\mu ;\nu )^{k_{j}}\}\), it follows that \(\Vert (\mu ;\nu )^k-(\hat{\mu };\hat{\nu })\Vert \) converges to zero, i.e., that \(\{(\mu ;\nu )^k\}\) converges to \((\hat{\mu };\hat{\nu })\in \Omega ^*\). The desired result follows.\(\square \)
Theorem 3.2
The sequence \(\{x^k\}\), where \(x^k:=\mu ^k-\nu ^k\), either terminates at a solution of (1.1) in a finite number of steps or converges globally to a solution of (1.1).
Proof
Assume that the sequence \(\{(\mu ;\nu )^k\}\) terminates in a finite number of steps at a solution of (2.3). Then, obviously, the sequence \(\{x^k\}\) terminates in a finite number of steps at a solution of (1.1).
In the following analysis, we assume that the sequence \(\{(\mu ;\nu )^k\}\) is an infinite sequence. From Theorem 3.1, we know that
Let \(\hat{x}=\hat{\mu }-\hat{\nu }\). Then a direct computation gives
where the second and third inequalities use the fact that
Thus, the sequence \(\{x^k\}\) converges globally to a solution of (1.1). \(\square \)
For (2.3), by a similar analysis to the proof of Theorem 4.1 in [45], we can obtain the following result.
Lemma 3.3
For any \((\mu ;\nu )\in R^{2n}\), there exist a constant \(\hat{\eta }>0\) and \((\mu ;\nu )^*\in \Omega ^*\) such that
where \(m(\mu ;\nu )=\Vert [-(\mu ;\nu )]_{+}\Vert +\Vert [-\beta F(\mu ;\nu )]_{+}\Vert +\beta [(\mu ;\nu )^\top F(\mu ;\nu )]_{+}.\)
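The quantity \(m(\mu ;\nu )\) aggregates the violations of nonnegativity, of \(F(\mu ;\nu )\ge 0\), and of complementarity; a small sketch (function name ours):

```python
import numpy as np

def m_measure(w, M, p, beta):
    """m(w) = ||[-w]_+|| + ||[-beta*F(w)]_+|| + beta*[w^T F(w)]_+, F(w) = M w - p.
    It is zero precisely when w >= 0, F(w) >= 0 and w^T F(w) <= 0,
    i.e., at solutions of the variational inequality (2.3)."""
    F = M @ w - p
    return (np.linalg.norm(np.maximum(-w, 0.0))
            + np.linalg.norm(np.maximum(-beta * F, 0.0))
            + beta * max(w @ F, 0.0))
```

On the toy instance \(M=I\), \(p=(1,-1)^\top \) used earlier, \(m\) vanishes at the solution \((1,0)^\top \) and is positive elsewhere.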
Theorem 3.3
Suppose that \(0<\frac{1-\beta ^2\Vert M\Vert ^2}{\tau ^2}<1\) holds. Then, the sequence \(\{(\mu ;\nu )^{k}\}\) converges to a solution of (2.3) linearly, where \((\mu ;\nu )^{k}\) is generated by Algorithm 3.1.
Proof
From Theorem 3.1, one has
Hence, we can take \((\mu ;\nu )^* = (\hat{\mu };\hat{\nu })\) in (3.3). Thus,
Inequality (3.7) yields
Then by (3.10) and (3.11), one has
i.e.,
Since \(0<\frac{1-\beta ^2\Vert M\Vert ^2}{\tau ^2}<1\), one has \(0<1-\frac{1-\beta ^2\Vert M\Vert ^2}{\tau ^2}<1\). The desired result follows.\(\square \)
4 Numerical experiments
In this section, we provide some numerical tests to show the efficiency of the proposed method. In our numerical experiments, we set \(\rho =0.01\), \(n=2^{11}\), \(m=\mathrm {floor}(n/a)\), \(k=\mathrm {floor}(m/b)\), and the measurement matrix A is generated by the Matlab script:
\(\texttt {[Q, R]=qr(A',0); A=Q'}.\)
The original signal \(\bar{x}\) is generated by \(\texttt {p=randperm(n); x(p(1:k))=randn(k,1)}\), and the observed signal y is generated by \(y=A\bar{x}+\bar{n}\), where \(\bar{n}\) is drawn from the standard Gaussian distribution N(0, 1) and then normalized to norm \(\sigma =0.01\) or 0.001. In our numerical experiments, the stopping criterion is
where \(f_k\) denotes the objective value of (1.1) at the iterate \(x_k\). For Algorithm 3.1, we set \(t=0.4, \beta =0.8/\Vert M\Vert \). In addition, the initial points are \(\mu _0=\max \{0,A^\top y\}\), \(\nu _0=\max \{0,-A^\top y\}\). For the conjugate gradient descent (CGD) method proposed recently by Xiao and Zhu in [61], we set \(\xi = 10, \sigma = 10^{-4}\) and \(\rho = 0.5\) in the line search (2.9) of CGD, and the initial points \(\mu _0, \nu _0\) are set as in Algorithm 3.1. In each test, we calculate the relative error
$$\begin{aligned} \mathrm {RelErr}=\frac{\Vert \tilde{x}-\bar{x}\Vert }{\Vert \bar{x}\Vert }, \end{aligned}$$
where \(\tilde{x}\) denotes the recovered signal.
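The Matlab setup above translates directly to Python. The sketch below assumes the elided first line of the script draws A from the standard Gaussian before the QR orthonormalization of its rows, and uses \((a,b)=(4,8)\) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 2**11, 4, 8                      # (a, b) are illustrative values
m, k = n // a, (n // a) // b               # m = floor(n/a), k = floor(m/b)
# Gaussian matrix with orthonormalized rows via the thin QR of A^T
A = rng.standard_normal((m, n))            # assumed elided line: A = randn(m, n)
Q, _ = np.linalg.qr(A.T)                   # thin QR: Q is n x m with orthonormal columns
A = Q.T                                    # now A A^T = I_m
# k-sparse original signal x_bar
x_bar = np.zeros(n)
idx = rng.permutation(n)[:k]
x_bar[idx] = rng.standard_normal(k)
# noisy observation, noise normalized to norm sigma
sigma = 0.01
noise = rng.standard_normal(m)
noise *= sigma / np.linalg.norm(noise)
y = A @ x_bar + noise
```

The resulting A has orthonormal rows, \(\bar{x}\) has exactly k nonzero entries, and the noise has Euclidean norm \(\sigma \).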
The numerical results are reported in Tables 1 and 2, from which we can see that Algorithm 3.1 performs much better than the CGD method for all \(\sigma \) and (a, b).
5 Conclusion
In this paper, we proposed a new projection-type algorithm without backtracking line search for the sparse signal reconstruction problem in compressive sensing (CS). Its global convergence and linear convergence rate were established, and numerical results were provided to illustrate the efficiency of the proposed method.
References
Bertsekas, D.P.: Nonlinear Programming, 2nd edn. Athena, Boston, MA (1999)
Cai, J., Zheng, Z.: Inverse spectral problems for discontinuous Sturm-Liouville problems of Atkinson type. Appl. Math. Comput. 327, 22–34 (2018)
Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell _1\)-minimization. J. Fourier Anal. Appl. 14(5–6), 877–905 (2008)
Che, H.T., Wang, Y.J., Li, M.X.: A smoothing inexact Newton method for P-0 nonlinear complementarity problem. Front. Math. China 7, 1043–1058 (2012)
Chen, H.B., Chen, Y.N., Li, G.Y., Qi, L.Q.: A semidefinite program approach for computing the maximum eigenvalue of a class of structured tensors and its applications in hypergraphs and copositivity test. Numer. Linear Algebra Appl. 25, e2125 (2018)
Chen, H.B., Qi, L.Q., Song, Y.S.: Column Sufficient Tensors and Tensor Complementarity Problems. Front. Math. China 13(2), 255–276 (2018)
Chen, H.B., Wang, Y.J.: A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse. Appl. Math. Comput. 218(8), 4012–4016 (2011)
Chen, H.B., Wang, Y.J.: On computing minimal H-eigenvalue of sign-structured tensors. Front. Math. China 12, 1289–1302 (2017)
Chen, H.B., Wang, Y.J., Wang, G.: Strong convergence of extragradient method for generalized variational inequalities in Hilbert space. J. Inequal. Appl. 2014, 1–11 (2014)
Chen, H.B., Wang, Y.J., Xu, Y.: An alternative extragradient projection method for quasi-equilibrium problems. J. Inequal. Appl. 2018, 26 (2018)
Chen, Q., Wang, D., Kang, X.: Twisted partial coactions of Hopf algebras. Front. Math. China 12, 63–86 (2017)
Dong, A.J., Hou, C.J.: On some automorphisms of a class of Kadison-Singer algebras. Linear Algebra Appl. 436(7), 2037–2053 (2012)
Feng, D.X., Sun, M., Wang, X.Y.: A family of conjugate gradient methods for large-scale nonlinear equation. J. Inequal. Appl. 2017, 236 (2017)
Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Topics Signal Process. 1(4), 586–597 (2007)
Gao, L.J., Wang, D.D., Wang, G.: Further results on exponential stability for impulsive switched nonlinear time delay systems with delayed impulse effects. Appl. Math. Comput. 268, 186–200 (2015)
Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for \(\ell _1-\)minimization: Methodology and convergence. SIAM J. Optim. 19(3), 1107–1130 (2008)
Hou, C.J., Yuan, W.: Minimal generating reflexive lattices of projections in finite von Neumann algebras. Math. Ann. 353(2), 499–517 (2012)
Hou, C.J., Zhang, H.Y.: A note on the diagonal maximality of operator algebras. Linear Algebra Appl. 436(7), 2406–2418 (2012)
Huan, L., Qu, B., Jiang, J.G.: Merit functions for general mixed quasi-variational inequalities. J. Appl. Math. Comput 33(1), 411–421 (2010)
Kong, D.Z., Liu, L.S., Wu, Y.H.: Isotonicity of the metric projection with applications to variational inequalities and fixed point theory in Banach spaces. J. Fixed Point Theory Appl. 19, 1889–1903 (2017)
Li, P.: Generalized convolution-type singular integral equations. Appl. Math. Comput 311, 314–323 (2017)
Li, M., Wang, J.: Exploring delayed Mittag-Leffler type matrix functions to study finite time stability of fractional delay differential equations. Appl. Math. Comput. 324, 254–265 (2018)
Lian, S.J.: Smoothing approximation to l1 exact penalty function for inequality constrained optimization. Appl. Math. Comput. 219(6), 3113–3121 (2012)
Lian, S.J., Duan, Y.Q.: Smoothing of the lower-order exact penalty function for inequality constrained optimization. J. Inequal. Appl. 2016, 185 (2016)
Lian, S.J., Zhang, L.S.: A simple smooth exact penalty function for smooth optimization problem. J. Syst. Sci. Complex. 25(5), 521–528 (2012)
Liu, W., Cui, J., Xin, J.: A block-centered finite difference method for an unsteady asymptotic coupled model in fractured media aquifer system. J. Comput. Appl. Math. 337, 319–340 (2018)
Liu, B.M., Li, J.L., Liu, L.S.: Nontrivial solutions for a boundary value problem with integral boundary conditions. Bound. Value Probl. 2014, 15 (2014)
Liu, J., Zhao, Z.: Multiple solutions for impulsive problems with non-autonomous perturbations. Appl. Math. Lett. 64, 143–149 (2017)
Liu, H.: A class of retarded Volterra–Fredholm type integral inequalities on time scales and their applications. J. Inequal. Appl. 2017, 293 (2017)
Liu, H.D., Meng, F.W.: Some new nonlinear integral inequalities with weakly singular kernel and their applications to FDEs. J. Inequal. Appl. 2015, 209 (2015)
Liu, H.D., Meng, F.W.: Some new generalized Volterra–Fredholm type discrete fractional sum inequalities and their applications. J. Inequal. Appl 2016, 213 (2016)
Liu, B.H., Qu, B., Zheng, N.: A successive projection algorithm for solving the multiple-sets split feasibility problem. Numer. Funct. Anal. Optim. 35, 1459–1466 (2014)
Ma, X., Wang, P., Wei, W.: Constant mean curvature surfaces and mean curvature flow with non-zero Neumann boundary conditions on strictly convex domains. J. Funct. Anal. 274, 252–277 (2018)
Meng, Q.: Weak Haagerup property for C*-algebras. Ann. Funct. Anal. 8, 502–511 (2017)
Noor, M.A.: General variational inequalities. Appl. Math. Lett. 1(2), 119–121 (1988)
Pan, X.T., Che, H.T., Wang, Y.J.: A high-accuracy compact conservative scheme for generalized regularized long-wave equation. Bound. Value Probl. 2015, 141 (2015)
Pan, X., Wang, Y.J., Zhang, L.M.: Numerical analysis of a pseudo-compact C-N conservative scheme for the Rosenau-KDV equation coupling with the Rosenau-RLW equation. Bound. Value Probl. 2015, 65 (2015)
Qu, B., Chang, H.X.: Remark on the successive projection algorithm for the multiple-sets split feasibility problem. Numer. Funct. Anal. Optim. 38(12), 1614–1623 (2017)
Qu, B., Liu, B.H., Zheng, N.: On the computation of the step-size for the CQ-like algorithms for the split feasibility problem. Appl. Math. Comput. 262, 218–223 (2015)
Sun, M., Wang, Y.J., Liu, J.: Generalized Peaceman–Rachford splitting method for multiple-block separable convex programming with applications to robust PCA. Calcolo 54, 77–94 (2017)
Shi, Z.J., Wang, S.Q.: Modified nonmonotone Armijo line search for descent method. Numer. Algorithms 57(10), 1–25 (2011)
Sun, F.L., Liu, L.S., Wu, Y.H.: Infinitely many sign-changing solutions for a class of biharmonic equation with p-Laplacian and Neumann boundary condition. Appl. Math. Lett. 73, 128–135 (2017)
Sun, Y., Liu, L.S., Wu, Y.H.: The existence and uniqueness of positive monotone solutions for a class of nonlinear Schrodinger equations on infinite domains. J. Comput. Appl. Math. 321, 478–486 (2017)
Sun, F.L., Liu, L.S., Wu, Y.H.: Finite time blow-up for a thin-film equation with initial data at arbitrary energy level. J. Math. Anal. Appl. 458, 9–20 (2018)
Sun, H.C., Wang, Y.J., Qi, L.Q.: Global error bound for the generalized linear complementarity problem over a polyhedral cone. J. Optim. Theory Appl. 142, 417–429 (2009)
Tomioka, R., Sugiyama, M.: Dual-augmented Lagrangian method for efficient sparse reconstruction. IEEE Signal Process Lett. 16(12), 1067–1070 (2009)
Wang, B.: Trigonometric collocation methods based on Lagrange basis polynomials for multi-frequency oscillatory second order differential equations. J. Comput. Appl. Math. 313, 185–201 (2017)
Wang, B., Meng, F., Fang, Y.: Efficient implementation of RKN-type Fourier collocation methods for second-order differential equations. Appl. Numer. Math. 119, 164–178 (2017)
Wang, B., Wu, X., Meng, F.: Trigonometric collocation methods based on Lagrange basis polynomials for multi-frequency oscillatory second order differential equations. J. Comput. Appl. Math. 313, 185–201 (2017)
Wang, B., Yang, H., Meng, F.: Sixth order symplectic and symmetric explicit ERKN schemes for solving multi frequency oscillatory nonlinear Hamiltonian equations. Calcolo 54, 117–140 (2017)
Wang, G.: Existence-stability theorems for strong vector set-valued equilibrium problems in reflexive Banach spaces. J. Inequal. Appl. 239, 1–14 (2015)
Wang, G., Che, H.T.: Generalized strict feasibility and solvability for generalized vector equilibrium problem with set-valued map in reflexive Banach spaces. J. Inequal. Appl. 2012, 1–11 (2012)
Wang, G., Yang, X.Q., Cheng, T.C.E.: Generalized Levitin-Polyak well-posedness for generalized semi-infinite programs. Numer. Funct. Anal. Optim. 34(6), 695–711 (2013)
Wang, G., Zhou, G.L., Caccetta, L.: Z-eigenvalue inclusion theorems for tensors. Discret Contin Dyn Syst Ser B 22(1), 187–198 (2017)
Wang, X.Y.: Alternating proximal penalization algorithm for the modified multiple-set split feasibility problems. J. Inequal. Appl. 2018, 48 (2018)
Wang, X.Y., Chen, H.B., Wang, Y.J.: Solution structures of tensor complementarity problem. Front. Math. China. https://doi.org/10.1007/s11464-018-0675-2
Wang, Y.J., Caccetta, L., Zhou, G.L.: Convergence analysis of a block improvement method for polynomial optimization over unit spheres. Numer. Linear Algebra Appl. 22, 1059–1076 (2015)
Wang, Y.J., Zhang, K.L., Sun, H.C.: Criteria for strong H-tensors. Front. Math. China 11(3), 577–592 (2016)
Wang, P., Zhang, D.: Convexity of level sets of minimal graph on space form with nonnegative curvature. J. Differ. Equ. 262, 5534–5564 (2017)
Wang, Y., Liu, L.S.: Uniqueness and existence of positive solutions for the fractional integro-differential equation. Bound. Value Probl. 2017, 12 (2017)
Xiao, Y.H., Zhu, H.: A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 405(1), 310–319 (2013)
Xu, F.Y., Zhang, X.G., Wu, Y.H., Liu, L.S.: Global existence and the optimal decay rates for the three dimensional compressible nematic liquid crystal flow. Acta Appl. Math. 150, 67–80 (2017)
Xu, R., Meng, F.W.: Some new weakly singular integral inequalities and their applications to fractional differential equations. J. Inequal. Appl. 2016, 78 (2016)
Xu, Y.M., Wang, L.B.: Breakdown of classical solutions to Cauchy problem for inhomogeneous quasilinear hyperbolic systems. J. Pure Appl. Math. 46(6), 827–851 (2015)
Xu, Y.M., Zhang, H.J.: Positive solutions of an infinite boundary value problem for nth-order nonlinear impulsive singular integro-differential equations in Banach spaces. Appl. Math. Comput. 218, 5806–5818 (2012)
Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for \(\ell _1\)-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)
Zarantonello, E.H.: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, E.H. (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
Zhou, G., Wang, G., Qi, L., Alqahtani, M.: A fast algorithm for the spectral radii of weakly reducible nonnegative tensors. Numer. Linear Algebra Appl. 25, e2134 (2018)
Zhang, K.L., Wang, Y.J.: An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms. J. Comput. Appl. Math. 305(2), 1–10 (2016)
Zhang, H.Y., Wang, Y.J.: A new CQ method for solving split feasibility problem. Front. Math. China 5(1), 37–46 (2010)
Zheng, Z., Kong, Q.: Friedrichs extensions for singular Hamiltonian operators with intermediate deficiency indices. J. Math. Anal. Appl. 461, 1672–1685 (2018)
Zhang, K.: On sign-changing solution for some fractional differential equations. Bound. Value Probl. 2017, 59 (2017)
Acknowledgements
The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.
Additional information
This project is supported by the Natural Science Foundation of China (Grants no.11801309, 11671228, 11601261), and Natural Science Foundation of Shandong Province (Grant no. ZR2016AQ12).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Feng, D., Wang, X. A linearly convergent algorithm for sparse signal reconstruction. J. Fixed Point Theory Appl. 20, 154 (2018). https://doi.org/10.1007/s11784-018-0635-1
Keywords
- Compressive sensing
- projection-type algorithm
- global convergence
- R-linear convergence
- sparse signal reconstruction