
1 Introduction

Many engineering applications, such as object detection for military purposes and magnetic resonance imaging (MRI) in medicine, motivate the study of image restoration. The main task of image restoration is to recover an image from given noisy measurement data. However, in most engineering configurations, the specified measurement data are limited frequency data rather than full data in the spatial domain. Therefore image restoration is essentially ill-posed [1]. Mathematically, image restoration requires stable approximations to the target image from insufficient noisy measurement data; dealing with this ill-posedness is one of the most important research areas of applied mathematics.

In recent decades, owing to the great importance of image restoration problems, various image restoration models and mathematical techniques have been developed, such as level set methods, wavelet-based frameworks, nonlinear PDE models and optimization schemes [2,3,4]. In all these studies, the basic idea is to reconstruct an image by some denoising process, while the key information about the image is kept by means of regularization techniques.

The mathematical framework for dealing with ill-posed problems is the regularizing scheme, in which appropriate penalty terms are incorporated into the cost functional. The key issue for this scheme is that suitable weights, called regularizing parameters, between the data matching term and the penalty terms must be specified artificially to keep the balance between data matching and smoothness of the sought solution. When the exact solution to be sought is smooth, the penalty terms can be measured by standard norms such as the \(L^2\) or \(H^2\) norm, for which the choice strategy for the regularizing parameters has been studied thoroughly [1, 4]. However, for a non-smooth exact solution, such as in image restoration with sharp jumps of the grey level function of the image, a non-differentiable penalty term such as the total variation (TV) or an \(L^0\)-norm sparsity term should be applied instead of the standard differential norms [2, 3].

Motivated by the above engineering backgrounds, we consider an image recovery problem with incomplete noisy frequency data by minimizing a cost functional with penalty terms in Sect. 2. Based on the expressions for the derivatives of the cost functional, an iterative scheme with outer and inner recursions is proposed in Sect. 3 to solve the minimization problem. Finally, some numerical experiments are presented in Sect. 4 to show the validity of the proposed scheme.

2 Optimization Modeling with Error Analysis

As is standard in signal processing, we define the discrete Fourier transform matrix \(F\in \mathbb {C}^{N\times N}\) with the components

$$\begin{aligned} F_{i,j}=e^{-\mathrm{i}\frac{2\pi }{N}ij},\quad \quad i,j=1,\cdots ,N, \end{aligned}$$
(2.1)

where \(\mathrm{i}\) denotes the imaginary unit. It is well known that \(\frac{1}{\sqrt{N}}F\) is a unitary matrix [10], i.e., \(F^*F=N\mathcal {I}\), where the superscript \(*\) denotes the conjugate transpose of a matrix and \(\mathcal {I}\) is the identity matrix.

In most computer vision problems, a two-dimensional image \(f:=(f_{m,n})\ (m,n=1,\cdots ,N)\) is represented as a vector \(\mathbf {f}\). We introduce the following notation for this representation:

  • operator \(\mathbf {vect}:\mathbb {R}^{N\times N}\rightarrow \mathbb {R}^{N^2\times 1}\): \(\mathbf {vect}[f]: =(\mathbf {f}_1,\mathbf {f}_2,\cdots ,\mathbf {f}_{N^2})^T=\mathbf {f}\), where the \(N^2\) elements are generated by stacking the N column vectors of f sequentially.

  • operator \(\mathbf {array}\): the inverse of \(\mathbf {vect}\), i.e., \(\mathbf {array}[\mathbf {f}]=f\).

  • two-dimensional discrete Fourier transform (DFT) matrix \(\mathbf {F}\):

    $$\begin{aligned} \mathbf {\hat{f}}:=\mathbf {vect}[F^TfF]=(F\otimes F)\mathbf {f}:=\mathbf {Ff}, \end{aligned}$$

    where \(\otimes \) is the tensor (Kronecker) product of two matrices.
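As a quick numerical check of this identity (a minimal NumPy sketch, not part of the original formulation; the size \(N=8\) and the random test image are arbitrary):

```python
import numpy as np

N = 8
idx = np.arange(1, N + 1)
# DFT matrix (2.1) with the paper's 1-based indices; note F is symmetric.
F = np.exp(-1j * 2 * np.pi * np.outer(idx, idx) / N)

def vect(f):
    """Stack the N column vectors of f sequentially (column-major order)."""
    return f.flatten(order="F")

f = np.random.rand(N, N)            # a test image
lhs = vect(F.T @ f @ F)             # vect[F^T f F]
rhs = np.kron(F, F) @ vect(f)       # (F ⊗ F) vect[f]
print(np.allclose(lhs, rhs))        # True, since F = F^T
```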

By partial frequency data of \(\hat{f}\), we mean that only part of the elements of \(\hat{f}\) are sampled, with the other elements set to 0. Denote by P the \(N\times N\) matrix generated from the identity matrix \(\mathcal {I}\) by setting \(N-M\) of its rows to null vectors, i.e., \(P=\text {diag}(p_{11},p_{22},\cdots ,p_{NN})\) with \(p_{ii}\) being 1 or 0. Then \(P\hat{f}\) means that we take only \(M(\le N)\) rows of \(\hat{f}\) as our partial data. For the vector form \(\mathbf {f}\), the sampling matrix should be modified to an \(N^2\times N^2\) matrix. Then we have the following notation with the tensor product:

$$\begin{aligned} \mathbf {vect}[P\hat{f}]=(\mathcal {I}\otimes P)\mathbf {vect}[\hat{f}]=(\mathcal {I}\otimes P)\mathbf {\hat{f}}:=\mathbf {P}\mathbf {\hat{f}}. \end{aligned}$$
(2.2)

Obviously, the matrix \(\mathbf {P}\) is the sampling matrix in the algorithm domain, which is chosen before reconstruction.

Generally, the frequency data of an image f are obtained by some scanning process and thus contain unavoidable errors; i.e., our inversion data for image recovery are in fact \(P\hat{g}^\delta \), where the noisy data \(\hat{g}^\delta \) of \(\hat{f}\) satisfy

$$\begin{aligned} \Vert P\hat{f}-P\hat{g}^\delta \Vert _F\le \Vert \hat{f}-\hat{g}^\delta \Vert _F\le \delta , \end{aligned}$$
(2.3)

where \(\Vert \cdot \Vert _F\) is the Frobenius norm of an \(N\times N\) matrix, which corresponds to the 2-norm of the \(N^2\)-dimensional vector obtained by stacking the elements of the matrix. Hence the data matching term can be written as

$$\begin{aligned} \Vert P\hat{f}-P\hat{g}^{\delta }\Vert _{F}^2 =\Vert \mathbf {P}(\mathbf {Ff})-\mathbf {P}\mathbf {\hat{g}}^{\delta }\Vert _{2}^2. \end{aligned}$$
(2.4)

There are various sampling matrices, such as those of the radial-line sampling method [12] and the band sampling method [6], the latter being quite efficient in numerical experiments. In this paper, we apply the random band sampling process, which samples some rows randomly. Denote by cenR the central ratio, i.e., only \(cenR\times N\) rows in the central part of the frequency image (the centre lies between rows \(N/2-1\) and N/2) are taken in the Cartesian coordinate system, or natural domain coordinate, shown in Fig. 1(b), while the other sampled rows are chosen at random; a sketch of this sampling process follows Fig. 1. In the algorithm domain shown in Fig. 1(a), the sampled rows are distributed in the four corners, i.e., the same distribution as the usual mask.

Fig. 1. (a) Algorithm domain coordinate; (b) Natural domain coordinate.
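For illustration, one possible realization of the random band sampling described above (a hedged sketch: the paper does not spell out its exact selection routine, and the function name random_band_rows is ours):

```python
import numpy as np

def random_band_rows(N, M0, cenR, rng):
    """Choose M0 sampled rows: a central band of about cenR*N rows around the
    centre (between rows N/2 - 1 and N/2, 0-based) plus randomly chosen rows.
    Illustrative only; the paper does not specify its selection routine."""
    n_cen = int(round(cenR * N))
    start = N // 2 - n_cen // 2
    central = np.arange(start, start + n_cen)
    rest = np.setdiff1d(np.arange(N), central)
    extra = rng.choice(rest, size=M0 - n_cen, replace=False)
    return np.sort(np.concatenate([central, extra]))

rng = np.random.default_rng(0)
rows = random_band_rows(128, 60, 0.3, rng)   # M0 = 60, cenR = 0.3, as in Sect. 4
p = np.zeros(128)
p[rows] = 1.0
P = np.diag(p)            # sampling matrix P = diag(p_11, ..., p_NN), p_ii in {0, 1}
```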

Recall that the most important issue in image restoration is the edge-preservation property of the image, which means that we are essentially interested in the efficient reconstruction of a piecewise constant image, with the main interest in detecting image edges. Since the total variation (TV) of a two-dimensional function describes the jumps of the function efficiently, we are led to the following constrained optimization problem

$$\begin{aligned} \min \limits _{f\in \mathbb {R}^{N\times N}}\left\{ |f|_{TV}:\Vert PF^TfF-P\hat{g}^{\delta }\Vert _F^2 \le \delta ^2\right\} , \end{aligned}$$
(2.5)

where \(P\hat{g}^{\delta }\) is the incomplete noisy frequency data, and the TV penalty term \(|f|_{TV}\) for f is defined in the standard way [6, 12]. Since \(|f|_{TV}\) is not differentiable at \(f=\varTheta \) (the zero matrix), we approximate \(|f|_{TV}\) by

$$\begin{aligned} |f|_{TV,\beta }=\sum \limits _{m,n=1}^{N} \sqrt{\left( \nabla _{m,n}^{x_1}f\right) ^2+\left( \nabla _{m,n}^{x_2}f\right) ^2+\beta } \end{aligned}$$
(2.6)

for a small constant \(\beta >0\), where \(\nabla _{m,n}f:=\left( \nabla _{m,n}^{x_1}f,\nabla _{m,n}^{x_2}f\right) \) with two components

$$\begin{aligned} \nabla _{m,n}^{x_1}f= {\left\{ \begin{array}{ll} f_{m+1,n}-f_{m,n}, &{}\mathrm{if}\ m<N,\\ f_{1,n}-f_{m,n}, &{}\mathrm{if}\ m=N, \end{array}\right. } \ \nabla _{m,n}^{x_2}f= {\left\{ \begin{array}{ll} f_{m,n+1}-f_{m,n}, &{}\mathrm{if}\ n<N,\\ f_{m,1}-f_{m,n}, &{}\mathrm{if}\ n=N \end{array}\right. } \end{aligned}$$

for \(m,n=1,\cdots ,N\) due to the periodic boundary condition on f.
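For concreteness, the periodic differences above and the smoothed TV functional (2.6) can be transcribed as follows (a minimal NumPy sketch, not the paper's MATLAB code; the name tv_beta is ours):

```python
import numpy as np

def tv_beta(f, beta):
    """Smoothed total variation (2.6) with the periodic forward differences
    defined above; np.roll realizes the wrap-around at m = N and n = N."""
    d1 = np.roll(f, -1, axis=0) - f   # (grad^{x1} f)_{m,n} = f_{m+1,n} - f_{m,n}
    d2 = np.roll(f, -1, axis=1) - f   # (grad^{x2} f)_{m,n} = f_{m,n+1} - f_{m,n}
    return np.sum(np.sqrt(d1**2 + d2**2 + beta))
```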

However, the constrained optimization problem (2.5) with \(P\ne \mathcal {I}\) places no restriction on the size of f: note that \(PF^TXF=\varTheta \) may have nonzero solutions X of arbitrarily large norm for singular P. To exclude this uncertainty, our image recovery problem is finally reformulated as the following unconstrained problem

$$\begin{aligned} \left\{ \begin{array}{ll} f^*:=\arg \min \limits _{f}J_{\beta }(f),\\ J_\beta (f):=\frac{1}{2}\Vert PF^TfF-P\hat{g}^{\delta }\Vert _F^2+{\alpha _1} \Vert f\Vert _{F}^2+{\alpha _2}|f|_{TV,\beta }, \end{array} \right. \end{aligned}$$
(2.7)

where \(\alpha _1,\alpha _2>0\) are regularizing parameters.
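A direct transcription of \(J_\beta \) in (2.7) then reads as follows (again a sketch; the function J_beta and its argument names are ours, with P the diagonal sampling matrix and F the DFT matrix (2.1)):

```python
import numpy as np

def J_beta(f, P, F, g_hat, a1, a2, beta):
    """Evaluate the cost functional J_beta of (2.7).

    P is the 0/1 diagonal row-sampling matrix, F the DFT matrix (2.1), and
    g_hat the full noisy frequency data; only P @ g_hat is actually used."""
    resid = P @ (F.T @ f @ F) - P @ g_hat
    d1 = np.roll(f, -1, axis=0) - f
    d2 = np.roll(f, -1, axis=1) - f
    tv = np.sum(np.sqrt(d1**2 + d2**2 + beta))
    return (0.5 * np.linalg.norm(resid, "fro")**2
            + a1 * np.linalg.norm(f, "fro")**2 + a2 * tv)
```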

The theorems below illustrate the existence of the minimizer and establish the choice strategies for the regularizing parameters \(\alpha _1,\alpha _2\).

Theorem 1

For \(\alpha _1>0,\alpha _2,\beta \ge 0\), there exists a local minimizer to the optimization problem (2.7).

Proof

Since \(J_\beta (f)\ge 0\) for \(f\in \mathbb {R}^{N\times N}\), there exists a constant \(J^*\ge 0\) such that \(J^*=\inf \limits _{f}J_\beta (f)\). Hence there exists a sequence of matrices \(\{f^k\in \mathbb {R}^{N\times N}:k=1,2,\cdots \}\) such that \(\lim \limits _{k\rightarrow \infty }J_\beta (f^k)=J^*\), which means \(\alpha _1\Vert f^k\Vert _F^2\le J_\beta (f^k)\le C_0\) for \(k=1,2,\cdots \), i.e., \(\Vert f^k\Vert _F^2\le C_0/\alpha _1\). Therefore there exists a subsequence of \(\{f^k:k=1,2,\cdots \}\), still denoted by \(\{f^k:k=1,2,\cdots \}\), such that \(\lim \limits _{k\rightarrow \infty }f^k=f^*\).

Since \(|f|_{TV,\beta }\) is continuous with respect to f by (2.6), the continuity of \(J_\beta (f)\) with respect to f yields \(J_\beta (f^*)=\lim \limits _{k\rightarrow \infty }J_\beta (f^k)=J^*=\inf \limits _{f}J_\beta (f)\), i.e., \(f^*\) is a minimizer of \(J_\beta (f)\). The proof is complete.    \(\square \)

Theorem 2

Denote by \(f^\dag \in \mathbb {R}^{N\times N}\) the exact image. Then the minimizer \(f^*=f^*_{\alpha _1,\alpha _2,\beta ,\delta }\) satisfies the following estimates

$$\begin{aligned}&\Vert PF^Tf^*_{\alpha _1,\alpha _2,\beta ,\delta }F-P\hat{g}^{\delta }\Vert _F^2 \le \delta ^2+2\alpha _1\Vert f^\dag \Vert _F^2+2\alpha _2N^2\sqrt{\beta }+2\alpha _2|f^\dag |_{TV},\end{aligned}$$
(2.8)
$$\begin{aligned}&\Vert f^*_{\alpha _1,\alpha _2,\beta ,\delta }\Vert _F^2\le \frac{\delta ^2}{2\alpha _1} +\frac{\alpha _2}{\alpha _1}|f^\dag |_{TV}+\frac{\alpha _2}{\alpha _1}N^2 \sqrt{\beta }+\Vert f^\dag \Vert _F^2, \end{aligned}$$
(2.9)
$$\begin{aligned}&|f^*_{\alpha _1,\alpha _2,\beta ,\delta }|_{TV,\beta }\le \frac{\delta ^2}{2\alpha _2} +\frac{\alpha _1}{\alpha _2}\Vert f^\dag \Vert _F^2+N^2\sqrt{\beta }+|f^\dag |_{TV}. \end{aligned}$$
(2.10)

Proof

Since \(f^*_{\alpha _1,\alpha _2,\beta ,\delta }\) is the minimizer, we have

$$\begin{aligned} &\frac{1}{2}\Vert PF^Tf^*_{\alpha _1,\alpha _2,\beta ,\delta }F-P\hat{g}^{\delta }\Vert _F^2+{\alpha _1} \Vert f^*_{\alpha _1,\alpha _2,\beta ,\delta }\Vert _{F}^2+{\alpha _2}|f^*_{\alpha _1,\alpha _2,\beta ,\delta }|_{TV,\beta }\nonumber \\ &\quad \le \frac{1}{2}\Vert PF^Tf^\dag F-P\hat{g}^{\delta }\Vert _F^2+{\alpha _1} \Vert f^\dag \Vert _{F}^2+{\alpha _2}|f^\dag |_{TV,\beta }\nonumber \\ &\quad \le \frac{1}{2}\delta ^2+\alpha _1\Vert f^\dag \Vert _{F}^2+\alpha _2(|f^\dag |_{TV,\beta }-|f^\dag |_{TV})+\alpha _2|f^\dag |_{TV}\nonumber \\ &\quad = \frac{1}{2}\delta ^2+\alpha _1\Vert f^\dag \Vert _{F}^2+\alpha _2\sum \limits _{m,n=1}^{N} \frac{\beta }{\sqrt{|\nabla _{m,n}f^\dag |^2+\beta }+\sqrt{|\nabla _{m,n}f^\dag |^2}}+\alpha _2|f^\dag |_{TV}\nonumber \\ &\quad \le \frac{1}{2}\delta ^2+\alpha _1\Vert f^\dag \Vert _{F}^2+\alpha _2N^2\sqrt{\beta }+\alpha _2|f^\dag |_{TV}. \end{aligned}$$
(2.11)

Since each term on the left-hand side of (2.11) is nonnegative, keeping the terms one at a time and noting \(|f^*_{\alpha _1,\alpha _2,\beta ,\delta }|_{TV}\le |f^*_{\alpha _1,\alpha _2,\beta ,\delta }|_{TV,\beta }\) yields (2.8)-(2.10). The proof is complete.    \(\square \)

The above estimates are important for seeking the minimizer of our cost functional, which is taken as our reconstruction of the image. This result provides a resolution analysis for our reconstruction scheme in terms of data matching and regularity matching for the image, i.e., quantitative error descriptions of these two terms. We can adjust the parameters \(\alpha _1,\alpha _2\) analytically so that our reconstruction fits our concern for either image details (data matching) or image sparsity (TV difference).

3 The Iteration Algorithm to Find the Minimizer

Take the image vector \(\mathbf {f}=(\mathbf {f}_1,\mathbf {f}_2,\cdots ,\mathbf {f}_{N^2})^T\in \mathbb {R}^{N^2\times 1}\) as the equivalent variable; each component \(\mathbf {f}_i\) corresponds one-to-one to \(f_{m,n}\), i.e., \(f_{m,n}=\mathbf {f}_{(n-1)N+m}\). For the optimization problem

$$\begin{aligned} \min \limits _{f}J_\beta (f)=\min \limits _{f}\left( \frac{1}{2}\Vert PF^TfF- P\hat{g}^{\delta }\Vert _F^2+{\alpha _1}\Vert f\Vert _{F}^2+{\alpha _2}|f|_{TV,\beta }\right) \end{aligned}$$
(3.1)

the Bregman iterative algorithm for finding the minimizer \(f^*\) approximately is given in [13], established in terms of the Bregman distance [14]. To solve the optimization problem (3.1) iteratively, \(f^{(k+1)}\) is obtained by solving its Euler-Lagrange equation [15]. Due to the penalty terms \(\Vert f\Vert _{F}\) and \(|f|_{TV,\beta }\), the corresponding Euler-Lagrange equation for the minimizer is nonlinear, so we propose to find the minimizer by the lagged diffusivity fixed point method [16]. Consider the optimization problem with respect to the image vector \(\mathbf {f}\):

$$\begin{aligned} \min \limits _{\mathbf {f}}J_{\beta }(\mathbf {f}):=\min \limits _{\mathbf {f}}\left( \frac{1}{2}\Vert \mathbf {P}(\mathbf {Ff})-\mathbf {P\hat{g}}^\delta \Vert _2^2+ {\alpha _1}\Vert \mathbf {f}\Vert _{2}^2+{\alpha _2}|\mathbf {f}|_{TV,\beta }\right) . \end{aligned}$$
(3.2)

In order to solve the Euler-Lagrange equation of (3.1), we need the derivatives of the data matching term and the penalty terms in (3.2). By straightforward computation, these derivatives have the following expressions:

$$\begin{aligned} \left\{ \begin{array}{ll} \nabla _{\mathbf {f}}\frac{1}{2}\Vert PF^TfF-P\hat{g}^{\delta }\Vert _F^2 =\mathbf {F}^*\mathbf {P}^*\mathbf {PFf}- \mathbf {F}^*\mathbf {P}^*\mathbf {P}\mathbf {\hat{g}^\delta },\\ \nabla _{\mathbf {f}}\Vert f\Vert _{F}^2=2(\mathcal {I}\otimes \mathcal {I})\mathbf {f},\\ \nabla _{\mathbf {f}}|f|_{TV,\beta }=\mathbf {L}[\mathbf {f}]\mathbf {f}, \end{array} \right. \end{aligned}$$
(3.3)

where \(\mathbf {P}=\mathcal {I}\otimes P\) and the \(N^2\times N^2\) matrix

$$\begin{aligned} \mathbf {L}[\mathbf {f}]:=(\mathcal {I}\otimes \mathcal {D})^T\varLambda [\mathbf {f}](\mathcal {I}\otimes \mathcal {D}) +(\mathcal {D}\otimes \mathcal {I})^T\varLambda [\mathbf {f}](\mathcal {D}\otimes \mathcal {I}) \end{aligned}$$
(3.4)

with

$$\begin{aligned} \left\{ \begin{array}{ll} \varLambda [\mathbf {f}]:=diag\left( \frac{1}{d_1[\mathbf {f}]},\cdots ,\frac{1}{d_{N^2}[\mathbf {f}]}\right) ,\\ d_i[\mathbf {f}]:=\sqrt{(\mathop {\sum }\nolimits _{l'=1}^{N^2}(\mathcal {I}\otimes \mathcal {D})_{i,l'}\mathbf {f}_{l'})^2 +(\mathop {\sum }\nolimits _{l'=1}^{N^2}(\mathcal {D}\otimes \mathcal {I})_{i,l'}\mathbf {f}_{l'})^2+\beta }, \end{array} \right. \end{aligned}$$

where \(i=i(m,n)=(n-1)N+m\), \(l'=l(m',n')=(n'-1)N+m'\) for \(m,n,m',n'=1,\cdots ,N\), and the \(N\times N\) circulant matrix \(\mathcal {D}:=\mathbf {circulant}(-1,0,\cdots ,0,1)\).
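For small N, the matrix \(\mathbf {L}[\mathbf {f}]\) in (3.4) can be assembled densely as follows (an illustrative sketch; a practical implementation would use sparse matrices, and the helper name assemble_L is ours):

```python
import numpy as np

def assemble_L(fvec, N, beta):
    """Assemble the N^2 x N^2 matrix L[f] of (3.4) (dense, for small N only).

    D has first column (-1, 0, ..., 0, 1), i.e. circulant(-1, 0, ..., 0, 1),
    so (I kron D) and (D kron I) realize the periodic forward differences of
    the TV gradient along the two image directions."""
    I = np.eye(N)
    D = np.roll(I, 1, axis=1) - I              # circulant difference matrix
    Gx = np.kron(I, D)                         # I ⊗ D
    Gy = np.kron(D, I)                         # D ⊗ I
    d = np.sqrt((Gx @ fvec)**2 + (Gy @ fvec)**2 + beta)   # d_i[f]
    Lam = np.diag(1.0 / d)                     # Λ[f]
    return Gx.T @ Lam @ Gx + Gy.T @ Lam @ Gy
```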

Based on (3.3), we can find the approximate minimizer by the following Bregman iterative algorithm.

Algorithm 1. The standard Bregman iterative algorithm.

According to (3.3), the stacking vector \(\mathbf {f}^{(l+1)}\) of the minimizer \(f^{(l+1)}\) of \(J_\beta (f)\) at the \(l\)-th step satisfies the following nonlinear equation:

$$\begin{aligned} N^2(\mathcal {I}\otimes P)\mathbf {f} +2\alpha _1(\mathcal {I}\otimes \mathcal {I})\mathbf {f}+\alpha _2\mathbf {L}[\mathbf {f}]\mathbf {f} =\mathbf {F}^*(\mathcal {I}\otimes P)\left( \mathbf {\hat{g}}^{\delta }-\mathbf {Ff}^{(l)}\right) , \end{aligned}$$
(3.5)

with the sampled data \(P\hat{g}^\delta \) and the spatial approximation \(f^{(l)}\) from the previous step; this constitutes the standard Bregman iterative algorithm. We now propose a new algorithm based on the Bregman iteration by introducing an inner recursion.

Notice that the real symmetric matrix \(\mathcal {I}\otimes P\) may not be invertible due to our finite sampling matrix P. Therefore an efficient algorithm should be developed for solving the nonlinear system (3.5) with unknown \(\mathbf {f}\in \mathbb {R}^{N^2\times 1}\). We apply the lagged diffusivity fixed point method [16].

Define \(\varLambda ^n[\mathbf {f}]:=\text {diag}(\frac{1}{d_{(n-1)N+1}[\mathbf {f}]},\cdots ,\frac{1}{d_{(n-1)N+N}[\mathbf {f}]})\), then \(\varLambda [\mathbf {f}]=\text {diag}(\varLambda ^1[\mathbf {f}], \cdots , \varLambda ^N[\mathbf {f}])\). Since

$$\begin{aligned} \mathbb {L}_1[\mathbf {f}]:=2\alpha _1(\mathcal {I}\otimes \mathcal {I}) +\alpha _2(\mathcal {I}\otimes \mathcal {D})^T\text {diag}(\varLambda ^1[\mathbf {f}],\cdots , \varLambda ^N[\mathbf {f}]) (\mathcal {I}\otimes \mathcal {D}) \end{aligned}$$

is a real symmetric positive definite block diagonal matrix and

$$\begin{aligned}&\mathbb {L}_2[\mathbf {f}]:=\alpha _2(\mathcal {D}\otimes \mathcal {I})^T\varLambda [\mathbf {f}](\mathcal {D}\otimes \mathcal {I})\\= & {} \alpha _2\left( \begin{array}{ccccc} \frac{1}{d_1[\mathbf {f}]}+\frac{1}{d_{N^2}[\mathbf {f}]}&{}-\frac{1}{d_1[\mathbf {f}]}&{}\cdots &{}0&{}-\frac{1}{d_{N^2}[\mathbf {f}]}\\ -\frac{1}{d_1[\mathbf {f}]}&{}\frac{1}{d_1[\mathbf {f}]}+\frac{1}{d_2[\mathbf {f}]}&{}\cdots &{}0&{}0\\ \vdots &{}\vdots &{}\ddots &{}\vdots &{}\vdots \\ 0&{}0&{}\cdots &{}\frac{1}{d_{N^2-2}[\mathbf {f}]}+\frac{1}{d_{N^2-1}[\mathbf {f}]}&{}-\frac{1}{d_{N^2-1}[\mathbf {f}]}\\ -\frac{1}{d_{N^2}[\mathbf {f}]}&{}0&{}\cdots &{}-\frac{1}{d_{N^2-1}[\mathbf {f}]}&{}\frac{1}{d_{N^2-1}[\mathbf {f}]}+\frac{1}{d_{N^2}[\mathbf {f}]} \end{array}\right) \end{aligned}$$

is a symmetric block matrix, we construct the inner iteration scheme, starting from the \(l\)-th step, for the nonlinear system (3.5) as

$$\begin{aligned} \mathbb {L}_1[\mathbf {f}^{(l)}]\mathbf {f}^{(l+1)}= & {} -\left( N^2(\mathcal {I}\otimes P)+\mathbb {L}_2[\mathbf {f}^{(l)}]\right) \mathbf {f}^{(l)}\nonumber \\&+\,(\mathcal {I}\otimes P)\left( \mathbf {F}^*\mathbf {\hat{g}^\delta }-\mathbf {F}^*\mathbf {Ff}^{(l)}\right) \end{aligned}$$
(3.6)

for \(l=0,1,\cdots \). Since \(\mathbb {L}_1[\mathbf {f}^{(l)}]\) is a known symmetric positive definite block diagonal matrix, the computational cost of solving for \(\mathbf {f}^{(l+1)}\) is affordable: each column vector of \(f^{(l+1)}\) can be computed separately by solving an \(N\)-dimensional linear system with a symmetric positive definite coefficient matrix, as sketched below.
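A sketch of this column-wise solve (assuming the dense \(N^2\times N^2\) matrix \(\mathbb {L}_1\) has been formed; the helper name solve_L1 is ours):

```python
import numpy as np

def solve_L1(L1, b, N):
    """Solve L1 x = b exploiting the block diagonal structure of L1:
    each N x N diagonal block is symmetric positive definite, so the N
    subsystems (one per image column) are solved independently."""
    x = np.empty_like(b)
    for n in range(N):
        s = slice(n * N, (n + 1) * N)
        x[s] = np.linalg.solve(L1[s, s], b[s])   # a Cholesky solve also works
    return x
```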

In the numerical experiments, we choose the regularized adjoint conjugate gradient method (ACGM) as the inner iteration scheme for solving (3.6). Let \(\mathbf {b}^{(l)}\) be the right-hand side of (3.6), which is the known part from the \(l\)-th step of the outer recursion, and let \(\mu \) be the prior regularizing parameter in ACGM. Then we have

$$\begin{aligned} \mathbf {f}^{(l+1)}_{k+1}=\mathbf {f}^{(l)}_k-\kappa ^{(l)}_k\left( \mu \mathbf {f}^{(l)}_k +\mathbb {L}_1[\mathbf {f}^{(l)}]^T(\mathbb {L}_1[\mathbf {f}^{(l)}] \mathbf {f}^{(l)}_k-\mathbf {b}^{(l)})\right) , \end{aligned}$$
(3.7)

where \(\kappa ^{(l)}_k\) is the step size at the \(k\)-th step of the inner recursion, defined as

$$\begin{aligned} \kappa ^{(l)}_k:=\frac{\langle -\mathbf {r}^{(l)}_k,(\mu \mathcal {I}+\mathbb {L}_1[\mathbf {f}^{(l)}]^T \mathbb {L}_1[\mathbf {f}^{(l)}])(-\mathbf {r}^{(l)}_k)\rangle }{\Vert (\mu \mathcal {I}+\mathbb {L}_1[\mathbf {f}^{(l)}]^T \mathbb {L}_1[\mathbf {f}^{(l)}])(-\mathbf {r}^{(l)}_k)\Vert _2^2}, \end{aligned}$$

from the classical successive over-relaxation (SOR) method, \(\langle \ ,\ \rangle \) is the \(L^2\) inner product, and \(\mathbf {r}^{(l)}_k:=\mu \mathbf {f}^{(l)}_k +\mathbb {L}_1[\mathbf {f}^{(l)}]^T(\mathbb {L}_1[\mathbf {f}^{(l)}] \mathbf {f}^{(l)}_k-\mathbf {b}^{(l)})\).

Notice that the initial value \(\mathbf {f}^{(l)}_0\) in the inner recursion can be chosen as \(\varvec{0}\) or \(\mathbf {f}^{(l)}\), and the stopping criterion may be a maximum iteration number \(K_0\) or another rule based on small values of the cost functional or small differences between successive iterates. Here we stop the iteration process when the difference between \(\mathbf {f}^{(l+1)}\) and \(\mathbf {f}^{(l)}\) is smaller than \(10^{-3}\). Finally, we obtain the scheme for finding the approximate minimizer as the following iterative algorithm with inner recursion (Algorithm 2); one inner ACGM sweep is sketched below.
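A minimal sketch of the inner ACGM recursion (3.7), under the assumption that the outer matrix \(\mathbb {L}_1[\mathbf {f}^{(l)}]\) and right-hand side \(\mathbf {b}^{(l)}\) stay fixed during the sweep (the function name acgm and its defaults are ours):

```python
import numpy as np

def acgm(L1, b, mu, f0, K0=50, tol=1e-3):
    """Regularized ACGM inner recursion for L1 f = b, following (3.7):
    descent steps on 0.5*||L1 f - b||^2 + 0.5*mu*||f||^2 with the step
    size kappa_k given in the text."""
    f = f0.copy()
    for _ in range(K0):                            # K0: maximum iteration number
        r = mu * f + L1.T @ (L1 @ f - b)           # residual r_k
        Ar = mu * r + L1.T @ (L1 @ r)              # (mu*I + L1^T L1) r_k
        kappa = (r @ Ar) / (Ar @ Ar)               # step size kappa_k
        f_new = f - kappa * r
        if np.linalg.norm(f_new - f) < tol:        # stop on a small update
            return f_new
        f = f_new
    return f
```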

Algorithm 2. The Bregman iterative algorithm with ACGM inner recursion.

4 Numerical Experiments

All the numerical tests are performed in MATLAB 7.10 on a laptop with an Intel Core i5 M460 CPU and 2 GB of memory.

We consider a model problem with \(\varOmega =[0,1]^2\) and \(N=128\). Define

$$\begin{aligned}&D_1:=\left\{ \varvec{x}=(x_1,x_2):\left( x_1-\frac{1}{4}\right) ^2+\left( x_2-\frac{1}{2}\right) ^2\le \frac{1}{64}\right\} ,\nonumber \\&D_2:=\left\{ \varvec{x}=(x_1,x_2):\left| x_1-\frac{3}{4}\right| \le \frac{1}{8},\left| x_2-\frac{1}{2}\right| \le \frac{1}{4}\right\} , \end{aligned}$$
(4.1)

and

$$\begin{aligned} f(\varvec{x}):= {\left\{ \begin{array}{ll} 1,&{}\varvec{x}\in D_1,\\ 2,&{}\varvec{x}\in D_2,\\ 0,&{}\varvec{x}\in \varOmega \setminus (D_1\cup D_2). \end{array}\right. } \end{aligned}$$
(4.2)

The function \(f(\varvec{x})\), together with its frequency function \(\log (|\hat{f}(\varvec{\omega })|)\) in the algorithm domain (i.e., after shifting as in Fig. 1), is shown in Fig. 2(a) and (b). Obviously, the frequency data in the centre (or in the four corners before shifting) carry the main information about the image, so we should sample these data as much as possible.

Fig. 2. (a) \(f(\varvec{x})\); (b) Frequency function \(\log (|\hat{f}(\varvec{\omega })|)\) after shifting; (c) and (d) Sampled noisy frequency data \(P_{60}\log (|\hat{g}^\delta (\varvec{\omega })|)\) and \(P_{90}\log (|\hat{g}^\delta (\varvec{\omega })|)\) after shifting.

First we generate the full noisy data \(g_{m,n}^\delta \) from the exact image \(f_{m,n}\) by

$$\begin{aligned} g_{m,n}^\delta =f_{m,n}+\delta \times rand(m,n), \end{aligned}$$

where \(m,n=1,\cdots ,N\) and rand(m, n) are random numbers in \([-1,1]\). The mesh images of the initial image and the noisy image are shown in Fig. 3. Then the full noisy frequency data are simulated by

$$\begin{aligned} \hat{g}_{m',n'}^\delta =\mathcal {F}[g_{m,n}^\delta ],\ m',n'=1,\cdots ,N. \end{aligned}$$
(4.3)

So, with the random band row sampling method using the sampling matrix P, \(P\hat{g}^\delta \) is the incomplete noisy input data produced by the row sampling process; a minimal sketch of this data simulation follows.
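The sketch below reuses f and N from the sketch after (4.2) and the sampling matrix P from the sketch after Fig. 1, and takes np.fft.fft2 as a stand-in for the paper's 2-D DFT convention (an assumption; the conventions differ by index offsets only):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1
g = f + delta * rng.uniform(-1.0, 1.0, size=(N, N))  # noisy image g^delta
g_hat = np.fft.fft2(g)          # full noisy frequency data, as in (4.3)
data = P @ g_hat                # incomplete noisy input data P * g_hat^delta
```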

Fig. 3. The mesh of the initial image \(f(\varvec{x})\) and the noisy image \(g^\delta (\varvec{x})\).

Take \(\alpha _1=1000,\alpha _2=0.001,\beta =\mu =0.0001\), and the noise level \(\delta =0.1\). For the row sampling process, we consider two schemes, taking \(M_0=60, cenR=0.3\) and \(M_0=90, cenR=0.3\), so the sampling ratios are \(M_0/N=60/128=46.88\%\) and \(90/128=70.31\%\), respectively. To compare the restoration performance when more sampling data are applied, we require that the data for \(M_0=60\) be included in the data set for \(M_0=90\). To ensure the validity of the tests, the random numbers rand(m, n) and the sampling rows are fixed in each part. We then obtain the sampling matrices \(P_{60},P_{90}\) with \(p_{ii}=1\) only at the following locations:

$$\begin{aligned} i\in & {} \{1-5,16,17,18,23,34,37,39,40,43,44,45,47-55,\nonumber \\&58,60,61,63,64,70,76-79,81,82,83,86,88-91,\nonumber \\&94,95,97,98,100,101,103,105,107-112,125-128\} \end{aligned}$$
(4.4)

and

$$\begin{aligned} i\in & {} \{1-14,16,17,18,23,27-32,34,37,39,40,43,44,45,47-55,\nonumber \\&58,60,61,63,64,70,72,73,74,76-79,81,82,83,86,88,89-91,\nonumber \\&94,95,97,98,100,101,103,105,107-112,114-128\} \end{aligned}$$
(4.5)

respectively. Figure 2(c) and (d) show the two-dimensional images of the incomplete noisy frequency data with \(P_{60}\) and \(P_{90}\) after shifting.

In our iteration process, the number of Bregman iterations is \(L_0=20\), and the initial value in the ACGM inner recursion is \(f^{(l)}\). We compare Algorithm 2 with Algorithm 1, i.e., we compare the proposed scheme to the Bregman iterative algorithm without inner recursion. Figure 4(a) and (b) give the reconstructed image \(f^*\) with \(P_{60}\) and \(P_{90}\) by our proposed algorithm, while Fig. 4(c) shows the reconstructed image \(f^*\) with \(P_{90}\) by the standard Bregman iterative algorithm.

Fig. 4. (a), (b) The reconstruction \(f^*\) with \(P_{60}\), \(P_{90}\) by our Bregman iterative algorithm with ACGM inner recursion; (c) the reconstruction \(f^*\) with \(P_{90}\) by the standard Bregman iterative algorithm without inner recursion.

Our numerical implementations show that the algorithm based on the random band sampling method can reconstruct a piecewise smooth image with good edge preservation. Considering that we apply noisy data with 10% relative error and that the unused sampling data (the lost data) amount to more than \(50\%\) and \(30\%\), respectively, the image restorations based on the Bregman iterative algorithm with ACGM inner recursion are satisfactory. However, the reconstruction can only restore the relative grey levels in the whole image; the exact values cannot be recovered efficiently. The numerical evidence for this phenomenon is that the reconstructed image \(f^*\) with sampling matrix \(P_{90}\) has clear interfaces, while the interfaces of \(f^*\) with \(P_{60}\) are worse.

5 Conclusion

An efficient algorithm for image restoration based on \(L^2\)-TV regularization penalty terms is established. The data matching term of the optimization model uses only limited data in the frequency domain, obtained by a random band sampling process. The new idea is that the scheme consists of two iterations: the Bregman iteration and, as an inner recursion, the adjoint conjugate gradient method. To solve the optimization problem, the Bregman iteration with the lagged diffusivity fixed point method is used to solve the nonlinear Euler-Lagrange equation of the modified reconstruction model. For the inner recursion, taking the initial value from the \(l\)-th outer recursion decreases the inner iteration time. The experimental results demonstrate that the proposed algorithm with random band sampling is very efficient for recovering a piecewise smooth image from limited frequency data, compared with the standard Bregman iterative algorithm.