A neighborhood regression approach for removing multiple types of noises
Abstract
Image denoising is an important first step that provides clean images for follow-up tasks such as image segmentation and object recognition. Many image denoising filters have been proposed, most of them focusing on one particular type of additive or multiplicative noise. In this article, we propose a novel neighborhood regression approach. Using the neighboring pixels as predictors, our approach performs well over multiple types of noises, including Gaussian, Poisson, a mixture of Gaussian and Poisson, salt & pepper, and stripped noise. Our L_{2} regression filter can be parallelized to significantly speed up the denoising of a large number of noisy images. Meanwhile, our regression approach needs no tuning parameters, no training images, and no prior knowledge of the noise variance. Instead, our regression filter can accurately estimate the variance of the added Gaussian noise. We have performed extensive experiments comparing our regression filter with popular denoising filters, including BM3D, the median filter, and the wavelet filter, to demonstrate the superb performance of our proposed regression filter.
Keywords
Image denoising; Neighborhood L_{2} regression filter; Poisson noise; Stripped noise; Salt & pepper noise; Gaussian noise

Abbreviations
MSE: Mean square error
PSNR: Peak signal-to-noise ratio
SSIM: Structural similarity
1 Introduction
Several works in the literature have addressed image restoration and noise reduction; however, most models address Gaussian noise. Many state-of-the-art denoising algorithms utilize structures and characteristics of the noisy image, such as self-similarity, sparsity, and fixed representations, to filter out the noise, and in some cases require a database of images of the same object. Other methods use supervised learning and predetermined characteristics to attenuate the noise. Nonetheless, most denoising tools have difficulties tackling severe noise or non-Gaussian noise. In this paper, we develop an efficient statistical model for image denoising that is based on regressing each noisy pixel on its neighboring pixels. Our regression filter is a novel approach and a new concept for image denoising. The basic idea of the regression filter is to incorporate the neighboring noisy pixels to predict the value of a given pixel. The fundamental statistical observation is that there is strong association among neighboring pixels. The underlying regression models make no assumption about the statistical characteristics of the noise distribution, which makes the filter capable of handling different types of noises at varying levels of severity.
The model competes with popular and well-established image denoising algorithms, including the median filter [21], the wavelet filter [10, 11], and BM3D [9]. Unlike the BM3D filter, which uses collaborative filtering, or wavelet filters, which use hard-thresholding, the regression filter does not rely on sparse representation in a transform domain or on patches of sub-images, called blocks, grouped into 3D arrays to reduce the noise. The regression filter also requires no tuning parameter or threshold value. Computationally, the regression filter does not require training on large sets of images, as most deep learning or neural network algorithms do [6]. Our regression filter is computationally efficient and returns excellent denoising results. We have done extensive experiments on different types of noise, such as Gaussian, Poisson, a mixture of Gaussian and Poisson, salt & pepper, and stripped noise, at different levels of noise severity. Due to the robustness of the regression model, our filter handles severe noise efficiently and effectively. Meanwhile, a most significant contribution of the regression filter is its ability to accurately estimate the variance of Gaussian noise. Finally, we evaluate our regression filter on 100 images.
The road map for the paper is as follows: Section 2 discusses image denoising algorithms in the literature. In Section 3, we present our regression filter; the general neighborhood regression approach is adjusted for each type of noise to achieve the best denoising results. Section 4 discusses the selected neighborhood size, which balances the regression model size against its performance. In Section 5, we run extensive experiments on 100 images to demonstrate the strong performance of our regression filter. Section 6 concludes the paper.
2 Related work
A large number of denoising algorithms work solely on Gaussian noise. The literature documents different approaches to tackling noise, but many need a tuning parameter related to the variance of the Gaussian noise. The non-local means filter takes the mean of pixels whose Gaussian neighborhood resembles the neighborhood of a given pixel [4, 5]. The Gaussian smoothing model is introduced in [22]. In [15], the noise is reduced by introducing a Gaussian kernel density. The anisotropic filtering method [32] convolves the image pixel in the direction orthogonal to the gradient of the pixel. Other filtering techniques include the Gaussian scale mixture modeling in [29], the principal component analysis approach in [34], the fast iterative shrinkage and thresholding method in [2, 23], and a model based on the Euler-Lagrange equations to reduce the noise [3].
Many models use sparse and redundant representations over trained dictionaries. Elad et al. [12] obtain a dictionary to describe the image content. Extensions of this approach train a sparse dictionary for the noisy image [33]. The principal neighborhood dictionary [35] is another dictionary-based approach for image denoising. Mairal et al. [26] implement simultaneous sparse coding, which combines the non-local means approach with dictionary learning.
Patch-based filters implement a linear combination of image patches from the noisy image, fit in the total least squares sense [18]. An optimal spatial adaptation method for patch-based image denoising uses point-wise selection of small image patches [19]. The patch-based Wiener filter exploits patch redundancy [7]. Ghimpeteanu et al. [16] describe a method in which an image decomposition technique is implemented. Levin and Nadler [20] present a non-parametric approach that incorporates the distribution of natural images based on a huge set of patches. The popular image denoising algorithm BM3D is a block-matching and 3D filtering approach [8], in which the denoising is based on an enhanced sparse representation in the transform domain [6, 8, 9]. The enhanced sparsity is achieved by grouping similar 2D image fragments, called blocks, into 3D data arrays [9]. BM3D effectively filters the noise in the 3D transform domain. BM3D requires knowing the variance of the Gaussian noise; if no value is given, the algorithm assumes the default value σ = 50.
Wavelet-based filters are also very popular for image denoising. Parrilli et al. [31] use non-local filtering and wavelet-domain shrinkage. Yu et al. [41] combine wavelet-based trivariate shrinkage with a spatial-domain filter. Yaroslavsky and Eden [39] and Yaroslavsky [40] utilize the neighborhood filtering method to attenuate the noise. Stein's unbiased risk estimate approach [24] is an orthonormal wavelet thresholding approach. Eslami and Radha [13] implement the contourlet transform. The bilateral filter [42] is a non-linear filter performing spatial averaging. Yan et al. [38] exploit the sparsity of wavelets and employ hierarchical dictionary learning at each level of the wavelets.
Several works in the literature have addressed Poisson noise, such as the linear expansion of thresholds for mixed Poisson-Gaussian noise in [25] and the optimal inversion of the Anscombe transformation in low-count Poisson image denoising in [27]. As for salt & pepper noise, the median filter [21] is a popular approach.
Regression approaches have been used in image denoising as well as in face recognition. Wright et al. [36] utilize sparse representation for robust face recognition. Zhang et al. [43] use matrix norm-based regression models for robust face recognition. Gu et al. [17] use a weighted nuclear norm minimization algorithm for denoising. Xie et al. [37] use a weighted Schatten p-norm minimization algorithm for denoising.
3 Methods—a neighborhood regression approach
In this paper, we introduce an L_{2} regression filter that removes multiple types of noises with excellent performance. For every pixel, our regression filter uses the square neighborhood of radius d as the predictors, with the pixel itself as the response in the L_{2} regression. Therefore, our filter utilizes all available square patches to denoise an image.
Description of Poisson, salt & pepper, stripped, Gaussian, and mixed Gaussian and Poisson noise:

Poisson: Poisson transformation of pixels
Salt & pepper: Corrupted black or white pixels
Stripped: Horizontal or vertical lines
Gaussian: Additive normal noise
Gaussian and Poisson: Poisson noise followed by Gaussian noise
L_{2} regression: weighted sum of orthogonal axes. We present our regression filter for removing multiple types of noises. Our filter performs well because there is strong association between a pixel and its neighboring pixels. Our regression filter estimates the denoised value of a pixel P(i,j) using the noisy pixels in a square neighborhood of radius d around pixel (i,j). The square neighborhood of radius d contains 4d(d + 1) pixels, i.e., the pixels P(i ± k, j ± l), 0 ≤ k,l ≤ d, excluding P(i,j) itself. We convert a 2D noisy image P into a response vector Y and a predictor matrix X for the regression model, and then we regress the response Y on the predictor matrix X.
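As a quick sanity check on the neighborhood size, the (2d + 1) × (2d + 1) window minus the center pixel indeed contains 4d(d + 1) pixels; a small Python snippet illustrating the count (our notation, not the authors' code):

```python
def neighborhood_size(d):
    # Pixels in the (2d+1) x (2d+1) square window, excluding the center.
    return (2 * d + 1) ** 2 - 1

# Matches the closed form 4d(d+1) used in the text.
for d in range(1, 7):
    assert neighborhood_size(d) == 4 * d * (d + 1)

print(neighborhood_size(5))  # 120 neighbors for the radius used later
```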
We assume the boundaries of an image are reflective; that is, a point outside the boundary takes the value of its mirror image inside the boundary. For the noisy pixels near the boundary, we extend the image by this symmetry.
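This reflective extension corresponds to symmetric padding; a minimal NumPy sketch (the paper's own implementation is in Matlab):

```python
import numpy as np

# Reflective (symmetric) padding: pixels outside the image mirror the
# interior, so every pixel has a full radius-d neighborhood.
d = 2
P = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
P_pad = np.pad(P, d, mode="symmetric")

# The interior of the padded image is the original image, and the first
# reflected row outside the boundary equals row 0 of the image.
assert np.array_equal(P_pad[d:-d, d:-d], P)
assert np.array_equal(P_pad[d - 1, d:-d], P[0])
```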
The actual relationship between a pixel and its neighboring pixels in a noisy image P depends on the type of noise. Hence, we have different regression models for different types of noises. The response vector Y and the predictor matrix X for each type of noise are described in detail in Sections 3.1 to 3.5. Here is a general description of the procedure to obtain the denoised image.
The normality of the error term is one of the main assumptions in regression analysis. Yet one advantage of the regression model is that it is robust to minor violations of normality and works well under mild skewness. It is because of this property that our regression filter outperforms other algorithms at severe noise levels and tackles multiple types of noises. Our regression filter can easily be implemented in parallel [1, 28] for efficient processing of a large number of noisy images. For the best denoising results, we run 3–4 passes of our regression filter on a noisy image; more passes do not significantly improve the result. A noisy image can also be sliced into several pieces, with the regression filter applied to each piece, for better denoising results. Next, we present the regression models for each type of noise.
3.1 Regression model for Poisson noise
Poisson noise is applied to the image pixel-wise: each noisy pixel is drawn from a Poisson distribution whose mean equals the clean pixel value.^{1}
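A sketch of this noise model in NumPy (the paper uses Matlab's imnoise, whose internal scaling may differ; this is only an illustration of the pixel-wise Poisson draw):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pixel-wise Poisson noise: each noisy pixel is a Poisson draw whose
# mean is the clean pixel value (intensities on a 0..255 scale).
U = np.full((64, 64), 100.0)          # toy constant "clean" image
P = rng.poisson(U).astype(float)      # noisy image

# For Poisson(lambda), both the mean and the variance are ≈ lambda.
print(P.mean(), P.var())
```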

The length of the response vector Y^{ P } equals the number of pixels in the noisy image P, n^{2}. For Poisson noise, an element of the response vector is the square root of pixel (i,j) of the noisy image P, \(Y^{P}[\!r]~=~\sqrt {P(i,j)}\). The square-root transformation is derived from the Box-Cox procedure.

The number of rows of the matrix X^{ P } is also n^{2}. For an element of the response vector, \(Y^{P}[\!r]~=~\sqrt {P(i,j)}\), the corresponding row X^{ P }[ r,:] contains 1 (for the intercept), P(i ± k,j ± l), P^{2}(i ± k,j ± l), and P^{3}(i ± k,j ± l), with 0 ≤ k,l ≤ d excluding (k,l) = (0,0); that is, the linear, squared, and cubic terms of the noisy pixels in P(i,j)'s radius-d square neighborhood. The size of the matrix X^{ P } is n^{2} × (12d(d + 1) + 1).

The length of ω^{ P } is 12d(d + 1) + 1. The denoised pixels are \(\left (\hat {Y^{P}}\right)^{2}\).
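The construction above can be sketched in Python/NumPy rather than the authors' Matlab; a minimal single-pass version with a small radius, where `regression_filter_poisson` is an illustrative helper name:

```python
import numpy as np

def regression_filter_poisson(P, d=1):
    # One pass of the L_2 regression filter for Poisson noise:
    # response sqrt(P(i,j)); predictors are the linear, squared, and
    # cubic terms of the 4d(d+1) neighbors, plus an intercept.
    n, m = P.shape
    Ppad = np.pad(P.astype(float), d, mode="symmetric")

    cols = []
    for k in range(-d, d + 1):
        for l in range(-d, d + 1):
            if (k, l) == (0, 0):
                continue  # the center pixel is the response, not a predictor
            cols.append(Ppad[d + k:d + k + n, d + l:d + l + m].ravel())
    N = np.column_stack(cols)                      # n*m x 4d(d+1)

    X = np.hstack([np.ones((n * m, 1)), N, N**2, N**3])
    Y = np.sqrt(P.astype(float).ravel())           # Box-Cox square-root response

    w, *_ = np.linalg.lstsq(X, Y, rcond=None)      # ordinary least squares
    return (X @ w).reshape(n, m) ** 2              # undo the square root

# Toy check on a smooth image: the fit should reduce the error.
rng = np.random.default_rng(0)
U = np.outer(np.linspace(50, 200, 32), np.ones(32))   # clean gradient image
P = rng.poisson(U).astype(float)                       # Poisson-noisy image
D = regression_filter_poisson(P, d=1)
print(np.mean((D - U) ** 2) < np.mean((P - U) ** 2))
```

This sketch fits a single global regression over all pixels; the paper additionally runs 3–4 passes and, for some noise types, slices the image into pieces.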
Our L_{2} regression filter is effective at removing Poisson noise because the regression model is robust against slight skewness and minor violations of the normality assumption.
In Fig. 1, we compare our regression filter with median filter^{2}, BM3D^{3}, and wavelet filter^{4}. We set radius d = 5. We measure the peak signaltonoise ratio (PSNR) and the structural similarity (SSIM) of each image. Our L_{2} regression filter outperforms all other filters and has the best denoising results in PSNR and SSIM, 32.29 and 0.83, respectively. More extensive experiments are in Section 5.
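PSNR, used throughout these comparisons, has a standard closed form; a small sketch (SSIM is more involved and is available in, e.g., scikit-image; `psnr` here is our illustrative helper, not the authors' code):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 5.0)          # constant error of 5 gray levels
print(round(psnr(a, b), 2))       # 10*log10(255^2 / 25) ≈ 34.15
```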
3.2 Regression model for salt & pepper noise

Length of Y^{ S P } is n^{2}. An element of the response vector is Y^{ S P }[ r] = P(i,j).

For an element Y^{ S P }[ r] = P(i,j), the corresponding row X^{ S P }[ r,:] contains 1 (for the intercept) and P(i ± k,j ± l), with 0 ≤ k,l ≤ d excluding (k,l) = (0,0). The predictors include only the linear terms of the noisy pixels in P(i,j)'s radius-d square neighborhood; higher-order terms significantly increase the model size, yet yield little gain in denoising results. The size of the matrix X^{ S P } is n^{2} × (4d(d + 1) + 1).

Length of ω^{ S P } is 4d(d + 1) + 1. The denoised pixels are \(\hat {Y}^{SP}\).
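The linear-only design matrix for salt & pepper noise can be sketched as follows (Python/NumPy; `design_matrix_linear` is an illustrative helper name, not the authors' code):

```python
import numpy as np

def design_matrix_linear(P, d):
    # Linear-terms-only design matrix for salt & pepper noise:
    # an intercept column plus the 4d(d+1) neighboring pixel values.
    n, m = P.shape
    Ppad = np.pad(P.astype(float), d, mode="symmetric")
    cols = [np.ones(n * m)]
    for k in range(-d, d + 1):
        for l in range(-d, d + 1):
            if (k, l) != (0, 0):
                cols.append(Ppad[d + k:d + k + n, d + l:d + l + m].ravel())
    return np.column_stack(cols)

P = np.zeros((10, 10))
d = 5
X = design_matrix_linear(P, d)
# n^2 rows and 4d(d+1) + 1 columns, as stated in the text.
assert X.shape == (100, 4 * d * (d + 1) + 1)
```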
3.3 Regression model for stripped noise
In some applications, the noise induced on the image is stripped, in which a noisy line forms horizontally or vertically across the image. Stripped lines are especially present in earth imaging due to malfunctions in sensors that create dark stripes [30]. In this section, we consider horizontal lines; the procedure is the same for vertical stripes.

Length of Y^{ S } is n^{2}. An element of the response vector is Y^{ S }[ r] = P(i,j).

Because of multicollinearity issues with stripped noise, the horizontal neighboring pixels are excluded. For Y^{ S }[ r] = P(i,j), the corresponding row X^{ S }[ r,:] contains 1 (for the intercept), P(i ± k,j + l), and P(i ± k,j − l), with 0 ≤ k ≤ d and 1 ≤ l ≤ d, i.e., only the linear terms. The size of the matrix X^{ S } is n^{2} × (2d(2d + 1) + 1).

Length of ω^{ S } is 2d(2d + 1) + 1. The denoised pixels are \(\hat {Y}^{S}\).
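The predictor set for stripped noise can be written as a list of offsets; a small Python check that the count matches the 2d(2d + 1) stated above (the offsets with l = 0 are dropped, per the multicollinearity argument):

```python
d = 5

# Offsets (k, l) kept for the stripped-noise model: all row offsets
# -d..d, but only nonzero column offsets, per the text above.
offsets = [(k, l)
           for k in range(-d, d + 1)
           for l in range(-d, d + 1)
           if l != 0]

# (2d+1) row offsets times 2d nonzero column offsets = 2d(2d+1).
assert len(offsets) == 2 * d * (2 * d + 1)
print(len(offsets))  # 110 predictors (plus the intercept) for d = 5
```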
For stripped noise, we divide a noisy image into four quarters and apply the regression filter to each quarter to enhance the performance.
Figure 5 shows 20% additive stripped noise; the length of each stripe equals the width of the image. Our regression filter has the best denoising results in both measures, with a PSNR of 27.82 and an SSIM of 0.73.
3.4 Regression model for Gaussian noise
Gaussian noise is the most common noise among the several types of noises considered in this paper. Gaussian noise is determined by its variance σ^{2}.^{7}

Length of the response vector Y^{ G } is n^{2}. An element of the response vector is Y^{ G }[ r] = P(i,j).

For Y^{ G }[ r] = P(i,j), the corresponding row X^{ G }[ r,:] contains 1 (for the intercept), P(i ± k,j ± l), P^{2}(i ± k,j ± l), and P^{3}(i ± k,j ± l), with 0 ≤ k,l ≤ d excluding (k,l) = (0,0); that is, the linear, squared, and cubic terms of the noisy pixels in P(i,j)'s radius-d square neighborhood. The size of the matrix X^{ G } is n^{2} × (12d(d + 1) + 1).

Length of ω^{ G } is 12d(d + 1) + 1. The denoised pixels are \(\hat {Y}^{G}\).
For Gaussian noise, we divide a noisy image into four quarters and apply the regression filter to each quarter to enhance the performance.
Figure 7 demonstrates the performance of our regression filter with radius d = 5 (120 pixels in the square neighborhood). We compare our regression filter with the median, BM3D, and wavelet filters and measure the PSNR and SSIM of each image. Our regression filter has the highest PSNR, 29.83. We note the regression model has an adjusted R^{2} of 0.6235, and the coefficients sum to approximately 1.
3.5 Regression model for a mixture of Gaussian and Poisson noise
The mixture of Poisson and Gaussian noise is another type of noise, as mentioned earlier in this paper. It is induced on the image U by applying Poisson noise first and then adding Gaussian noise to get the final noisy image P.^{8}
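A NumPy analogue of the Matlab code in the footnote, illustrating the two-stage noise (Poisson first, then additive Gaussian with standard deviation σ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixed noise: Poisson noise applied first, then additive Gaussian.
U = np.full((64, 64), 100.0)   # toy constant "clean" image
sigma = 20.0
P = rng.poisson(U) + sigma * rng.standard_normal(U.shape)

# Variances of independent noise sources add: Var ≈ lambda + sigma^2.
print(P.var())   # ≈ 100 + 400 = 500
```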

Length of the response vector Y^{ G P } is n^{2}. An element of the response vector is Y^{ G P }[ r] = P(i,j).

For an element Y^{ G P }[ r] = P(i,j), the corresponding row X^{ G P }[ r,:] contains 1 (for the intercept), P(i ± k,j ± l), P^{2}(i ± k,j ± l), and P^{3}(i ± k,j ± l), with 0 ≤ k,l ≤ d excluding (k,l) = (0,0); that is, the linear, squared, and cubic terms of the noisy pixels in P(i,j)'s radius-d square neighborhood. The size of the matrix X^{ G P } is n^{2} × (12d(d + 1) + 1).

Length of ω^{ G P } is 12d(d + 1) + 1. The denoised pixels are \(\hat {Y}^{GP}\).
3.6 Gaussian noise variance estimation
4 Neighborhood size
In this section, we implement the regression filter on six sample images, as shown in Section 3.2, to determine the neighborhood radius d value needed to achieve good denoising results. We use the models described in Section 3 for each type of noise.
PSNR and SSIM averaged over six images for increasing radius d (Poisson noise)

         d = 1   d = 2   d = 3   d = 4   d = 5   d = 6
PSNR     31.21   31.93   32.09   32.18   32.22   32.25
SSIM     0.813   0.839   0.842   0.844   0.845   0.845
Average computation time in seconds over six images for removing different types of noises

                       Reg.   BM3D   Wav.   Med.
Poisson                0.70   2.81   0.33   0.15
Salt & pepper          0.68   1.20   0.33   0.15
Stripped               0.71   2.36   0.64   0.14
Gaussian               0.93   2.34   0.35   0.16
Gaussian and Poisson   0.93   3.06   0.34   0.16
5 Results and discussion—experiments
In this section, we conduct extensive experiments to demonstrate the performance of the regression filter against BM3D, median, and wavelet filters for the different types of noise. We examine the denoising rate on a sample of 100 test images from [14] to show the superb performance of our regression filter.
As shown in Figs. 4, 6, and 8, we further demonstrate the strong performance of our approach for removing five different types of noises over one hundred images in this section. BM3D has better performance for light Gaussian noise; however, our approach is much more robust, and its performance degrades much more slowly as the Gaussian noise intensity increases. Our approach outperforms BM3D for σ > 100. For Gaussian noise, our approach is able to accurately estimate σ, and the estimated σ helps BM3D improve its performance. To summarize the experimental results: our approach has the best performance compared with the other three approaches for Poisson noise, salt & pepper noise with noise levels up to 90%, stripped noise with noise levels up to 90%, Gaussian noise with σ > 100, and mixed Gaussian and Poisson noise with σ > 100.
6 Conclusions
In this work, we present a novel neighborhood regression approach to tackle different types of noise, including Gaussian, Poisson, a mixture of Gaussian and Poisson, salt & pepper, and stripped noise. Since the regression model is robust against mild violations of the normality assumption on the noise term, our regression filter is able to efficiently filter out different types of noise. Increasing the neighborhood size improves the performance of our regression filter but also increases the model size; balancing the two factors, we recommend radius d = 5. Meanwhile, there are established methods for parallel implementation of the L_{2} regression model, which can be applied to our approach to efficiently process a large number of noisy images.
Our regression filter does not require any tuning parameter, such as an estimated variance of the added Gaussian noise, nor does it need a thresholding decision such as soft-thresholding or hard-thresholding. Instead, our regression model is able to accurately estimate the variance of the added Gaussian noise, which can be used in BM3D to optimize its performance. Our regression filter does not require pre-training on a large set of images either. As we have demonstrated in extensive experiments, the performance of the regression filter surpasses state-of-the-art denoising filters.
Footnotes
1. Matlab function imnoise with option 'poisson'
2. Matlab function medfilt2
3. BM3D code is from https://github.com/glemaitre/BM3D
4. Matlab functions ddencmp and wdencmp
5. Matlab function imnoise with option 'salt & pepper'
6. imageprocessingplace.com/root_files_V3/image_databases.htm
7. Matlab code P = U + σ × randn(size(U))
8. P = imnoise(U, 'poisson'); P = P + σ × randn(size(U))
Notes
Acknowledgements
Not applicable.
Funding
This work is partly supported by NSF DMS1228348 and ARO W911NF1710356.
Availability of data and materials
www.stat.purdue.edu/~xbw/JIVP/6SampleImages
Link to the 6 sample clean images used for display.
www.stat.purdue.edu/~xbw/JIVP/100CleanImages
Link to the 100 clean images used in the experiments.
Authors’ contributions
Both authors contributed to the original research in the paper. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. MR Adjout, F Boufares, in 2014 Tenth International Conference on Signal-Image Technology and Internet-Based Systems. A massively parallel processing for the multiple linear regression (IEEE, Marrakech, 2014), pp. 666–671.
2. A Beck, M Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009).
3. J Bai, X Feng, Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 16(10), 2492–2502 (2007).
4. A Buades, B Coll, JM Morel, Image denoising methods. A new nonlocal principle. SIAM Rev. 52(1), 113–147 (2010).
5. A Buades, B Coll, JM Morel, A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005).
6. H Burger, C Schuler, S Harmeling, in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Image denoising: can plain neural networks compete with BM3D? (IEEE, Providence, 2012), pp. 2392–2399.
7. P Chatterjee, P Milanfar, Patch-based near-optimal image denoising. IEEE Trans. Image Process. 21(4), 1635–1649 (2012).
8. K Dabov, A Foi, V Katkovnik, K Egiazarian, in Electronic Imaging 2006. Image denoising with block-matching and 3D filtering (SPIE, San Jose, 2006).
9. K Dabov, A Foi, V Katkovnik, K Egiazarian, Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007).
10. RA DeVore, B Jawerth, BJ Lucier, Image compression through wavelet transform coding. IEEE Trans. Inform. Theory 38(2), 719–746 (1992).
11. DL Donoho, De-noising by soft-thresholding. IEEE Trans. Inform. Theory 41(3), 613–627 (1995).
12. M Elad, M Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12), 3736–3745 (2006).
13. R Eslami, H Radha, Translation-invariant contourlet transform and its application to image denoising. IEEE Trans. Image Process. 15(11), 3362–3374 (2006).
14. M Everingham, LV Gool, C Williams, J Winn, A Zisserman, The PASCAL Visual Object Classes Challenge 2012 (VOC2012) results. http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html. Accessed 22 Mar 2018.
15. D Gabor, Information theory in electron microscopy. Lab. Investig. 14(6), 801–807 (1965).
16. G Ghimpeteanu, T Batard, M Bertalmio, S Levine, A decomposition framework for image denoising algorithms. IEEE Trans. Image Process. 25(1), 388–399 (2016).
17. S Gu, L Zhang, W Zuo, X Feng, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Weighted nuclear norm minimization with application to image denoising (IEEE, Columbus, 2014), pp. 2862–2869.
18. K Hirakawa, TW Parks, Image denoising using total least squares. IEEE Trans. Image Process. 15(9), 2730–2742 (2006).
19. C Kervrann, J Boulanger, Optimal spatial adaptation for patch-based image denoising. IEEE Trans. Image Process. 15(10), 2866–2878 (2006).
20. A Levin, B Nadler, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Natural image denoising: optimality and inherent bounds (IEEE, Barcelona, 2011), pp. 2833–2840.
21. JS Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, Englewood Cliffs, 1990).
22. M Lindenbaum, M Fischer, A Bruckstein, On Gabor contribution to image enhancement. Pattern Recognit. 27, 1–8 (1994).
23. H Liu, R Xiong, J Zhang, W Gao, in IEEE Conference on Computer Vision and Pattern Recognition. Image denoising via adaptive soft-thresholding based on non-local samples (IEEE, Boston, 2015), pp. 484–492.
24. F Luisier, T Blu, M Unser, A new SURE approach to image denoising: interscale orthonormal wavelet thresholding. IEEE Trans. Image Process. 16(3), 593–606 (2007).
25. F Luisier, T Blu, M Unser, Image denoising in mixed Poisson–Gaussian noise. IEEE Trans. Image Process. 20(3), 696–708 (2011).
26. J Mairal, F Bach, J Ponce, G Sapiro, A Zisserman, in 2009 IEEE 12th International Conference on Computer Vision. Non-local sparse models for image restoration (IEEE, Kyoto, 2009), pp. 2272–2279.
27. M Mäkitalo, A Foi, Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Trans. Image Process. 20(1), 99–109 (2011).
28. X Mingxian, JJ Miller, EJ Wegman, Parallelizing multiple linear regression for speed and redundancy: an empirical study. J. Stat. Comput. Simul. 39(4), 205–214 (1991).
29. M Miller, N Kingsbury, Image denoising using derotated complex wavelet coefficients. IEEE Trans. Image Process. 17(9), 1500–1511 (2008).
30. M Mobasheri, S Zendehbad, Detection and elimination of striped noise in CHRIS-Proba sensor images. Symp. Adv. Sci. Technology, Commission IV, 68–72 (2014).
31. S Parrilli, M Poderico, CV Angelino, L Verdoliva, A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 50(2), 606–616 (2012).
32. P Perona, J Malik, Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990).
33. M Protter, M Elad, Image sequence denoising via sparse and redundant representations. IEEE Trans. Image Process. 18(1), 27–35 (2009).
34. T Tasdizen, in ICIP 2008, 15th IEEE International Conference on Image Processing. Principal components for non-local means image denoising (IEEE, San Diego, 2008), pp. 1728–1731.
35. T Tasdizen, Principal neighborhood dictionaries for nonlocal means image denoising. IEEE Trans. Image Process. 18(12), 2649–2660 (2009).
36. J Wright, AY Yang, A Ganesh, SS Sastry, Y Ma, Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009).
37. Y Xie, S Gu, Y Liu, W Zuo, W Zhang, L Zhang, Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 25(10), 4842–4857 (2016).
38. R Yan, L Shao, Y Liu, Nonlocal hierarchical dictionary learning using wavelets for image denoising. IEEE Trans. Image Process. 22(12), 4689–4698 (2013).
39. L Yaroslavsky, M Eden, Fundamentals of Digital Optics (Birkhäuser, Boston, 1996).
40. L Yaroslavsky, in Proceedings of Wavelet Applications in Signal and Image Processing IV. Local adaptive image restoration and enhancement with the use of DFT and DCT in a running window (SPIE, Denver, 1996), pp. 1–13.
41. H Yu, L Zhao, H Wang, Image denoising using trivariate shrinkage filter in the wavelet domain and joint bilateral filter in the spatial domain. IEEE Trans. Image Process. 18(10), 2364–2369 (2009).
42. M Zhang, BK Gunturk, Multiresolution bilateral filtering for image denoising. IEEE Trans. Image Process. 17(12), 2324–2333 (2008).
43. H Zhang, J Yang, J Xie, J Qian, B Zhang, Weighted sparse coding regularized nonconvex matrix regression for robust face recognition. Inf. Sci. 394, 1–17 (2017).
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.