Image denoising in undecimated dual-tree complex wavelet domain using multivariate t-distribution


Denoising of natural images is a fundamental problem in image processing. This paper proposes a new image denoising algorithm based on the maximum a posteriori (MAP) estimator in the undecimated dual-tree complex wavelet transform (UDT-CWT) domain. The UDT-CWT retains the directional selectivity of the dual-tree complex wavelet transform (DT-CWT) and additionally offers exact translation invariance, obtained by removing the down-sampling of the filter outputs and up-sampling the complex filter pairs of the DT-CWT. Both properties are important for image denoising. The performance of the MAP estimator depends strongly on the prior probability model of the noise-free wavelet coefficients. In the proposed method, a multivariate t-distribution is used as this prior. The t-distribution accurately models the peaky, heavy-tailed statistics of wavelet coefficients, and its multivariate form captures the dependencies between wavelet coefficients and their neighbors. In addition, the parameters of the multivariate distribution are estimated in a locally adaptive way, exploiting the correlations among the amplitudes of neighboring coefficients to improve the denoising results. Simulation results show that the proposed algorithm outperforms state-of-the-art denoising algorithms from the literature.




References

1. Achim A, Bezerianos A, Tsakalides P (2001) Novel Bayesian multiscale method for speckle removal in medical ultrasound images. IEEE Trans Med Imaging 20(8):772–783

2. Achim A, Herranz D, Kuruoglu EE (2004) Astrophysical image denoising using bivariate isotropic Cauchy distributions in the undecimated wavelet domain. In: 2004 International Conference on Image Processing (ICIP'04), pp 1225–1228. IEEE

3. Bartholomew DJ, Knott M, Moustaki I (2011) Latent variable models and factor analysis: a unified approach, vol 904. John Wiley & Sons

4. Basso RM, Lachos VH, Cabral CRB, Ghosh P (2010) Robust mixture modeling based on scale mixtures of skew-normal distributions. Comput Stat Data Anal 54(12):2926–2941

5. Böhning D (2000) Computer-assisted analysis of mixtures and applications. Taylor & Francis

6. Gentle JE (2007) Matrix algebra: theory, computations, and applications in statistics. Springer, New York

7. Chang SG, Yu B, Vetterli M (2000) Spatially adaptive wavelet thresholding with context modeling for image denoising. IEEE Trans Image Process 9(9):1522–1531

8. Chen G, Zhu W-P, Xie W (2012) Wavelet-based image denoising using three scales of dependency. IET Image Process 6(6):756–760

9. Crouse M, Nowak RD, Baraniuk RG (1998) Wavelet-based statistical signal processing using hidden Markov models. IEEE Trans Signal Process 46(4):886–902

10. Cui L, Wang Z, Cen Y, Li X, Sun J (2014) An extension of the interscale SURE-LET approach for image denoising. Int J Adv Robot Syst 11(2):9

11. Deledalle C-A, Duval V, Salmon J (2012) Non-local methods with shape-adaptive patches (NLM-SAP). J Math Imaging Vis 43(2):103–120

12. Donoho DL, Johnstone IM (1995) Adapting to unknown smoothness via wavelet shrinkage. J Am Stat Assoc 90(432):1200–1224

13. Fadili JM, Boubchir L (2005) Analytical form for a Bayesian wavelet estimator of images using the Bessel K form densities. IEEE Trans Image Process 14(2):231–240

14. Fowler JE (2005) The redundant discrete wavelet transform and additive noise. IEEE Signal Process Lett 12(9):629–632

15. Gai S, Luo L (2015) Image denoising using normal inverse Gaussian model in quaternion wavelet domain. Multimed Tools Appl 74(3):1107–1124

16. Hill PR, Canagarajah CN, Bull DR (2002) Image fusion using complex wavelets. In: British Machine Vision Conference (BMVC 2002), pp 1–10

17. Hill P, Achim A, Bull D (2012) The undecimated dual tree complex wavelet transform and its application to bivariate image denoising using a Cauchy model. In: 2012 19th IEEE International Conference on Image Processing, pp 1205–1208. IEEE

18. Hill PR, Achim AM, Bull DR, Al-Mualla ME (2014) Dual-tree complex wavelet coefficient magnitude modelling using the bivariate Cauchy–Rayleigh distribution for image denoising. Signal Process 105:464–472

19. Hill PR, Anantrasirichai N, Achim A, Al-Mualla ME, Bull DR (2015) Undecimated dual-tree complex wavelet transforms. Signal Process Image Commun 35:61–70

20. Huynh-Thu Q, Ghanbari M (2008) Scope of validity of PSNR in image/video quality assessment. Electron Lett 44(13):800–801

21. Kaur S, Singh N (2014) Image denoising techniques: a review. International Journal of Innovative Research in Computer and Communication Engineering 2(6)

22. Khmag A, Al Haddad SAR, Ramlee RA, Kamarudin N, Malallah FL (2018) Natural image noise removal using non-local means and hidden Markov models in stationary wavelet transform domain. Multimed Tools Appl 77(15):20065–20086

23. Kotz S, Nadarajah S (2004) Multivariate t-distributions and their applications. Cambridge University Press

24. Lasmar N-E, Berthoumieu Y (2014) Gaussian copula multivariate modeling for texture image retrieval using wavelet transforms. IEEE Trans Image Process 23(5):2246–2261

25. Liang M, Du J, Liu H (2014) Self-adaptive spatial image denoising model based on scale correlation and SURE-LET in the nonsubsampled contourlet transform domain. Sci China Inf Sci 57(9):1–15

26. Lin P-E (1972) Some characterizations of the multivariate t distribution. J Multivar Anal 2(3):339–344

27. Luisier F, Blu T, Unser M (2007) A new SURE approach to image denoising: interscale orthonormal wavelet thresholding. IEEE Trans Image Process 16(3):593–606

28. Mihcak MK, Kozintsev I, Ramchandran K, Moulin P (1999) Low-complexity image denoising based on statistical modeling of wavelet coefficients. IEEE Signal Process Lett 6(12):300–303

29. Min D, Jiuwen Z, Yide M (2015) Image denoising via bivariate shrinkage function based on a new structure of dual contourlet transform. Signal Process 109:25–37

30. Naimi H, Adamou-Mitiche ABH, Mitiche L (2015) Medical image denoising using dual tree complex thresholding wavelet transform and Wiener filter. Journal of King Saud University - Computer and Information Sciences 27(1):40–45

31. Om H, Biswas M (2014) MMSE based MAP estimation for image denoising. Opt Laser Technol 57:252–264

32. Om H, Biswas M (2015) A generalized image denoising method using neighbouring wavelet coefficients. SIViP 9(1):191–200

33. Pizurica A, Philips W (2006) Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising. IEEE Trans Image Process 15(3):654–665

34. Po D-Y, Do MN (2006) Directional multiscale modeling of images using the contourlet transform. IEEE Trans Image Process 15(6):1610–1620

35. Portilla J, Strela V, Wainwright MJ, Simoncelli EP (2003) Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans Image Process 12(11)

36. Rabbani H, Vafadust M (2006) Wavelet based image denoising based on a mixture of Laplace distributions

37. Rabbani H, Vafadust M (2008) Image/video denoising based on a mixture of Laplace distributions with local parameters in multidimensional complex wavelet domain. Signal Process 88(1):158–173

38. Rabbani H, Vafadust M, Gazor S, Selesnick I (2006) Image denoising employing a bivariate Cauchy distribution with local variance in complex wavelet domain. In: 2006 IEEE 12th Digital Signal Processing Workshop & 4th IEEE Signal Processing Education Workshop, pp 203–208. IEEE

39. Robert C, Casella G (2013) Monte Carlo statistical methods. Springer Science & Business Media

40. Sadreazami H, Ahmad MO, Swamy M (2016) A study on image denoising in contourlet domain using the alpha-stable family of distributions. Signal Process 128:459–473

41. Saeedzarandi M, Nezamabadi-pour H, Jamalizadeh A (2019) Dual-tree complex wavelet coefficient magnitude modeling using scale mixtures of Rayleigh distribution for image denoising. Circuits Syst Signal Process:1–26

42. Selesnick IW, Baraniuk RG, Kingsbury NG (2005) The dual-tree complex wavelet transform. IEEE Signal Process Mag 22(6):123–151

43. Sendur L, Selesnick IW (2002) Bivariate shrinkage with local variance estimation. IEEE Signal Process Lett 9(12):438–441

44. Simoncelli EP (1999) Bayesian denoising of visual images in the wavelet domain. In: Bayesian Inference in Wavelet-Based Models, pp 291–308. Springer

45. Su C-C, Cormack LK, Bovik AC (2014) Closed-form correlation model of oriented bandpass natural images. IEEE Signal Process Lett 22(1):21–25

46. Sutour C, Deledalle C-A, Aujol J-F (2014) Adaptive regularization of the NL-means: application to image and video denoising. IEEE Trans Image Process 23(8):3506–3521

47. Tan S, Jiao L (2007) Multivariate statistical models for image denoising in the wavelet domain. Int J Comput Vis 75(2):209–230

48. Wang J, Taaffe MR (2015) Multivariate mixtures of normal distributions: properties, random vector generation, fitting, and as models of market daily changes. INFORMS J Comput 27(2):193–203

49. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

50. Wang X-Y, Zhao L, Niu P-P, Fu Z-K (2011) Image denoising using Gaussian scale mixtures with Gaussian–Hermite PDF in steerable pyramid domain. J Math Imaging Vis 39(3):245–258

51. Wang J, Wu J, Wu Z, Jeong J, Jeon G (2017) Wiener filter-based wavelet domain denoising. Displays 46:37–41

52. Wang X, Song R, Song C, Tao J (2018) The NSCT-HMT model of remote sensing image based on Gaussian-Cauchy mixture distribution. IEEE Access 6:66007–66019

53. Yan C, Zhang K, Qi Y (2015) Image denoising using modified nonsubsampled contourlet transform combined with Gaussian scale mixtures model. In: International Conference on Intelligent Science and Big Data Engineering, pp 196–207. Springer

54. Zeng W, Fu X, Hu C, Du Y (2018) Wavelet denoising with generalized bivariate prior model. Multimed Tools Appl 77(16):20863–20887

55. Zhang L, Dong W, Zhang D, Shi G (2010) Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recogn 43(4):1531–1549


Author information

Corresponding author: Mansoore Saeedzarandi.


Appendix 1


From eq. (15), we have:

$$ \begin{aligned} \hat{\boldsymbol{x}} &= \arg\max_{\boldsymbol{x}}\; \ln p\left(\boldsymbol{x},u\,|\,\boldsymbol{y}\right) = \arg\max_{\boldsymbol{x}}\; \ln \frac{p\left(\boldsymbol{y}\,|\,\boldsymbol{x},u\right)p_X\left(\boldsymbol{x},u\right)}{p_Y\left(\boldsymbol{y}\right)} \\ &= \arg\max_{\boldsymbol{x}}\; \ln \frac{p\left(\boldsymbol{y}\,|\,\boldsymbol{x},u\right)p_X\left(\boldsymbol{x}\,|\,u\right)p_U\left(u\right)}{p_Y\left(\boldsymbol{y}\right)} \\ &= \arg\max_{\boldsymbol{x}}\; \left( \ln p\left(\boldsymbol{y}\,|\,\boldsymbol{x},u\right) + \ln p_X\left(\boldsymbol{x}\,|\,u\right) + \ln p_U\left(u\right) - \ln p_Y\left(\boldsymbol{y}\right) \right). \end{aligned} $$

Since the probability density functions of y and u do not depend on x, it can be concluded that:

$$ \hat{\boldsymbol{x}} \propto \arg\max_{\boldsymbol{x}}\; \left( \ln p\left(\boldsymbol{y}\,|\,\boldsymbol{x},u\right) + \ln p_X\left(\boldsymbol{x}\,|\,u\right) \right). $$

From eq. (10), we can easily conclude that

$$ p\left(\boldsymbol{y}\,|\,\boldsymbol{x},u\right) = p\left(\boldsymbol{y}\,|\,\boldsymbol{x}\right) = \varphi_p\left(\boldsymbol{y};\boldsymbol{x},\boldsymbol{\Sigma}_n\right) = \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}_n\right|^{1/2}} \exp\left( -\frac{\left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right)}{2} \right). $$
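As a quick numerical sanity check of this Gaussian likelihood (a sketch of my own, not part of the paper's algorithm; `mvn_pdf` is a hypothetical helper name), the density can be evaluated directly from the formula and compared against its normalizing constant at y = x, where the exponent vanishes:

```python
import numpy as np

def mvn_pdf(y, x, cov):
    """Multivariate normal density phi_p(y; x, cov), evaluated from the formula above."""
    p = len(y)
    d = y - x
    norm = (2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm)

# At y == x the quadratic form is zero, so the density equals
# the normalizing constant 1 / ((2*pi)^(p/2) * |cov|^(1/2)).
cov_n = np.array([[2.0, 0.5], [0.5, 1.0]])   # example noise covariance Sigma_n
x = np.zeros(2)
val = mvn_pdf(x, x, cov_n)
expected = 1.0 / ((2 * np.pi) ** 1.0 * np.sqrt(np.linalg.det(cov_n)))
assert abs(val - expected) < 1e-12
```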

Therefore, we obtain:

$$ \hat{\boldsymbol{x}} \propto \arg\max_{\boldsymbol{x}}\; \left( \ln \varphi_p\left(\boldsymbol{y};\boldsymbol{x},\boldsymbol{\Sigma}_n\right) + \ln p_X\left(\boldsymbol{x}\,|\,u\right) \right). $$

On the other hand, from eq. (3), we can conclude that:

$$ p\left(\boldsymbol{x}\,|\,u\right) = \varphi_p\left(\boldsymbol{x};\mathbf{0},\boldsymbol{\Sigma}u^{-1}\right) = \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}u^{-1}\right|^{1/2}} \exp\left( -\frac{u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x}}{2} \right). $$

From (37) and (38), we obtain:

$$ \hat{\boldsymbol{x}} \propto \arg\max_{\boldsymbol{x}}\; \left( \ln \varphi_p\left(\boldsymbol{y};\boldsymbol{x},\boldsymbol{\Sigma}_n\right) + \ln \varphi_p\left(\boldsymbol{x};\mathbf{0},\boldsymbol{\Sigma}u^{-1}\right) \right) $$

and from eqs. (36) and (38) we conclude that:

$$ \varphi_p\left(\boldsymbol{y};\boldsymbol{x},\boldsymbol{\Sigma}_n\right) = \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}_n\right|^{1/2}} \exp\left( -\frac{\left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right)}{2} \right), $$
$$ \varphi_p\left(\boldsymbol{x};\mathbf{0},\boldsymbol{\Sigma}u^{-1}\right) = \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}u^{-1}\right|^{1/2}} \exp\left( -\frac{u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x}}{2} \right). $$

By substituting eqs. (40) and (41) into eq. (39), we have:

$$ \hat{\boldsymbol{x}} \propto \arg\max_{\boldsymbol{x}}\; \left( \ln \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}_n\right|^{1/2}} - \frac{\left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right)}{2} + \ln \frac{1}{\left(2\pi\right)^{p/2}\left|\boldsymbol{\Sigma}u^{-1}\right|^{1/2}} - \frac{u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x}}{2} \right), $$

Because the first and third terms do not depend on x, we can conclude that:

$$ \hat{\boldsymbol{x}} \propto \arg\max_{\boldsymbol{x}}\; \left( -\frac{\left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right)}{2} - \frac{u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x}}{2} \right) $$
$$ \hat{\boldsymbol{x}} \propto \arg\min_{\boldsymbol{x}}\; \left( \frac{\left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right)}{2} + \frac{u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x}}{2} \right) $$
$$ \hat{\boldsymbol{x}} \propto \arg\min_{\boldsymbol{x}}\; \left( \left(\boldsymbol{y}-\boldsymbol{x}\right)^T \boldsymbol{\Sigma}_n^{-1} \left(\boldsymbol{y}-\boldsymbol{x}\right) + u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x} \right) $$
$$ \hat{\boldsymbol{x}} \propto \arg\min_{\boldsymbol{x}}\; \left( \boldsymbol{y}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{y} - \boldsymbol{y}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{x} - \boldsymbol{x}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{y} + \boldsymbol{x}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{x} + u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x} \right) $$
$$ \hat{\boldsymbol{x}} \propto \arg\min_{\boldsymbol{x}}\; \left( \boldsymbol{y}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{y} - 2\boldsymbol{y}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{x} + \boldsymbol{x}^T \boldsymbol{\Sigma}_n^{-1} \boldsymbol{x} + u\,\boldsymbol{x}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{x} \right) $$

Now, differentiating (43) with respect to x and using the following derivative rules:

$$ \frac{\partial\,\boldsymbol{b}^T\boldsymbol{\theta}}{\partial\boldsymbol{\theta}} = \boldsymbol{b}, $$
$$ \frac{\partial\,\boldsymbol{\theta}^T\boldsymbol{B}\boldsymbol{\theta}}{\partial\boldsymbol{\theta}} = 2\boldsymbol{B}\boldsymbol{\theta}, $$

where b and θ are p × 1 real vectors and B is a p × p symmetric matrix [6], we have:

$$ -2\boldsymbol{\Sigma}_n^{-1}\boldsymbol{y} + 2\boldsymbol{\Sigma}_n^{-1}\hat{\boldsymbol{x}} + 2u\boldsymbol{\Sigma}^{-1}\hat{\boldsymbol{x}} = \boldsymbol{0}, $$

thus, we can compute \( \hat{\boldsymbol{x}} \) as follows:

$$ \begin{aligned} \hat{\boldsymbol{x}} &= \left(\boldsymbol{\Sigma}_n^{-1} + u\boldsymbol{\Sigma}^{-1}\right)^{-1}\boldsymbol{\Sigma}_n^{-1}\boldsymbol{y} = \left(\boldsymbol{\Sigma}_n^{-1}\left(\mathbf{I} + u\boldsymbol{\Sigma}_n\boldsymbol{\Sigma}^{-1}\right)\right)^{-1}\boldsymbol{\Sigma}_n^{-1}\boldsymbol{y} = \left(\mathbf{I} + u\boldsymbol{\Sigma}_n\boldsymbol{\Sigma}^{-1}\right)^{-1}\boldsymbol{y} \\ &= \left(\boldsymbol{\Sigma}\boldsymbol{\Sigma}^{-1} + u\boldsymbol{\Sigma}_n\boldsymbol{\Sigma}^{-1}\right)^{-1}\boldsymbol{y} = \left(\left(\boldsymbol{\Sigma} + u\boldsymbol{\Sigma}_n\right)\boldsymbol{\Sigma}^{-1}\right)^{-1}\boldsymbol{y} = \boldsymbol{\Sigma}\left(\boldsymbol{\Sigma} + u\boldsymbol{\Sigma}_n\right)^{-1}\boldsymbol{y}. \end{aligned} $$
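The chain of equalities above can be checked numerically. The following NumPy sketch (my own illustration under assumed random covariances, not the paper's implementation) verifies that the final Wiener-like form equals the first form, and that it zeroes the gradient of the quadratic objective:

```python
import numpy as np

rng = np.random.default_rng(0)
p, u = 3, 0.7                                   # dimension and a fixed mixing value

def random_spd(p):
    """Random symmetric positive-definite covariance matrix."""
    A = rng.standard_normal((p, p))
    return A @ A.T + p * np.eye(p)

Sigma = random_spd(p)                            # prior covariance of x (assumed)
Sigma_n = random_spd(p)                          # noise covariance (assumed)
y = rng.standard_normal(p)                       # noisy coefficient vector

# Final closed form: x_hat = Sigma (Sigma + u Sigma_n)^{-1} y
x_hat = Sigma @ np.linalg.solve(Sigma + u * Sigma_n, y)

# It must agree with the first form (Sigma_n^{-1} + u Sigma^{-1})^{-1} Sigma_n^{-1} y ...
first_form = np.linalg.inv(np.linalg.inv(Sigma_n) + u * np.linalg.inv(Sigma)) \
             @ np.linalg.solve(Sigma_n, y)
assert np.allclose(x_hat, first_form)

# ... and must zero the gradient -2 Sigma_n^{-1}(y - x) + 2 u Sigma^{-1} x
# of the quadratic objective being minimized.
grad = -2 * np.linalg.solve(Sigma_n, y - x_hat) + 2 * u * np.linalg.solve(Sigma, x_hat)
assert np.allclose(grad, 0, atol=1e-8)
```

Note that for u = 0 (no prior weight) the estimator reduces to x_hat = y, and large u shrinks x_hat toward zero, which matches the Wiener-filter interpretation of the closed form.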


About this article


Cite this article

Saeedzarandi, M., Nezamabadi-pour, H., Saryazdi, S. et al. Image denoising in undecimated dual-tree complex wavelet domain using multivariate t-distribution. Multimed Tools Appl 79, 22447–22471 (2020).

Keywords


  • Image denoising
  • MAP estimator
  • Undecimated dual-tree complex wavelet transform
  • Heavy-tail characteristic
  • Multivariate t-distribution