Abstract
In the usual non-local variational models, such as non-local total variation, the image is regularized by minimizing an energy term that penalizes the gray-level discrepancy between some specified pairs of pixels; a weight value is computed between these two pixels to penalize their dissimilarity. In this paper, we impose some regularity on those weight values. More precisely, we minimize a function involving a regularization term, analogous to an \(H^1\) term, on the weights. In this way, the finite differences defining the image regularity depend on their environment. When the weights are difficult to define, they can be restored by the proposed stable regularization scheme. We provide all the details necessary for the implementation of a PALM algorithm with proven convergence. We illustrate the ability of the model to restore relevant unknown edges from neighboring edges on an image inpainting problem. We also show, on inpainting, zooming and denoising problems, that the model better recovers thin structures.
Notes
We use \(r=5\) in the experiments.
In the experiments, we consider 4-connectivity: \({{\mathcal {N}}}=\{(1,0),(0,1)\}\).
In the experiments, we use \(K=90\).
Notice that similar properties have been observed in [18] for non-local means: small details such as thin lines can fade away when large patches are used.
We are aware that designing a good stopping criterion would save computation time. However, since the paper presents a new model, we preferred to spend extra computational time in order to obtain results that truly reflect the behavior of this model.
Note that, if all are in the missing domain, then the algorithm assigns weights uniformly equal to \(1/|{{\mathcal {B}}}|\).
Again, we simply take a very large number of iterations, for which we have convergence.
Again, we simply take a very large number of iterations, for which we have convergence.
Notice that a similar upper bound is provided in [10] for the usual finite differences.
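Several of the notes above involve weights constrained to be non-negative and to sum to one (e.g., the uniform assignment \(1/|{{\mathcal {B}}}|\)), i.e., weights on the probability simplex. The Euclidean projection onto the simplex (see [30]) admits a simple sort-based routine; below is a minimal sketch, offered as an illustration rather than the implementation used in the paper.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto {w : w >= 0, sum(w) = 1} (sort-based variant)."""
    u = np.sort(y)[::-1]                               # entries in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, y.size + 1) > css - 1)[0][-1]
    tau = (css[rho] - 1) / (rho + 1)                   # shift that enforces sum = 1
    return np.maximum(y - tau, 0.0)
```

For instance, `project_simplex(np.array([0.5, 0.5, 0.5]))` returns the uniform vector `[1/3, 1/3, 1/3]`, and a point already on the simplex is left unchanged.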
References
Arias, P., Caselles, V., Sapiro, G.: A variational framework for non-local image inpainting. In: Cremers, D., Boykov, Y., Blake, A., Schmidt, F.R. (eds.) Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 345–358. Springer, Berlin (2009)
Arias, P., Facciolo, G., Caselles, V., Sapiro, G.: A variational framework for exemplar-based image inpainting. Int. J. Comput. Vis. 93(3), 319–347 (2011)
Aujol, J.F., Dossal, C.: Stability of over-relaxations for the forward–backward algorithm, application to FISTA. SIAM J. Optim. 25(4), 2408–2433 (2015)
Aujol, J.F., Ladjal, S., Masnou, S.: Exemplar-based inpainting from a variational point of view. SIAM J. Math. Anal. 42(3), 1246–1285 (2010)
Ballester, C., Bertalmio, M., Caselles, V., Sapiro, G., Verdera, J.: Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans. Image Process. 10(8), 1200–1211 (2001)
Bauschke, H.H., Burachik, R., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H.: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, vol. 49. Springer Science & Business Media, New York (2011)
Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. Ser. A 146(1–2), 459–494 (2014)
Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)
Burger, M., He, L., Schönlieb, C.B.: Cahn–Hilliard inpainting and a generalization for grayvalue images. SIAM J. Imaging Sci. 2(4), 1129–1167 (2009)
Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)
Chan, T.F., Shen, J.: Nontexture inpainting by curvature-driven diffusions. J. Vis. Commun. Image Represent. 12(4), 436–449 (2001)
Chen, A., Bertozzi, A.L., Ashby, P.D., Getreuer, P., Lou, Y.: Enhancement and recovery in atomic force microscopy images. In: Andrews, T.D. (ed.) Excursions in Harmonic Analysis, vol. 2, pp. 311–332. Springer, Berlin (2013)
Chen, C., Chan, R.H., Ma, S., Yang, J.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)
Cho, M., Mishra, K.V., Cai, J.F., Xu, W.: Block iterative reweighted algorithms for super-resolution of spectrally sparse signals. IEEE Signal Process. Lett. 22(12), 2319–2323 (2015)
Chouzenoux, E., Pesquet, J.C., Repetti, A.: A block coordinate variable metric forward–backward algorithm. J. Glob. Optim. 66, 1–29 (2016)
Condat, L.: Fast projection onto the simplex and the \(\ell _{1}\) ball. Math. Program. 158(1), 575–585 (2016)
Deledalle, C.A., Duval, V., Salmon, J.: Non-local methods with shape-adaptive patches (NLM-SAP). J. Math. Imaging Vis. 43(2), 103–120 (2012)
Duval, V., Aujol, J.F., Gousseau, Y.: On the parameter choice for the non-local means. CMLA Preprint (2010)
Elmoataz, A., Lezoray, O., Bougleux, S.: Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing. IEEE Trans. Image Process. 17(7), 1047–1060 (2008)
Facciolo, G., Arias, P., Caselles, V., Sapiro, G.: Exemplar-based interpolation of sparsely sampled images. In: Cremers, D., Boykov, Y., Blake, A., Schmidt, F.R. (eds.) Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 331–344. Springer, Berlin (2009)
Fedorov, V., Facciolo, G., Arias, P.: Variational framework for non-local inpainting. Image Process. On Line 5, 362–386 (2015)
Garcia, D.: Robust smoothing of gridded data in one and higher dimensions with missing values. Comput. Stat. Data Anal. 54(4), 1167–1178 (2010)
Gilboa, G., Osher, S.: Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Simul. 6(2), 595–630 (2007)
Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008)
Jung, M., Vese, L.A.: Nonlocal variational image deblurring models in the presence of Gaussian or impulse noise. In: Tai, XC., Mørken, K., Lysaker, M., Lie, K.A. (eds.) Scale Space and Variational Methods in Computer Vision, pp. 401–412. Springer, Berlin (2009)
Kanizsa, G.: Organization in Vision: Essays on Gestalt perception. Praeger, Westport (1979)
Kheradmand, A., Milanfar, P.: A general framework for regularized, similarity-based image restoration. IEEE Trans. Image Process. 23(12), 5136–5151 (2014)
Lebrun, M., Buades, A., Morel, J.M.: Implementation of the “Non-Local Bayes” (NL-Bayes) image denoising algorithm. Image Process. On Line 3, 1–42 (2013)
Lebrun, M., Buades, A., Morel, J.M.: A nonlocal Bayesian image denoising algorithm. SIAM J. Imaging Sci. 6(3), 1665–1688 (2013)
Levina, E., Bickel, P.J.: Texture synthesis and nonparametric resampling of random fields. Ann. Stat. 34, 1751–1773 (2006)
Lou, Y., Zhang, X., Osher, S., Bertozzi, A.: Image recovery via nonlocal operators. J. Sci. Comput. 42(2), 185–197 (2010)
Nitzberg, M., Mumford, D., Shiota, T.: Filtering, Segmentation and Depth. Lecture Notes in Computer Science, vol. 662. Springer, Berlin (1993)
Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. Ser. A 103(1), 127–152 (2005)
Ouyang, Y., Chen, Y., Lan, G., Pasiliao Jr., E.: An accelerated linearized alternating direction method of multipliers. SIAM J. Imaging Sci. 8(1), 644–681 (2015)
Peyré, G.: Sparse modeling of textures. J. Math. Imaging Vis. 34(1), 17–31 (2009)
Peyré, G., Bougleux, S., Cohen, L.: Non-local regularization of inverse problems. In: The 10th European Conference on Computer Vision, pp. 57–68. Springer (2008)
Peyré, G., Bougleux, S., Cohen, L.D.: Non-local regularization of inverse problems. Inverse Probl. Imaging 5(2), 511–530 (2011)
Pizarro, L., Mrázek, P., Didas, S., Grewenig, S., Weickert, J.: Generalised nonlocal image smoothing. Int. J. Comput. Vis. 90(1), 62–87 (2010)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1), 259–268 (1992)
Schönlieb, C.B., Bertozzi, A.: Unconditionally stable schemes for higher order inpainting. Commun. Math. Sci. 9(2), 413–457 (2011)
Shen, J., Chan, T.F.: Mathematical models for local nontexture inpaintings. SIAM J. Appl. Math. 62(3), 1019–1043 (2002)
Talebi, H., Milanfar, P.: Global image denoising. IEEE Trans. Image Process. 23(2), 755–768 (2014)
Wang, G., Garcia, D., Liu, Y., De Jeu, R., Dolman, A.J.: A three-dimensional gap filling method for large geophysical datasets: application to global satellite soil moisture observations. Environ. Model. Softw. 30, 139–142 (2012)
Wei, L.Y., Levoy, M.: Fast texture synthesis using tree-structured vector quantization. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 479–488. ACM Press/Addison-Wesley Publishing Co. (2000)
Weiss, P., Blanc-Feraud, L., Aubert, G.: Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 31(3), 2047–2080 (2009)
Weissman, T., Ordentlich, E., Seroussi, G., Verdú, S., Weinberger, M.J.: Universal discrete denoising: known channel. IEEE Trans. Inf. Theory 51(1), 5–28 (2005)
Yaroslavsky, L.P.: Digital Picture Processing: An Introduction. Springer, Berlin (1985)
Zhang, X., Burger, M., Bresson, X., Osher, S.: Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)
Acknowledgements
François Malgouyres would like to thank Julien Rabin for fruitful discussions on the subject and for teaching him how to efficiently perform the projection onto the simplex, and would also like to thank Prof. Gabriel Peyré for providing his code.
ZL is supported in part by the NSF Grant DMS-1621798.
TZ is partially supported by NSFC 11271049, RGC 12302714 and RFGs of HKBU.
Appendices
1.1 TV Under a Max Form
We want to prove that for any \(u\in {\mathbb {R}}^{{\mathcal {P}}}\) and any (fixed) \(v\in {{{\mathcal {U}}}^{{\mathcal {P}}}}\)
where we remind that, for \(w\in {\mathbb {R}}^{{{\mathcal {P}}}\times {{\mathcal {B}}}}\), the norm defining the constraint takes the form
First, notice that if for all \(p\in {{\mathcal {P}}}\) we know \(w^*_p\in {\mathbb {R}}^{{\mathcal {B}}}\) such that
where we denote \(\mathbf {D}_vu_p = (\mathbf {D}_vu_{p,q})_{q\in {{\mathcal {B}}}} \in {\mathbb {R}}^{{\mathcal {B}}}\); we can deduce from the optimality of all its components \(w^*_p\) that \(w^* = (w^*_p)_{p\in {{\mathcal {P}}}}\in {\mathbb {R}}^{{{\mathcal {P}}}\times {{\mathcal {B}}}}\) satisfies
In order to calculate \(w^*_p\), for a given \(p\in {{\mathcal {P}}}\), we first remark that there exists \(\alpha _p^*\ge 0\) such that
Then, if we use this expression in (53) we have that
We finally obtain that
This corresponds to the expression of \(w^*(u)_p\) in Proposition 1.
If we now use the expression for \(w^*_p\) to calculate the objective function, we find that
As a consequence,
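The mechanism of this appendix can be checked numerically: a linear form \(w\mapsto \langle w, d\rangle \) maximized over the simplex attains its supremum at a vertex, i.e., equals \(\max _q d_q\), which is how the max form recovers the (non-local) total variation. Below is a small sketch assuming, for illustration, that the constraint set is the standard simplex; the exact norm ball used in the paper appears in the displayed equations not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.abs(rng.standard_normal(8))        # plays the role of (|D_v u_{p,q}|)_{q in B}

# sample many feasible weight vectors on the simplex
w = rng.random((50000, 8))
w /= w.sum(axis=1, keepdims=True)
sampled_max = (w * d).sum(axis=1).max()

# the supremum of <w, d> over the simplex equals max_q d_q (attained at a vertex)
assert sampled_max <= d.max() + 1e-12
```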
1.2 Proof of Proposition 1
Let us first recall, adapted to the context of this paper, Theorem 1 stated in [33]. This result considers finite-dimensional real vector spaces \(V_1\) and \(V_2\), a linear operator \(A:V_1\rightarrow V_2\), a parameter \(\mu >0\) and a function
where \(Q_2\subset V_2\) is a closed convex bounded set.
It is stated and proved in [33] that the function \(f_\mu \) is continuously differentiable at any \(x\in V_1\). Moreover, if we denote by \(u_\mu (x)\) the unique solution of the maximization problem defining \(f_\mu \), we have:
where \(A^*\) is the adjoint of A. Moreover, \(x\longmapsto \nabla f_\mu (x)\) is Lipschitz continuous with constant
where
Proposition 1 is a straightforward application of this statement to the function
In order to compute the Lipschitz constant, we still need to find a bound on \(\Vert \mathbf {D}_v\Vert _{{\mathbb {R}}^{{\mathcal {P}}}\rightarrow {\mathbb {R}}^{{{\mathcal {P}}}\times {{\mathcal {B}}}}}\). To do so, we consider \(u\in {\mathbb {R}}^{{\mathcal {P}}}\). We have for any \(v\in {{\mathcal {U}}}^{{\mathcal {P}}}\)
where \(|{{\mathcal {B}}}|\) denotes the cardinality of \({{\mathcal {B}}}\). We then deduce that for any \(v\in {{\mathcal {U}}}^{{\mathcal {P}}}\)
Finally, it is standard that (32) is a consequence of the fact that \(l''=\frac{\sqrt{2} \sqrt{|{{\mathcal {B}}}| +1}}{\mu }\) is a Lipschitz bound for \(u\longmapsto \nabla _u TV(v,u)\).
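In one dimension, the smoothing recalled above reduces the absolute value to the Huber function, whose gradient is Lipschitz with constant \(1/\mu \); composing with the bound on \(\Vert \mathbf {D}_v\Vert \) yields constants of the form of \(l''\). Below is a minimal scalar sketch; the function `f_mu` is an illustrative instance of the smoothing, not the paper's \(TV(v,\cdot )\).

```python
import numpy as np

mu = 0.1

def f_mu(x):
    # Nesterov smoothing of |x|: max_{|u|<=1} (u*x - (mu/2)*u**2), i.e. the Huber function
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

def grad_f_mu(x):
    # the maximizer u_mu(x), clipped to [-1, 1]
    return np.clip(x / mu, -1.0, 1.0)

# the gradient is Lipschitz with constant 1/mu: check finite-difference slopes on a grid
x = np.linspace(-1.0, 1.0, 10001)
slopes = np.abs(np.diff(grad_f_mu(x)) / np.diff(x))
assert slopes.max() <= 1 / mu + 1e-6
```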
1.3 Proof of Proposition 2
Considering \(v\in {{\mathcal {U}}}^{{\mathcal {P}}}, u \in {\mathbb {R}}^{{\mathcal {P}}}\) and a small variation \(h\in {\mathbb {R}}^{{{\mathcal {P}}}\times {{\mathcal {B}}}}\), we denote \({{\mathcal {P}}}_1= \{p\in {{\mathcal {P}}}| (\mathbf {A}_uv)_p \ge \frac{\mu }{2} \}\) and \({{\mathcal {P}}}_2 = {{\mathcal {P}}}\setminus {{\mathcal {P}}}_1\). For h small enough, we have
Moreover,
Denoting for all \(p\in {{\mathcal {P}}}_1\)
and for all \(p\in {{\mathcal {P}}}_2\)
and using the simple closed-form expression for the derivative \(\varPsi '_\mu \), we get for all \(p\in {{\mathcal {P}}}\)
Using this notation in the previous calculations we obtain that
We finally conclude that
This proves the first part of Proposition 2.
Let us now show that \(v\longmapsto TV(v,u)\) is concave over \({\mathbb {R}}_+^{{{\mathcal {P}}}\times {{\mathcal {B}}}}\). To do so, we rewrite the latter formula in the form \((u^*(v))_p = \varphi ((\mathbf {A}_uv)_p)\), where the function \(\varphi \) is defined for all \(t\ge 0\) by
Notice that function \(\varphi \) is non-increasing and therefore
We now consider v and \(v'\in {\mathbb {R}}_+^{{{\mathcal {P}}}\times {{\mathcal {B}}}}\) and \(u\in {\mathbb {R}}^{{\mathcal {P}}}\). Using Taylor’s theorem, we know there exists \(t\in [0,1]\) such that \(v''=tv'+(1-t)v\) satisfies
However, for any \(p\in {{\mathcal {P}}}, \mathbf {A}_uv''_p - \mathbf {A}_uv_p = t (\mathbf {A}_uv'_p- \mathbf {A}_uv_p) \) and \(\mathbf {A}_uv''_p - \mathbf {A}_uv_p\) and \(\mathbf {A}_uv'_p- \mathbf {A}_uv_p\) have the same sign. Using (74), we then get
and finally
This concludes the proof.
1.4 Proof of Proposition 3
For any \(v\in {{{\mathcal {U}}}^{{\mathcal {P}}}}\), given the expression
we immediately have
We only need to calculate the Lipschitz constant \(l'\) provided in Proposition 3. In order to do so, we consider v and \(v'\in {{{\mathcal {U}}}^{{\mathcal {P}}}}\) and denote \(w=v'-v\). We have
Moreover, using the formula for \(B^*\), we get
The term inside the absolute value can be rewritten, using the definition of B, under the form
Therefore,
Finally,
We then conclude that \(v\longmapsto \nabla R(v)\) is Lipschitz with constant \(6\sqrt{2} \gamma |{{\mathcal {N}}}|\). This completes the proof.
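Bounds of this kind reflect a general fact: for a quadratic regularizer \(R(v)=\frac{\gamma }{2}\Vert Bv\Vert ^2\), the gradient \(\nabla R(v)=\gamma B^*Bv\) is Lipschitz with constant \(\gamma \Vert B\Vert ^2\), of which \(6\sqrt{2} \gamma |{{\mathcal {N}}}|\) is an explicit upper bound. The operator norm can also be estimated numerically. Below is a sketch assuming, for illustration only, that B is the plain forward-difference operator on a small 2-D grid (not the paper's operator acting on weights), for which the classical bound is \(\Vert B\Vert ^2\le 8\).

```python
import numpy as np

gamma, n = 1.0, 8                        # illustrative n x n grid
N = n * n

def idx(i, j):
    return i * n + j

# assemble the forward-difference operator for N = {(1,0),(0,1)} as a dense matrix
rows = []
for i in range(n):
    for j in range(n):
        for di, dj in [(1, 0), (0, 1)]:
            r = np.zeros(N)
            if i + di < n and j + dj < n:    # Neumann boundary: zero difference at the edge
                r[idx(i + di, j + dj)] = 1.0
                r[idx(i, j)] = -1.0
            rows.append(r)
B = np.array(rows)

# Lipschitz constant of grad R(v) = gamma * B^T B v  is  gamma * ||B||^2
L = gamma * np.linalg.norm(B, 2) ** 2
assert L <= 8 * gamma                    # classical bound for 2-D forward differences
```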
Cite this article
Li, Z., Malgouyres, F. & Zeng, T. Regularized Non-local Total Variation and Application in Image Restoration. J Math Imaging Vis 59, 296–317 (2017). https://doi.org/10.1007/s10851-017-0732-6
Keywords
- Non-local regularization
- Proximal alternating linearized minimization
- Non-convex minimization
- Total variation
- Image restoration