Denoising of Image Gradients and Constrained Total Generalized Variation
We derive a denoising method that uses higher-order derivative information. Our method is motivated by work on denoising the normal vectors of an image, which are then used to better denoise the image itself. We propose to denoise image gradients instead of image normals, since this leads to a convex optimization problem. We show how the denoising of the image gradient and of the image itself can be carried out simultaneously in a single optimization problem. The resulting problem turns out to be similar to total generalized variation (TGV) denoising, thus shedding more light on the motivation behind the TGV penalties. Our approach, however, works with constraints rather than penalty functionals. As a consequence, there is a natural way to choose one of the parameters of the problem, and we motivate a choice rule for the second parameter.
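To illustrate the idea of coupling the denoising of an image and of its gradient in one optimization problem, the following sketch minimizes a quadratic (Tikhonov-type) surrogate of the coupled TGV-2-style energy in one dimension. This is our own illustration under simplifying assumptions, not the constrained method of the paper: the function name `joint_denoise`, the penalty weights `a1` and `a0`, and the quadratic penalties (in place of the paper's constraints and of the nonsmooth TGV terms) are all illustrative choices.

```python
import numpy as np

def forward_diff(u):
    """Forward differences with Neumann boundary (last entry is 0)."""
    d = np.zeros_like(u)
    d[:-1] = u[1:] - u[:-1]
    return d

def neg_adjoint(p):
    """Discrete divergence, i.e. the negative adjoint of forward_diff."""
    d = np.zeros_like(p)
    d[0] = p[0]
    d[1:-1] = p[1:-1] - p[:-2]
    d[-1] = -p[-2]
    return d

def joint_denoise(f, a1=5.0, a0=5.0, iters=2000):
    """Jointly minimize, in (u, w),
        0.5*||u - f||^2 + (a1/2)*||Du - w||^2 + (a0/2)*||Dw||^2
    by gradient descent: u is the denoised signal, w the denoised
    gradient field.  A quadratic stand-in for the coupled image/gradient
    problem described in the abstract (assumed parameters, not the
    paper's constrained formulation)."""
    u, w = f.copy(), forward_diff(f)
    tau = 1.0 / (2.0 + 6.0 * a1 + 4.0 * a0)  # step size below 1/L for a Lipschitz bound L
    for _ in range(iters):
        r = forward_diff(u) - w                           # coupling residual Du - w
        gu = (u - f) - a1 * neg_adjoint(r)                # gradient w.r.t. u
        gw = -a1 * r - a0 * neg_adjoint(forward_diff(w))  # gradient w.r.t. w
        u -= tau * gu
        w -= tau * gw
    return u, w

# Usage on a noisy piecewise-linear test signal:
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
clean = np.minimum(2.0 * x, 2.0 - 2.0 * x)        # triangle signal
noisy = clean + 0.1 * rng.standard_normal(x.size)
u, w = joint_denoise(noisy)
```

Note the structural similarity to TGV-2: the auxiliary variable `w` plays the role of the denoised gradient, and the two weights correspond to the two parameters whose choice the abstract discusses.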
This material was based upon work supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.