
Loss-Specific Training of Non-Parametric Image Restoration Models: A New State of the Art

  • Jeremy Jancsary
  • Sebastian Nowozin
  • Carsten Rother
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7578)

Abstract

After a decade of rapid progress in image denoising, recent methods seem to have reached a performance limit. Nonetheless, we find that state-of-the-art denoising methods are visually clearly distinguishable and possess complementary strengths and failure modes. Motivated by this observation, we introduce a powerful non-parametric image restoration framework based on Regression Tree Fields (RTF). Our restoration model is a densely-connected tractable conditional random field that leverages existing methods to produce an image-dependent, globally consistent prediction. We estimate the conditional structure and parameters of our model from training data so as to directly optimize for popular performance measures. In terms of peak signal-to-noise ratio (PSNR), our model improves on the best published denoising method by at least 0.26 dB across a range of noise levels. Our most practical variant still yields statistically significant improvements, yet is over 20× faster than the strongest competitor. Our approach is well-suited for many more image restoration and low-level vision problems, as evidenced by substantial gains in tasks such as removal of JPEG blocking artefacts.
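
For context, the reported gains are measured in peak signal-to-noise ratio, PSNR = 10·log10(peak² / MSE). The sketch below computes this quantity for 8-bit grayscale images; it is a minimal illustration in Python/NumPy, and the helper name psnr is our own choice, not code from the paper.

    import numpy as np

    def psnr(reference, estimate, peak=255.0):
        """Peak signal-to-noise ratio (in dB) of a restored image vs. the clean reference."""
        # Mean squared error between the clean reference and the restored estimate.
        mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
        if mse == 0.0:
            return float("inf")  # identical images
        # PSNR = 10 * log10(peak^2 / MSE); a gain of 0.26 dB corresponds to
        # roughly a 6% reduction in MSE, since 10**(0.26 / 10) ≈ 1.062.
        return 10.0 * np.log10(peak ** 2 / mse)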

Keywords

Mean Square Error · Loss Function · Regression Tree · Image Patch · Conditional Random Field

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jeremy Jancsary (1)
  • Sebastian Nowozin (2)
  • Carsten Rother (2)
  1. Vienna University of Technology, Austria
  2. Microsoft Research Cambridge, United Kingdom
