Lifting Layers: Analysis and Applications

  • Peter Ochs
  • Tim Meinhardt
  • Laura Leal-Taixé
  • Michael Moeller (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11205)

Abstract

The great advances of learning-based approaches in image processing and computer vision are largely based on deeply nested networks that compose linear transfer functions with suitable non-linearities. Interestingly, the most frequently used non-linearities in imaging applications (variants of the rectified linear unit) are uncommon in low-dimensional approximation problems. In this paper we propose a novel non-linear transfer function, called lifting, which is motivated by a related technique in convex optimization. A lifting layer increases the dimensionality of the input, naturally yields a linear spline when combined with a fully connected layer, and therefore closes the gap between low- and high-dimensional approximation problems. Moreover, applying the lifting operation to the loss layer of the network allows us to handle non-convex and flat (zero-gradient) cost functions. We analyze the proposed lifting theoretically, exemplify interesting properties in synthetic experiments, and demonstrate its effectiveness in deep learning approaches to image classification and denoising.
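
To make the construction concrete, the following is a minimal numpy sketch of a one-dimensional lifting layer, based on the abstract's description and on the related lifting idea from convex relaxation: a scalar input is mapped to barycentric coordinates with respect to a fixed, sorted label vector, so that a subsequent fully connected layer realizes a linear spline with one value per label. The equidistant label grid, the clipping to the label range, and the function name lift are illustrative assumptions, not the authors' reference implementation.

    import numpy as np

    def lift(x, labels):
        """Lift scalars x to barycentric coordinates w.r.t. the sorted label vector
        `labels` (shape (L,)). Each output row has at most two non-zero entries that
        sum to one, so that lifted @ labels reconstructs x (after clipping to the
        label range [labels[0], labels[-1]])."""
        labels = np.asarray(labels, dtype=float)
        x = np.clip(np.asarray(x, dtype=float), labels[0], labels[-1])
        L = len(labels)
        lifted = np.zeros((x.size, L))
        # Index of the interval [t_i, t_{i+1}] that contains each input.
        idx = np.clip(np.searchsorted(labels, x, side="right") - 1, 0, L - 2)
        w = (x - labels[idx]) / (labels[idx + 1] - labels[idx])  # position inside interval
        rows = np.arange(x.size)
        lifted[rows, idx] = 1.0 - w
        lifted[rows, idx + 1] = w
        return lifted

    # Combining the lifting with a fully connected layer, f(x) = lift(x, t) @ theta,
    # yields a linear spline taking the (learnable) values theta_i at the labels t_i.
    labels = np.linspace(-2.0, 2.0, 9)   # assumed equidistant labels t_1, ..., t_L
    theta = np.sin(labels)               # example weights of the linear layer
    x = np.array([-1.3, 0.0, 0.7])
    print(lift(x, labels) @ labels)      # reproduces x (inputs inside the label range)
    print(lift(x, labels) @ theta)       # piecewise-linear interpolation of sin at x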

Keywords

Machine learning · Deep learning · Interpolation · Approximation theory · Convex relaxation · Lifting

Acknowledgements

This research was partially funded by the Humboldt Foundation through the Sofja Kovalevskaja Award.

Supplementary material

Supplementary material 1: 474172_1_En_4_MOESM1_ESM.pdf (PDF, 1,158 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Peter Ochs (1)
  • Tim Meinhardt (2)
  • Laura Leal-Taixé (2)
  • Michael Moeller (3, corresponding author)

  1. Saarland University, Saarbrücken, Germany
  2. Technical University of Munich, Munich, Germany
  3. University of Siegen, Siegen, Germany