Data Augmentation via Variational Auto-Encoders

  • Unai Garay-Maestre
  • Antonio-Javier Gallego (corresponding author)
  • Jorge Calvo-Zaragoza
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11401)

Abstract

Data augmentation is a widely used technique for improving the performance of Convolutional Neural Networks during training. It consists of synthetically generating new labeled data by perturbing the samples of the training set, which is expected to make the learning process more robust. The drawback is that the augmentation procedure must be adjusted manually, because the perturbations considered have to make sense for the task at hand. In this paper we propose the use of Variational Auto-Encoders (VAEs) to generate new synthetic samples, instead of resorting to heuristic strategies. VAEs are powerful generative models that learn a parametric latent space of the input domain from which new samples can be drawn. In our experiments on the well-known MNIST dataset, data augmentation by VAEs improves the baseline results, although to a lesser extent than a well-adjusted conventional data augmentation. However, the combination of conventional and VAE-guided data augmentation outperforms both approaches on their own, thereby demonstrating the merit of our proposal.
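To make the idea concrete, the following is a minimal sketch of VAE-guided data augmentation on MNIST. It is written in PyTorch purely for illustration; the framework, the layer sizes, the latent dimensionality, and the one-VAE-per-class strategy are assumptions of this sketch, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets

latent_dim = 2  # assumed size of the learned latent space

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(784, 256)             # encoder hidden layer
        self.mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.log_var = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, 256)     # decoder hidden layer
        self.dec2 = nn.Linear(256, 784)            # reconstructed pixels

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I),
        # which keeps the sampling step differentiable.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decode(z), mu, log_var

def elbo_loss(x, x_hat, mu, log_var):
    # Negative ELBO: reconstruction error plus the KL divergence
    # between q(z|x) and the unit-Gaussian prior p(z).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

# Train one VAE per class so that generated samples inherit the label
# (an assumed strategy for keeping the synthetic data labeled).
mnist = datasets.MNIST("data", train=True, download=True)
x = mnist.data[mnist.targets == 3].reshape(-1, 784).float() / 255.0
loader = DataLoader(TensorDataset(x), batch_size=128, shuffle=True)

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(30):
    for (batch,) in loader:
        opt.zero_grad()
        x_hat, mu, log_var = model(batch)
        elbo_loss(batch, x_hat, mu, log_var).backward()
        opt.step()

# Augmentation step: decode draws from the prior N(0, I) to obtain
# new synthetic samples of the chosen class.
with torch.no_grad():
    synthetic = model.decode(torch.randn(1000, latent_dim)).view(-1, 28, 28)
```

The generated images would then simply be added to the training set, optionally alongside conventional perturbations such as shifts and rotations, which is the combination the abstract reports as the best-performing configuration.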

Keywords

Data augmentation · Variational auto-encoders · Convolutional Neural Networks · MNIST dataset

Acknowledgements

This work was supported by the Spanish Ministerio de Ciencia, Innovación y Universidades through HISPAMUS project (Ref. TIN2017-86576-R, partially funded by UE FEDER funds).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Unai Garay-Maestre (1)
  • Antonio-Javier Gallego (1) (corresponding author)
  • Jorge Calvo-Zaragoza (2)
  1. Department of Software and Computing Systems, University of Alicante, Alicante, Spain
  2. PRHLT Research Centre, Universitat Politècnica de València, Valencia, Spain
