
A Regularization Post Layer: An Additional Way How to Make Deep Neural Networks Robust

  • Jan Vaněk
  • Jan Zelinka
  • Daniel Soutner
  • Josef Psutka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10583)

Abstract

Neural networks (NNs) are prone to overfitting, deep neural networks especially so when the training data are not abundant. Several techniques help to prevent overfitting, e.g., L1/L2 regularization, unsupervised pre-training, early stopping, dropout, bootstrapping, or cross-validation model aggregation. In this paper, we propose a regularization post-layer that can be combined with these prior techniques and brings additional robustness to the NN. We train the regularization post-layer in a cross-validation (CV) aggregation scenario: the CV held-out folds are used to train an additional neural network post-layer that boosts the network's robustness. We tested various post-layer topologies and compared the results with other regularization techniques. As a benchmark task, we selected TIMIT phone recognition, a well-known and still popular task where the training data are limited and the regularization techniques employed play a key role. However, the regularization post-layer is a general method and may be employed in any classification task.
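A minimal sketch of the cross-validation aggregation idea described above, with a small post-layer trained only on held-out predictions. It assumes scikit-learn's MLPClassifier as a stand-in for the DNNs; the synthetic data, fold count, and layer sizes are illustrative assumptions, not the paper's TIMIT setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for acoustic feature vectors and phone labels.
X, y = make_classification(n_samples=2000, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)

base_nets, post_inputs, post_targets = [], [], []
for train_idx, held_out_idx in KFold(n_splits=5, shuffle=True,
                                     random_state=0).split(X):
    # Train one base network per fold on that fold's training part.
    net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=200,
                        random_state=0)
    net.fit(X[train_idx], y[train_idx])
    base_nets.append(net)
    # Collect the base network's posteriors on the held-out fold;
    # these become the training data for the post-layer.
    post_inputs.append(net.predict_proba(X[held_out_idx]))
    post_targets.append(y[held_out_idx])

# The regularization post-layer: a small network trained only on held-out
# posteriors, i.e., on outputs the base networks could not overfit to.
post_layer = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                           random_state=0)
post_layer.fit(np.vstack(post_inputs), np.concatenate(post_targets))

def predict(X_new):
    # At test time, average the base networks' posteriors (CV aggregation)
    # and refine the averaged posteriors with the post-layer.
    avg = np.mean([net.predict_proba(X_new) for net in base_nets], axis=0)
    return post_layer.predict(avg)
```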

Keywords

Speech recognition · Phone recognition · Acoustic modeling · Neural networks · Regularization · Neural networks ensemble


Acknowledgments

This research was supported by the Grant Agency of the Czech Republic, project No. GAČR GBP103/12/G084.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Jan Vaněk
  • Jan Zelinka
  • Daniel Soutner
  • Josef Psutka

  1. Department of Cybernetics, New Technologies for the Information Society, University of West Bohemia in Pilsen, Pilsen, Czech Republic
