
Stochastic Drop of Kernel Windows for Improved Generalization in Convolution Neural Networks

  • Sangwon Lee
  • Gil-Jin Jang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 903)

Abstract

We propose a novel dropout technique for convolutional neural networks that redesigns the Dropout and DropConnect methods. Conventional drop methods operate on individual weight values of fully connected networks; when applied to convolution layers, they remove only some of the weights within a kernel. However, the weights of a convolutional kernel window jointly encode a specific local pattern, so dropping only part of a kernel window may alter the learned pattern and end up modeling a completely different one. We therefore make the whole kernel window the basic drop unit for convolutional weights, so that a single output map value is dropped at a time. We evaluated the proposed DropKernel strategy on the CIFAR10 object classification task against the conventional Dropout and DropConnect methods, and showed that it improves performance.
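The drop unit described above can be illustrated with a minimal single-channel sketch (not taken from the paper; the function name conv2d_dropkernel, the per-output-position Bernoulli mask, and the inverted-dropout rescaling by 1/(1 - drop_prob) are illustrative assumptions). The key difference from weight-level DropConnect is that one Bernoulli draw governs an entire kernel window, so the corresponding output map value is either fully kept or fully dropped.

```python
import numpy as np

def conv2d_dropkernel(x, kernel, drop_prob=0.5, training=True, rng=None):
    """Single-channel 2-D convolution in which the drop unit is the whole
    kernel window: each output position either uses the complete window
    or is zeroed, so one output map value is dropped at a time.

    x      : (H, W) input map
    kernel : (kh, kw) kernel window
    """
    rng = np.random.default_rng() if rng is None else rng
    kh, kw = kernel.shape
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    out = np.zeros((out_h, out_w))

    # One Bernoulli draw per output position (i.e., per kernel window),
    # instead of one draw per individual weight as in DropConnect.
    if training:
        keep = rng.random((out_h, out_w)) >= drop_prob
    else:
        keep = np.ones((out_h, out_w), dtype=bool)

    for i in range(out_h):
        for j in range(out_w):
            if keep[i, j]:
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)

    if training and drop_prob < 1.0:
        # Inverted-dropout scaling (an assumption) so the expected output
        # at training time matches the unmasked output at test time.
        out /= (1.0 - drop_prob)
    return out

# Usage sketch: a random 8x8 map convolved with a 3x3 kernel.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))
y_train = conv2d_dropkernel(x, k, drop_prob=0.5, training=True, rng=rng)
y_test = conv2d_dropkernel(x, k, training=False)
```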

Keywords

Convolutional neural networks · Dropout · Object recognition

Notes

Acknowledgments

This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. R7124-16-0004, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding, 50%) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2017M3C1B6071400).

References

  1. Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
  2. Wan, L., Zeiler, M., Zhang, S., Cun, Y.L., Fergus, R.: Regularization of neural networks using DropConnect. In: Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pp. 1058–1066 (2013)
  3. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
  4. Zagoruyko, S., Komodakis, N.: Wide residual networks. arXiv:1605.07146 (2016)
  5. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015) (2015)
  6. Abadi, M., Agarwal, A., Barham, P., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467 (2016)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Electronics Engineering, Kyungpook National University, Daegu, South Korea
