Role of Filter Sizes in Effective Image Classification Using Convolutional Neural Network

  • Vaibhav Sharma
  • E. Elamaran
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 768)


Over the past few years, deep neural networks have delivered the best results on a variety of problems, such as pattern recognition, computer vision, speech recognition, and image classification. Convolutional neural networks (CNNs) are the deep learning models most widely used for image classification and form the basis of many other deep neural network architectures. A CNN uses convolution and pooling layers for feature abstraction. Unlike a regular neural network, the layers of a CNN have neurons arranged in three dimensions (width, height, and depth), and filters of different sizes are used for feature reduction. A drawback of CNNs, however, is that they are difficult to train and can lead to overfitting. Among the many factors to consider when designing a CNN, filter size is an important one: the dimensions of the filters play a very significant role in effective training. In this paper, we therefore compare 3 × 3, 5 × 5, and 7 × 7 filter sizes, evaluating training accuracy, test accuracy, training loss, and test loss as metrics.
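To make the comparison concrete, the following minimal sketch (not the paper's own code; the 28 × 28 input and the 32-filter layer are illustrative assumptions) shows how each of the three filter sizes changes both the spatial output of a single convolutional layer and its learnable parameter count:

```python
# Hypothetical illustration: effect of filter size on one conv layer.
# Assumes a 28x28 single-channel input and 32 output filters.

def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial output size of a convolution with the given stride/padding."""
    return (input_size - filter_size + 2 * padding) // stride + 1

def conv_params(filter_size, in_channels, out_channels):
    """Learnable weights plus biases for one square-filter conv layer."""
    return filter_size * filter_size * in_channels * out_channels + out_channels

for k in (3, 5, 7):
    out = conv_output_size(28, k)    # 'valid' convolution, stride 1
    params = conv_params(k, 1, 32)   # 1 input channel, 32 filters
    print(f"{k}x{k}: output {out}x{out}, {params} parameters")
# 3x3 -> 26x26, 320 params; 5x5 -> 24x24, 832; 7x7 -> 22x22, 1600
```

Larger filters shrink the feature map faster and cost quadratically more parameters per filter, which is one reason filter size interacts with both training difficulty and overfitting.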


ConvNet · Deep learning · Activation function · Max pooling · Filter size · Classification · Convolutional neural network



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. SRM University, Kattankulathur, India