Research on image classification method of features of combinatorial convolution


In image classification, many network frameworks do not fully exploit shallow and deep convolutional features. To address this problem, we propose a combinatorial convolutional network (CCNet) that integrates convolutional features at all levels. According to the network's structure, convolutional features at the shallow, medium, and deep levels are extracted. These features are combined by weighted concatenation and convolutional fusion, and the channels of the final combined feature are then reweighted to improve the discriminability of the features. CCNet improves on the common practice of merely adding or concatenating shallow and deep features, so that the network achieves a lower classification error rate while generating low-dimensional features. Extensive experiments were performed on CIFAR-10 and CIFAR-100. The results show that the low-dimensional image feature vectors generated by CCNet effectively reduce the classification error rate when the number of convolutional layers does not exceed 100.
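The fusion step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the scalar per-level weights, the 1x1-convolution-as-matrix-multiply, and the per-channel coefficients are all assumptions used to show the three operations (weighted concatenation, convolutional fusion, channel reweighting) in sequence.

```python
import numpy as np

def weighted_concat_fuse(features, level_weights, fuse_kernel, channel_coeffs):
    """Hypothetical CCNet-style fusion of multi-level feature maps.

    features: list of (C_i, H, W) arrays from shallow/medium/deep levels.
    level_weights: one scalar weight per level.
    fuse_kernel: (C_out, sum(C_i)) matrix acting as a 1x1 convolution.
    channel_coeffs: (C_out,) weights applied to each fused channel.
    """
    # 1. Weighted concatenation along the channel axis
    concat = np.concatenate(
        [w * f for w, f in zip(level_weights, features)], axis=0)
    # 2. Convolutional fusion: a 1x1 convolution is a linear map
    #    over the channel dimension applied at every spatial position
    c, h, w = concat.shape
    fused = (fuse_kernel @ concat.reshape(c, -1)).reshape(-1, h, w)
    # 3. Reweight each channel of the final combined feature
    return channel_coeffs[:, None, None] * fused

# Toy example: three levels with 2, 3, and 4 channels on an 8x8 grid
rng = np.random.default_rng(0)
feats = [rng.standard_normal((c, 8, 8)) for c in (2, 3, 4)]
out = weighted_concat_fuse(
    feats,
    level_weights=[0.5, 0.3, 0.2],
    fuse_kernel=rng.standard_normal((6, 9)),
    channel_coeffs=np.ones(6))
print(out.shape)  # (6, 8, 8)
```

In a real network the level weights, the 1x1 fusion kernel, and the channel coefficients would all be learned parameters rather than fixed values.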




  1. Abdi M, Nahavandi S (2016) Multi-residual networks: improving the speed and accuracy of residual networks. arXiv preprint arXiv:1609.05672

  2. Chen Z, Ho P (2018) Cloud based content classification with global-connected net (GC-Net). In: 2018 21st conference on innovation in clouds, internet and networks and workshops, Paris, France, pp 1–6

  3. Csurka G, Dance CR, Fan L, Willamowski J, Bray C (2004) Visual categorization with bags of keypoints. In: Workshop on statistical learning in computer vision, ECCV, pp 1:1–22

  4. Han X, Dai Q (2018) Batch-normalized Mlpconv-wise supervised pre-training network in network. Appl Intell 48(1):142–155.


  5. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, pp 770–778.

  6. He K, Zhang X, Ren S, Sun J (2016b) Identity mappings in deep residual networks. European conference on computer vision. Springer, Cham, pp 630–645.


  7. Hu J, Shen L, Sun G (2018). Squeeze-and-excitation networks. In: IEEE/CVF conference on computer vision and pattern recognition. Salt Lake City, UT, USA, pp 7132–7141.

  8. Huang G, Sun Y, Liu Z, Sedra D, Weinberger KQ (2016) Deep networks with stochastic depth. European conference on computer vision. Springer, Cham, pp 646–661


  9. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, pp 2261–2269

  10. Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167

  11. Jiang X, Pang Y, Sun M, Li X (2017) Cascaded subpatch networks for effective CNNs. IEEE Trans Neural Netw Learn Syst 29(7):2684–2694.


  12. Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images. Technical report, University of Toronto

  13. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: International conference on neural information processing systems, pp 1097–1105

  14. Lai D, Tian W, Chen L (2019) Improving classification with semi-supervised and fine-grained learning. Pattern Recogn 88:547–556


  15. Larsson G, Maire M, Shakhnarovich G (2016) Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648

  16. Lazebnik S, Schmid C, Ponce J (2006) Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In: 2006 IEEE computer society conference on computer vision and pattern recognition, New York, USA, vol 2, pp 2169–2178

  17. Lin T, Dollár P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, USA, pp 2117–2125.

  18. Lu B, Hu Q, Hui Y, Wen Q, Li M (2018) Feature reinforcement network for image classification. In: 2018 IEEE international conference on multimedia and expo, San Diego, CA, USA, pp 1–6.

  19. Srivastava RK, Greff K, Schmidhuber J (2015) Training very deep networks. In: Advances in neural information processing systems, pp 2377–2385

  20. Szegedy C, Liu W, Jia Y et al (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, pp 1–9

  21. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, pp 2818–2826.

  22. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA (2017) Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-first AAAI conference on artificial intelligence. San Francisco, California, USA, pp 4278–4284

  23. Veit A, Wilber M, Belongie S (2016) Residual networks are exponential ensembles of relatively shallow networks, vol 1(2), p 3. arXiv preprint arXiv:1605.06431

  24. Weng Y, Zhou T, Liu L, Xia C (2019) Automatic convolutional neural architecture search for image classification under different scenes. IEEE Access 7:38495–38506


  25. Xie S, Girshick R, Dollár P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, pp 5987–5995

  26. Xu M, Zhu J, Lv P, Zhou B, Tappen MF, Ji R (2017) Learning-based shadow recognition and removal from monochromatic natural images. IEEE Trans Image Process 26(12):5811–5824


  27. Xu M, Li C, Lv P, Lin N, Hou R, Zhou B (2018) An efficient method of crowd aggregation computation in public areas. IEEE Trans Circuits Syst Video Technol 28(10):2814–2825.


  28. Yan C, Xie H, Chen J, Zha Z, Hao X, Zhang Y, Dai Q (2018) A fast uyghur text detector for complex background images. IEEE Trans Multimedia 20(12):3389–3398.


  29. Yan C, Li L, Zhang C, Liu B, Zhang Y, Dai Q (2019) Cross-modality bridging and knowledge transferring for image understanding. IEEE Trans Multimedia.


  30. Yue K, Xu F, Yu J (2019) Shallow and wide fractional max-pooling network for image classification. Neural Comput Appl 31(2):409–419.


  31. Zagoruyko S, Komodakis N (2016) Wide residual networks. arXiv preprint arXiv:1605.07146



This work was supported by the National Natural Science Foundation of China (no. 51641609) and the Natural Science Foundation of Hebei Province of China (no. F2015203212).

Author information



Corresponding author

Correspondence to Yaqian Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wu, C., Li, Y., Zhao, Z. et al. Research on image classification method of features of combinatorial convolution. J Ambient Intell Human Comput 11, 2913–2923 (2020).



Keywords

  • Image classification
  • Convolutional neural network
  • Combinatorial convolution
  • Weighted concatenation