Multi-attention Guided Activation Propagation in CNNs

  • Xiangteng He
  • Yuxin Peng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11257)

Abstract

CNNs compute the activations of feature maps and propagate them through the network. These activations carry information with differing impacts on the prediction and thus should be treated with differing importance, yet existing CNNs usually process them identically. The visual attention mechanism focuses on selecting regions of interest and controlling the flow of information through the network. We therefore propose a multi-attention guided activation propagation approach (MAAP), which can be applied to existing CNNs to improve their performance. Attention maps are first computed from the activations of the feature maps; they vary as propagation goes deeper and focus on different regions of interest within the feature maps. Multi-level attention is then used to guide activation propagation, giving CNNs the ability to adaptively highlight pivotal information and weaken uncorrelated information. Experimental results on fine-grained image classification benchmarks demonstrate that applying MAAP achieves better performance than state-of-the-art CNNs.
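To make the guidance step concrete, the sketch below reweights the activations of a convolutional layer by a spatial attention map derived from those same activations. This is a minimal PyTorch sketch under assumed details: the channel-mean saliency map, the min-max normalization, and the function name attention_guided_propagation are illustrative choices, not the paper's exact MAAP formulation.

    import torch

    def attention_guided_propagation(feature_maps: torch.Tensor) -> torch.Tensor:
        """Reweight conv-layer activations by a spatial attention map.

        feature_maps: (N, C, H, W) activations from a convolutional layer.
        Returns activations of the same shape, with salient regions
        kept and uncorrelated regions weakened.
        """
        # Aggregate channels into one spatial saliency map, shape (N, 1, H, W).
        attention = feature_maps.mean(dim=1, keepdim=True)
        # Min-max normalize per sample so the map acts as a soft gate in [0, 1].
        n = attention.size(0)
        flat = attention.view(n, -1)
        mn = flat.min(dim=1, keepdim=True).values
        mx = flat.max(dim=1, keepdim=True).values
        attention = ((flat - mn) / (mx - mn + 1e-6)).view_as(attention)
        # Gate the activations: pivotal regions pass, background is suppressed.
        return feature_maps * attention

    # Usage: apply to activations from any intermediate layer.
    x = torch.randn(2, 64, 28, 28)
    y = attention_guided_propagation(x)  # same shape as x

In this form the gate is a single soft mask; the multi-level variant described in the abstract would apply such guidance at several depths of the network, where the attention maps attend to progressively more semantic regions.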

Keywords

Multiple attention · Activation propagation · Convolutional Neural Networks

Acknowledgments

This work was supported by National Natural Science Foundation of China under Grant 61771025 and Grant 61532005.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Institute of Computer Science and Technology, Peking University, Beijing, China