
Embedded adaptive cross-modulation neural network for few-shot learning

  • Peng Wang
  • Jun Cheng
  • Fusheng Hao
  • Lei Wang
  • Wei Feng
ATCI 2019

Abstract

Although deep neural networks have achieved great success in many machine learning scenarios, they still struggle when only small training datasets are available. Few-shot learning aims to learn from a few labeled examples. However, the limited training samples and the weakly distinguishable embedding vectors they produce in a metric space often lead to unsatisfactory test results, and directly computing distances between tensors can be ambiguous. This paper proposes an embedded adaptive cross-modulation (EACM) method for few-shot learning that combines information between support and query examples. Specifically, an adaptive cosine metric module enhances the inter-class separability of the support-set prototype representations, improving the accuracy of few-shot recognition. In addition, a cross-modulation module is applied at multiple levels of abstraction along the prediction pipeline, so that support-set and query-set features enhance each other, which improves the generalization ability and robustness of image recognition. We then combine the two modules through a weight-balance scalar to determine a task-related metric space and construct a joint loss function. Theoretical analysis demonstrates the generalization ability of EACM. We conduct comprehensive experiments on the mini-ImageNet and CUB datasets, and the results show that our approach outperforms state-of-the-art approaches by significant margins.
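To make the two components described above concrete, the sketch below is a minimal, illustrative PyTorch example, not the authors' implementation: it combines a prototype-based classifier with a learnable cosine-metric scale (in the spirit of the adaptive cosine metric) and a FiLM-style modulation layer that conditions query features on a support-set summary (a rough stand-in for cross-modulation). All class, function, and variable names here are hypothetical, and the shapes assume pre-computed embedding vectors.

```python
# Illustrative sketch only; names and structure are assumptions, not the EACM code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveCosineClassifier(nn.Module):
    """Scores query embeddings against class prototypes with a scaled cosine metric."""

    def __init__(self, init_scale: float = 10.0):
        super().__init__()
        # Learnable temperature that adapts the cosine metric during training.
        self.scale = nn.Parameter(torch.tensor(init_scale))

    def forward(self, queries, support, support_labels, n_way):
        # Class prototypes: mean of the support embeddings of each class.
        prototypes = torch.stack(
            [support[support_labels == c].mean(dim=0) for c in range(n_way)]
        )
        q = F.normalize(queries, dim=-1)
        p = F.normalize(prototypes, dim=-1)
        # Scaled cosine similarity, shape (n_query, n_way).
        return self.scale * q @ p.t()


class CrossModulation(nn.Module):
    """FiLM-style modulation: features are scaled and shifted by parameters
    predicted from a summary of the other branch (e.g. the support set)."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(channels, 2 * channels)

    def forward(self, feat, context):
        gamma, beta = self.to_gamma_beta(context).chunk(2, dim=-1)
        return (1.0 + gamma) * feat + beta


if __name__ == "__main__":
    n_way, n_shot, n_query, dim = 5, 5, 15, 64
    support = torch.randn(n_way * n_shot, dim)
    support_labels = torch.arange(n_way).repeat_interleave(n_shot)
    queries = torch.randn(n_query, dim)

    # Modulate query embeddings with the support-set summary, then classify.
    modulator = CrossModulation(dim)
    queries_mod = modulator(queries, support.mean(dim=0, keepdim=True))
    clf = AdaptiveCosineClassifier()
    logits = clf(queries_mod, support, support_labels, n_way)
    print(logits.shape)  # torch.Size([15, 5])
```

In a full episodic training loop, the logits would feed a cross-entropy loss over the query labels; the weight-balance scalar and joint loss described in the abstract are not reproduced here.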

Keywords

Few-shot · Image classification · Embedded adaptive · Cross-modulation

Notes

Acknowledgements

Funding was provided by the National Natural Science Foundation of China (Grant Nos. U1713213, 61772508), the National Key R&D Program of China (2018YFB1308000), the Key Research and Development Program of Guangdong Province (Grant No. 2019B090915001), the Shenzhen Technology Project (JCYJ20180507182610734, JCYJ20170413152535587), and the CAS Key Technology Talent Program.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Ravi S, Larochelle H (2017) Optimization as a model for few-shot learning. In: ICLR, pp 1–11
  2. Perez E, de Vries H, Strub F, Dumoulin V, Courville A (2017) Learning visual reasoning without strong priors. In: MLSLP workshop at ICML
  3. Li J, Wong HC, Lo SL, Xin Y (2018) Multiple object detection by a deformable part-based model and an R-CNN. IEEE Signal Process Lett 25(2):288–292
  4. Wu C, Li Y, Zhao Z, Liu B (2019) Extreme learning machine with autoencoding receptive fields for image classification. Neural Comput Appl, pp 1–17
  5. Wang X, Gao L, Song J, Shen H (2017) Beyond frame-level CNN: saliency-aware 3-D CNN with LSTM for video action recognition. IEEE Signal Process Lett 24(4):510–514
  6. Liu F, Tao D, Wang L, Xu Y, Xia H, Cheng J (2018) Ensemble one-dimensional convolution neural networks for skeleton-based action recognition. IEEE Signal Process Lett 25(7):1044–1048
  7. Kalash M, Rochan M, Mohammed N, Bruce ND, Wang Y, Iqbal F (2018) Malware classification with deep convolutional neural networks. In: 9th IFIP international conference on new technologies, mobility and security (NTMS), pp 1–5
  8. Chen T, Zhao Y, Guo Y (2019) Sparsity-regularized feature selection for multi-class remote sensing image classification. Neural Comput Appl, pp 1–9
  9. Taylor L, Nitschke G (2017) Improving deep learning using generic data augmentation. CoRR
  10. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis 115(3):211–252
  11. Garcia V, Bruna J (2018) Few-shot learning with graph neural networks. In: Proceedings of the international conference on learning representations
  12. Das R, Walia E (2019) Partition selection with sparse autoencoders for content based image classification. Neural Comput Appl 31(3):675–690
  13. Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd international conference on machine learning, pp 448–456
  14. Kukačka J, Golkov V, Cremers D (2017) Regularization for deep learning: a taxonomy. arXiv:1710.10686
  15. Hilliard N, Phillips L, Howland S, Yankov A, Corley CD, Hodas NO (2018) Few-shot learning with metric-agnostic conditional embeddings. arXiv:1802.04376
  16. Zou X, Zhou L, Li K, Ouyang A, Chen C (2019) Multi-task cascade deep convolutional neural networks for large-scale commodity recognition. Neural Comput Appl. https://doi.org/10.1007/s00521-019-04311-9
  17. Munkhdalai T, Yuan X, Mehri S, Trischler A (2018) Rapid adaptation with conditionally shifted neurons. In: Proceedings of the 35th international conference on machine learning, pp 3664–3673
  18. Mishra N, Rohaninejad M, Chen X, Abbeel P (2018) A simple neural attentive meta-learner. In: ICLR
  19. Santoro A, Bartunov S, Botvinick M, Wierstra D, Lillicrap T (2016) Meta-learning with memory-augmented neural networks. In: Proceedings of the 33rd international conference on machine learning, pp 1842–1850
  20. Vinyals O, Blundell C, Lillicrap T, Kavukcuoglu K, Wierstra D (2016) Matching networks for one shot learning. In: Advances in neural information processing systems
  21. Sung F, Yang Y, Zhang L (2018) Learning to compare: relation network for few-shot learning. In: CVPR, pp 1199–1208
  22. Oh J, Singh S, Lee H, Kohli P (2017) Zero-shot task generalization with multi-task deep reinforcement learning. In: ICML
  23. Finn C, Abbeel P, Levine S (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML
  24. Nichol A, Achiam J, Schulman J (2018) On first-order meta-learning algorithms. CoRR
  25. Yoon J, Kim T, Dia O, Kim S (2018) Bayesian model-agnostic meta-learning. In: Proceedings of the 32nd international conference on neural information processing systems, pp 7343–7353
  26. Finn C, Xu K, Levine S (2018) Probabilistic model-agnostic meta-learning. In: Advances in neural information processing systems, pp 9516–9527
  27. Grant E, Finn C, Levine S, Darrell T, Griffiths T (2018) Recasting gradient-based meta-learning as hierarchical Bayes. CoRR
  28. Rusu AA, Rao D, Sygnowski J, Vinyals O, Pascanu R, Osindero S, Hadsell R (2018) Meta-learning with latent embedding optimization. CoRR
  29. Snell J, Swersky K, Zemel R (2017) Prototypical networks for few-shot learning. In: Advances in neural information processing systems, pp 4077–4087
  30. Bromley J, Guyon I, LeCun Y (1993) Signature verification using a "siamese" time delay neural network. In: NIPS
  31. Koch G, Zemel R, Salakhutdinov R (2015) Siamese neural networks for one-shot image recognition. In: ICML deep learning workshop
  32. Bertinetto L, Henriques JF, Torr PH, Vedaldi A (2019) Meta-learning with differentiable closed-form solvers. In: ICLR
  33. Santoro A, Raposo D, Barrett DGT, Malinowski M, Pascanu R, Battaglia P, Lillicrap T (2017) A simple neural network module for relational reasoning. In: NIPS
  34. Koch G, Zemel R, Salakhutdinov R (2015) Siamese neural networks for one-shot image recognition. In: ICML deep learning workshop, vol 2
  35. Qiao S, Liu C, Shen W, Yuille A (2018) Few-shot image recognition by predicting parameters from activations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7229–7238
  36. Nicosia M, Moschitti A (2017) Learning contextual embeddings for structural semantic similarity using categorical information. In: CoNLL, pp 260–270
  37. Weston J, Chopra S, Bordes A (2014) Memory networks. arXiv:1410.3916
  38. Cai Q, Pan Y, Yao T, Yan C, Mei T (2018) Memory matching networks for one-shot image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4080–4088
  39. Munkhdalai T, Yu H (2017) Meta networks. In: ICML
  40. Triantafillou E, Larochelle H, Snell J, Tenenbaum J (2018) Meta-learning for semi-supervised few-shot classification. In: ICLR
  41. Hao F, Cheng J, Wang L, Cao J (2019) Instance-level embedding adaptation for few-shot learning. IEEE Access
  42. Perez E, Strub F, de Vries H, Dumoulin V, Courville A (2018) FiLM: visual reasoning with a general conditioning layer. In: AAAI
  43. Oreshkin BN, Rodriguez P, Lacoste A (2018) TADAM: task dependent adaptive metric for improved few-shot learning. In: NIPS
  44. Wah C, Branson S, Welinder P, Perona P, Belongie S (2011) The Caltech-UCSD Birds-200-2011 dataset. Technical report CNS-TR-2011-001
  45. Chen W-Y, Liu Y-C, Kira Z, Wang Y-CF, Huang J-B (2019) A closer look at few-shot classification. In: Proceedings of the international conference on learning representations
  46. Ketkar N (2017) Deep learning with Python. Apress, Berkeley, pp 195–208
  47. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. In: ICLR
  48. Li Z, Zhou F, Chen F, Li H (2017) Meta-SGD: learning to learn quickly for few-shot learning. CoRR

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  2. Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, China
  3. The Chinese University of Hong Kong, Hong Kong, China
