Deep Online Storage-Free Learning on Unordered Image Streams

  • Andrey Besedin
  • Pierre Blanchart
  • Michel Crucianu
  • Marin Ferecatu
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 967)

Abstract

In this work we develop an online deep-learning-based approach for classification on data streams. Our approach learns incrementally, processing each new data sample only once and storing only a recent history rather than the full historical data. To make up for the absence of historical data, we train Generative Adversarial Networks (GANs), which in recent years have shown an excellent capacity to learn data distributions for image datasets. We test our approach on the MNIST and LSUN datasets and demonstrate its ability to adapt to previously unseen data classes or to new instances of previously seen classes, while avoiding forgetting of classes, or instances of classes, that no longer appear in the data stream.
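The generative-replay idea described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the per-class GAN is replaced here by a hypothetical Gaussian stand-in (`ClassGenerator`), and the deep classifier by a nearest-class-mean rule, purely to show the replay mechanism, in which every classifier update mixes the current real batch with samples drawn from the stored generators, so classes absent from the stream are not forgotten.

```python
import numpy as np

rng = np.random.default_rng(0)


class ClassGenerator:
    """Stand-in for a per-class GAN: tracks an online mean of seen samples
    and 'generates' noisy points around it."""

    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.n = 0

    def update(self, batch):
        # Online mean update from the current real batch only;
        # no historical samples are stored.
        for row in batch:
            self.n += 1
            self.mean += (row - self.mean) / self.n

    def sample(self, k):
        # Replay samples: the GAN's role in the paper's setting.
        return self.mean + 0.1 * rng.standard_normal((k, len(self.mean)))


class StreamClassifier:
    """Nearest-class-mean classifier refreshed from generated (replayed)
    data after every real batch."""

    def __init__(self, dim):
        self.dim = dim
        self.generators = {}  # one generator per class seen so far
        self.centroids = {}

    def partial_fit(self, batch, label):
        gen = self.generators.setdefault(label, ClassGenerator(self.dim))
        gen.update(batch)
        # Replay step: rebuild every centroid from generated samples, so
        # classes no longer present in the stream are not forgotten.
        for lbl, g in self.generators.items():
            self.centroids[lbl] = g.sample(64).mean(axis=0)

    def predict(self, x):
        labels = list(self.centroids)
        dists = [np.linalg.norm(x - self.centroids[l], axis=1) for l in labels]
        return np.array(labels)[np.argmin(np.stack(dists), axis=0)]
```

In this toy setting, class 0 can be streamed first and class 1 afterwards; because each `partial_fit` call refreshes all centroids from replayed samples, class 0 remains correctly classified even after its batches stop arriving.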

Keywords

Deep learning · GAN · Data streams · Classification

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Andrey Besedin (1, email author)
  • Pierre Blanchart (1)
  • Michel Crucianu (2)
  • Marin Ferecatu (2)
  1. CEA, LIST, Laboratoire d'Analyse de Données et Intelligence des Systèmes, Digiteo Labs Saclay, Gif-sur-Yvette Cedex, France
  2. Centre d'études et de recherche en informatique et communications, Le CNAM, Paris, France