IL4IoT: Incremental Learning for Internet-of-Things Devices

  • Yuanyuan Bao
  • Wai Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11912)

Abstract

Considering that Internet-of-Things (IoT) devices are often deployed in highly dynamic environments, mainly due to their continuous exposure to end-users’ living environments, it is imperative that the devices can continually learn new concepts from data streams without catastrophic forgetting. Although simply replaying all previous training samples can alleviate the catastrophic forgetting problem, doing so may not only pose privacy risks but also require large computing and memory resources, which makes this solution infeasible for resource-constrained IoT devices. In this paper, we propose IL4IoT, a lightweight framework for incremental learning on IoT devices. The framework consists of two cooperative parts: a continually updated knowledge base and a task-solving model. Through this framework, we achieve incremental learning while alleviating catastrophic forgetting, without sacrificing privacy protection or computing-resource efficiency. Our experiments on the MNIST and SDA datasets demonstrate the effectiveness and efficiency of our approach.
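The abstract only outlines the two-part design. The sketch below illustrates one plausible way such a framework could be wired together, assuming the knowledge base is a small autoencoder that stores compact latent codes of past classes and replays their reconstructions when a new task arrives, so that raw samples never need to be retained. All names (KnowledgeBase, TaskModel, learn_task) and hyperparameters are illustrative and are not taken from the paper.

# Minimal sketch of a replay-based incremental learner, assuming an
# autoencoder-backed knowledge base; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeBase(nn.Module):
    """Autoencoder that keeps low-dimensional codes of previously seen samples."""

    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
        self.codes, self.labels = [], []  # compact memory instead of raw samples

    def memorize(self, x, y):
        # Store only the latent codes and labels of the new task's data.
        with torch.no_grad():
            self.codes.append(self.encoder(x))
            self.labels.append(y)

    def replay(self):
        # Reconstruct pseudo-samples of old classes from the stored codes.
        with torch.no_grad():
            x_old = self.decoder(torch.cat(self.codes))
        return x_old, torch.cat(self.labels)


class TaskModel(nn.Module):
    """Task-solving classifier that is updated incrementally."""

    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 100), nn.ReLU(), nn.Linear(100, n_classes))

    def forward(self, x):
        return self.net(x)


def learn_task(kb, model, x_new, y_new, epochs=5, lr=1e-3):
    """Train on the new task mixed with pseudo-samples replayed from the knowledge base."""
    if kb.codes:  # replay old knowledge if any has been stored
        x_old, y_old = kb.replay()
        x_train = torch.cat([x_new, x_old])
        y_train = torch.cat([y_new, y_old])
    else:
        x_train, y_train = x_new, y_new

    opt = torch.optim.Adam(list(model.parameters()) + list(kb.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_train), y_train)  # classification loss
        # Keep the autoencoder current on the new data (reconstruction loss).
        loss = loss + F.mse_loss(kb.decoder(kb.encoder(x_new)), x_new)
        loss.backward()
        opt.step()

    kb.memorize(x_new, y_new)  # store compact codes of the new task for future replay

Because only low-dimensional codes are retained, memory grows far more slowly than if raw samples were stored, which mirrors the privacy and resource argument made in the abstract. A real implementation would also need to keep the stored codes consistent with the evolving decoder, a concern this sketch omits.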

Keywords

Incremental learning · Catastrophic forgetting · Internet of Things · Continuous learning · Autoencoder · Knowledge base


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. China Mobile Research Institute, Beijing, China
