Abstract
In environments where various tasks are presented to deep neural networks (DNNs) sequentially, training methods are needed that let the networks learn those tasks continuously. A DNN is typically trained on a single dataset, and subsequently training it on new datasets causes catastrophic forgetting, in which knowledge of earlier tasks is overwritten. Previous studies have reported consolidation learning methods for recognition tasks and reinforcement learning problems, but there are few examples of those methods being validated on predictive learning of time series. In this study, we applied elastic weight consolidation (EWC) and pseudo-rehearsal to the predictive learning of time series and compared the learning results. Evaluating the latent space after consolidation learning revealed that EWC acquires the properties of the pre-training and subsequent datasets within a single shared distribution, whereas pseudo-rehearsal distinguishes those properties and acquires them as separate distributions.
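To make the two compared methods concrete, the sketch below illustrates their core mechanics in PyTorch. It is a minimal illustration under assumed names, not the authors' implementation: `model`, `dataset_a`, `lam`, and the uniform pseudo-input sampler are hypothetical stand-ins. EWC (Kirkpatrick et al. 2017) adds a quadratic penalty, weighted by the diagonal Fisher information, that anchors parameters important for the first task; pseudo-rehearsal (Robins 1995) mixes the new task's data with pseudo-items whose targets are generated by the frozen pre-trained network.

```python
import torch

def estimate_fisher(model, loss_fn, dataset_a):
    """Diagonal Fisher information over task-A data; dataset_a is assumed
    to be a list of (input, target) batches (hypothetical setup)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in dataset_a:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                # Squared gradients approximate the diagonal Fisher.
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(dataset_a) for n, f in fisher.items()}

def ewc_penalty(model, params_a, fisher, lam):
    """EWC regularizer (lam / 2) * sum_i F_i (theta_i - theta*_{A,i})^2,
    added to the task-B loss so weights important for task A stay put."""
    penalty = sum((fisher[n] * (p - params_a[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

def pseudo_rehearsal_batch(frozen_model_a, x_b, y_b, n_pseudo, input_shape):
    """Pseudo-rehearsal: label random pseudo-inputs with the frozen task-A
    network's own predictions and mix them into the task-B batch."""
    x_pseudo = torch.rand(n_pseudo, *input_shape)  # placeholder pseudo-input sampler
    with torch.no_grad():
        y_pseudo = frozen_model_a(x_pseudo)        # targets from the old network
    return torch.cat([x_b, x_pseudo]), torch.cat([y_b, y_pseudo])

# Typical EWC usage after pre-training on task A (names are illustrative):
#   fisher   = estimate_fisher(model, loss_fn, dataset_a)
#   params_a = {n: p.detach().clone() for n, p in model.named_parameters()}
#   loss_b   = loss_fn(model(x_b), y_b) + ewc_penalty(model, params_a, fisher, lam=1000.0)
```

For time-series prediction, the pseudo-inputs would typically be generated sequences rather than uniform noise; the uniform sampler above is only a placeholder for whatever pseudo-item generator is used.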
References
Atkinson, C., McCane, B., Szymanski, L., Robins, A.: Pseudo-rehearsal: achieving deep reinforcement learning without catastrophic forgetting. Neurocomputing 428, 291–307 (2021). https://doi.org/10.1016/j.neucom.2020.11.050
Ha, D., Eck, D.: A neural representation of sketch drawings. In: International Conference on Learning Representations (2018)
Ho, J., Ermon, S.: Generative adversarial imitation learning. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29. Curran Associates, Inc. (2016)
Hou, S., Pan, X., Loy, C.C., Wang, Z., Lin, D.: Lifelong learning via progressive distillation and retrospection. In: Proceedings of European Conference on Computer Vision, pp. 437–452 (2018)
Jongejan, J., Rowley, H., Kawashima, T., Kim, J., Thomson, R., Fox-Gieg, N.: Quick, Draw! (2016). https://quickdraw.withgoogle.com
Kirkpatrick, J., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114(13), 3521–3526 (2017). https://doi.org/10.1073/pnas.1611835114
Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML Deep Learning Workshop, vol. 2 (2015)
Robins, A.: Catastrophic forgetting, rehearsal and pseudorehearsal. Connect. Sci. 7(2), 123–146 (1995). https://doi.org/10.1080/09540099550039318
Rusu, A.A., et al.: Progressive neural networks. arXiv preprint arXiv:1606.04671 (2016)
Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823 (2015). https://doi.org/10.1109/CVPR.2015.7298682
Schwarz, J., et al.: Progress & compress: a scalable framework for continual learning. In: 35th International Conference on Machine Learning (ICML 2018), vol. 10, pp. 7199–7208 (2018)
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Nakajo, R., Ogata, T. (2021). Comparison of Consolidation Methods for Predictive Learning of Time Series. In: Fujita, H., Selamat, A., Lin, J.C.-W., Ali, M. (eds.) Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science, vol. 12798. Springer, Cham. https://doi.org/10.1007/978-3-030-79457-6_10
DOI: https://doi.org/10.1007/978-3-030-79457-6_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79456-9
Online ISBN: 978-3-030-79457-6