Search Heuristics for the Optimization of DBN for Time Series Forecasting

  • Chapter
Deep Neural Evolution

Part of the book series: Natural Computing Series (NCS)

Abstract

A deep belief net (DBN) built from multiple stacked restricted Boltzmann machines (RBMs) was proposed by Hinton and Salakhutdinov in 2006 for reducing the dimensionality of data. Compared with conventional methods such as principal component analysis (PCA), the superior performance of the DBN attracted wide attention from pattern recognition researchers and helped usher in a new era of artificial intelligence (AI) under the keyword "deep learning" (DL). Deep neural networks (DNNs) such as DBNs, deep auto-encoders (DAEs), and convolutional neural networks (CNNs) have been successfully applied to dimensionality reduction, image processing, pattern recognition, and related fields; nevertheless, there are further AI disciplines to which they could be applied, such as computational cognition, behavior decision, and forecasting. Furthermore, the architectures of conventional deep models are usually handcrafted, i.e., optimizing the structure of a DNN remains an open problem. In this chapter, we introduce how DBNs were first adopted for time series forecasting systems in our original studies, and we discuss two kinds of heuristic optimization methods for structuring DBNs: particle swarm optimization (PSO), a well-known method in swarm intelligence, and random search (RS), a simpler yet effective algorithm for high-dimensional hyper-parameter exploration.
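
To make the random-search idea concrete, the following minimal Python sketch draws DBN structure hyper-parameters (number of stacked RBMs, hidden units per layer, learning rate) at random from assumed ranges and keeps the configuration with the lowest validation error, in the spirit of random search for hyper-parameter optimization (Bergstra and Bengio, ref. 10). The search space and the `evaluate` stand-in are hypothetical illustrations, not the chapter's actual implementation; in practice `evaluate` would pre-train the stacked RBMs, fine-tune the network with back-propagation, and return the forecasting RMSE on held-out data. A PSO variant would reuse the same `evaluate` loop but update each candidate with velocity and position rules instead of independent sampling.

```python
import math
import random

# Hypothetical search space for the DBN structure; the ranges are
# assumptions for illustration, not the ones used in the chapter.
SPACE = {
    "n_rbm_layers": (1, 4),          # number of stacked RBMs
    "hidden_units": (5, 100),        # units per hidden layer
    "learning_rate": (1e-4, 1e-1),   # RBM learning rate (sampled log-uniformly)
}

def sample_config(rng):
    """Draw one candidate DBN structure uniformly at random."""
    lo, hi = SPACE["learning_rate"]
    return {
        "n_rbm_layers": rng.randint(*SPACE["n_rbm_layers"]),
        "hidden_units": rng.randint(*SPACE["hidden_units"]),
        # Log-uniform sampling spreads trials evenly across magnitudes.
        "learning_rate": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
    }

def evaluate(config):
    """Stand-in for the real objective: pre-train the stacked RBMs,
    fine-tune with back-propagation, and return validation RMSE.
    A synthetic score is used here so the sketch runs stand-alone."""
    return (config["hidden_units"] - 40) ** 2 / 1000.0 + config["learning_rate"]

def random_search(n_trials=50, seed=0):
    """Keep the best of n_trials independently sampled configurations."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        err = evaluate(cfg)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

if __name__ == "__main__":
    cfg, err = random_search()
    print("best config:", cfg, "(synthetic validation error: %.4f)" % err)
```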

References

  1. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)

  2. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016)

  3. Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017)

  4. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8, 229–256 (1992)

  5. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006)

  6. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1544 (2006)

  7. Roux, N.L., Bengio, Y.: Representational power of restricted Boltzmann machines and deep belief networks. Neural Comput. 20(6), 1631–1649 (2008)

  8. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS’12), vol.1, pp. 1097–1105 (2012)

  9. ImageNet Large Scale Visual Recognition Challenge (2012). http://www.image-net.org/challenges/LSVRC/2012/

  10. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012)

  11. Ba, J., Frey, B.: Adaptive dropout for training deep neural networks. In: Advances in Neural Information Processing Systems (NIPS2013), vol. 26, pp. 3084–3092 (2013)

  12. Nowakowski, G., Dorogyy, Y., Doroga-Ivaniuk, O.: Neural network structure optimization algorithm. J. Autom. Mob. Robot. Intell. Syst. 12(1), 5–13 (2018)

  13. Kuremoto, T., Kimura, S., Kobayashi, K., Obayashi, M.: Time series forecasting using restricted Boltzmann machine. In: Proceedings of the 8th International Conference on Intelligent Computing (ICIC 2012). Communications in Computer and Information Science (CCIS), vol. 304, pp. 17–22. Springer, Berlin (2012)

  14. Kuremoto, T., Kimura, S., Kobayashi, K., Obayashi, M.: Time series forecasting using a deep belief network with restricted Boltzmann machines. Neurocomputing 137(5), 47–56 (2014)

  15. Kuremoto, T., Hirata, T., Obayashi, M., Mabu, S., Kobayashi, K.: Forecast chaotic time series data by DBNs. In: Proceedings of the 7th International Congress on Image and Signal Processing (CISP 2014), pp. 1304–1309 (2014)

  16. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: IEEE International Conference on Neural Networks, pp. 1942–1948 (1995)

  17. Hirata, T., Kuremoto, T., Obayashi, M., Mabu, S.: Time series prediction using DBN and ARIMA. In: Proceedings of International Conference on Computer Application Technologies (CCATS 2015), pp. 24–29 (2015)

  18. Hirata, T., Kuremoto, T., Obayashi, M., Mabu, S., Kobayashi, K.: Deep belief network using reinforcement learning and its applications to time series forecasting. In: Proceedings of International Conference on Neural Information Processing (ICONIP’ 16). Lecture Notes in Computer Science (LNCS), vol. 9949, pp. 30–37. Springer, Berlin (2016)

  19. Hirata, T., Kuremoto, T., Obayashi, M., Mabu, S., Kobayashi, K.: Forecasting real time series data using deep belief net and reinforcement learning. J. Robot. Netw. Artif. Life 4(4), 260–264 (2018). https://doi.org/10.2991/jrnal.2018.4.4.1

  20. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  21. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)

  22. Wang, B., Sun, Y., Xue, B., Zhang, M.: Evolving deep convolutional neural networks by variable-length particle swarm optimization for image classification. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1514–1521 (2018)

  23. Fernandes Jr., F.E., Yen, G.G.: Particle swarm optimization of deep neural networks architectures for image classification. Swarm Evol. Comput. 49, 62–73 (2019)

  24. Lorenz, E.N.: Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–141 (1963)

  25. Henon, M.: A two-dimensional mapping with a strange attractor. Commun. Math. Phys. 50(1), 69–77 (1976)

  26. Lendasse, A., Oja, E., Simula, O., Verleysen, M.: Time series prediction competition: the CATS benchmark. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN’04), pp. 1615–1620 (2004)

  27. Lendasse, A., Oja, E., Simula, O., Verleysen, M.: Time series prediction competition: the CATS benchmark. Neurocomputing 70, 2325–2329 (2007)

  28. Tieleman, T.: Training restricted Boltzmann machines using approximations to the likelihood gradient. In: Proceedings of the 25th International Conference on Machine Learning (ICML ’08), pp. 1064–1071 (2008)

  29. Kuremoto, T., Tokuda, S., Obayashi, M., Mabu, S., Kobayashi, K.: An experimental comparison of deep belief nets with different learning methods. In: Proceedings of 2017 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP 2017), pp. 637–640 (2017)

  30. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)

  31. NN3: http://www.neural-forecasting-competition.com/NN3/index.htm

  32. Kuremoto, T., Obayashi, M., Kobayashi, M.: Neural forecasting systems. In: Weber, C., Elshaw, M., Mayer, N.M. (eds.) Reinforcement Learning, Theory and Applications, Chapter 1, pp. 1–20. InTech (2008)

  33. Box, G.E.P., Pierce, D.A.: Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J. Am. Stat. Assoc. 65(332), 1509–1526 (1970)

  34. Hirata, T., Kuremoto, T., Obayashi, M., Mabu, S., Kobayashi, K.: A novel approach to time series forecasting using deep learning and linear model. IEEJ Trans. Electron. Inf. Syst. 136(3), 248–356 (2016, in Japanese)

  35. Kimura, H., Kobayashi, S.: Reinforcement learning for continuous action using stochastic gradient ascent. In: Proceedings of 5th Intelligent Autonomous Systems (IAS-5), pp. 288–295 (1998)

  36. Kuremoto, T., Hirata, T., Obayashi, M., Mabu, S., Kobayashi, K.: Training deep neural networks with reinforcement learning for time series forecasting. In: Time Series Analysis - Data, Methods, and Applications, InTechOpen (2019)

  37. Aalto University Applications of Machine Learning Group Datasets. http://research.ics.aalto.fi/eiml/datasets.shtml. Accessed 1 Jan 2017

  38. Hyndman, R.J.: Time Series Data Library (TSDL) (2013). http://robjhyndman.com/TSDL/. Accessed 1 Jan 2013

Author information

Correspondence to Takashi Kuremoto.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Kuremoto, T., Hirata, T., Obayashi, M., Kobayashi, K., Mabu, S. (2020). Search Heuristics for the Optimization of DBN for Time Series Forecasting. In: Iba, H., Noman, N. (eds) Deep Neural Evolution. Natural Computing Series. Springer, Singapore. https://doi.org/10.1007/978-981-15-3685-4_5

  • DOI: https://doi.org/10.1007/978-981-15-3685-4_5

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3684-7

  • Online ISBN: 978-981-15-3685-4

  • eBook Packages: Computer Science (R0)
