Recurrent Neural Networks for Multimodal Time Series Big Data Analytics

  • Chapter in Multimodal Analytics for Next-Generation Big Data Technologies and Applications

Abstract

This chapter considers the challenges of using Recurrent Neural Networks (RNNs) to forecast Big multimodal time series, where both spatial and temporal information must be exploited for accurate forecasting. Although the RNN and its variants, such as the Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU) and the Simple Recurrent Unit (SRU), progressively improve training outcomes through refined network structures and optimisation techniques, a major limitation of these models is that most of them vectorise the input data and thus destroy its continuous spatial representation. We propose the Tensorial Recurrent Neural Network (TRNN), which analyses multimodal data in its native tensor form while modelling the dependencies along the time series, and we show that the TRNN outperforms other RNN models on image captioning applications.
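To make the vectorisation limitation concrete, the sketch below implements a tensorial recurrent update that keeps the input's two-dimensional layout instead of flattening it into a vector. It is a minimal illustration in the spirit of the TRNN, assuming a Tucker-style update built from mode-n products (in the sense of Kolda and Bader); the class name, shapes, initialisation, and update rule are illustrative and are not the chapter's exact formulation.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """n-mode product: multiply `tensor` by `matrix` along axis `mode`."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

class TensorialRNNCell:
    """Hypothetical tensorial RNN cell: the hidden state is a matrix H_t,
    updated mode-wise so the 2-D spatial structure of the input X_t is
    preserved rather than vectorised away."""
    def __init__(self, in_shape, hid_shape, seed=0):
        rng = np.random.default_rng(seed)
        (i1, i2), (h1, h2) = in_shape, hid_shape
        # Row/column maps for the input and for the previous hidden state.
        self.U1 = rng.normal(0.0, 0.1, (h1, i1))
        self.U2 = rng.normal(0.0, 0.1, (h2, i2))
        self.W1 = rng.normal(0.0, 0.1, (h1, h1))
        self.W2 = rng.normal(0.0, 0.1, (h2, h2))
        self.B = np.zeros(hid_shape)

    def step(self, X, H):
        # H_t = tanh(X x1 U1 x2 U2 + H_{t-1} x1 W1 x2 W2 + B)
        Xp = mode_n_product(mode_n_product(X, self.U1, 0), self.U2, 1)
        Hp = mode_n_product(mode_n_product(H, self.W1, 0), self.W2, 1)
        return np.tanh(Xp + Hp + self.B)

# Usage: run the cell over ten synthetic 32x32 "frames".
cell = TensorialRNNCell(in_shape=(32, 32), hid_shape=(16, 16))
H = np.zeros((16, 16))
for X in np.random.default_rng(1).normal(size=(10, 32, 32)):
    H = cell.step(X, H)
```

Besides preserving adjacency between neighbouring pixels across time steps, the mode-wise factorisation is far smaller than a dense vectorised layer: mapping a 32x32 input to a 16x16 state needs two 16x32 matrices (1,024 weights) here, versus a single 256x1024 matrix (262,144 weights) after flattening.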

Author information

Corresponding author

Correspondence to Mingyuan Bai.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Bai, M., Zhang, B. (2019). Recurrent Neural Networks for Multimodal Time Series Big Data Analytics. In: Seng, K., Ang, L.-M., Liew, A.-C., Gao, J. (eds.) Multimodal Analytics for Next-Generation Big Data Technologies and Applications. Springer, Cham. https://doi.org/10.1007/978-3-319-97598-6_9

  • DOI: https://doi.org/10.1007/978-3-319-97598-6_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-97597-9

  • Online ISBN: 978-3-319-97598-6

  • eBook Packages: Computer Science, Computer Science (R0)
