Abstract
In this chapter we summarize the main points of our overview and draw our conclusions. We discuss our interpretation of the reasons behind the differences in results and performance among the recurrent neural network architectures analyzed. We conclude by hypothesizing possible guidelines for selecting a suitable model for the specific task at hand.
Copyright information
© 2017 The Author(s)
About this chapter
Cite this chapter
Bianchi, F.M., Maiorino, E., Kampffmeyer, M.C., Rizzi, A., Jenssen, R. (2017). Conclusions. In: Recurrent Neural Networks for Short-Term Load Forecasting. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-70338-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-70337-4
Online ISBN: 978-3-319-70338-1
eBook Packages: Computer Science, Computer Science (R0)