Abstract

In this work, we propose a recurrent deep learning method for modeling a complex system. We chose a Deep Elman neural network with different structures and sigmoidal activation functions. The emphasis of the paper is on comparing modeling results for a greenhouse and demonstrating the abilities of the Deep Elman neural network in the modeling step. For this purpose, we used training and validation datasets. Simulation results demonstrate the ability and efficiency of the Deep Elman neural network with two hidden layers.
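The architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: an Elman network keeps a context copy of each hidden layer's previous activation and feeds it back as an extra input at the next time step. All layer sizes, the weight initialisation, and the linear output layer are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DeepElmanRNN:
    """Elman recurrent network with two hidden layers (illustrative sketch).

    Each hidden layer has a context vector holding its activation from the
    previous time step; the context is concatenated with the layer's input.
    """

    def __init__(self, n_in, n_h1, n_h2, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # input + context-1 -> hidden layer 1
        self.W1 = rng.normal(0.0, 0.1, (n_h1, n_in + n_h1))
        # hidden 1 + context-2 -> hidden layer 2
        self.W2 = rng.normal(0.0, 0.1, (n_h2, n_h1 + n_h2))
        # hidden 2 -> output (linear, suitable for regression targets)
        self.W3 = rng.normal(0.0, 0.1, (n_out, n_h2))
        self.c1 = np.zeros(n_h1)  # context of hidden layer 1
        self.c2 = np.zeros(n_h2)  # context of hidden layer 2

    def step(self, x):
        # sigmoidal activations in both hidden layers, as in the paper
        h1 = sigmoid(self.W1 @ np.concatenate([x, self.c1]))
        h2 = sigmoid(self.W2 @ np.concatenate([h1, self.c2]))
        y = self.W3 @ h2
        self.c1, self.c2 = h1, h2  # store contexts for the next time step
        return y

# Feed a short random input sequence through the network, one step at a time.
net = DeepElmanRNN(n_in=3, n_h1=8, n_h2=8, n_out=1)
inputs = np.random.default_rng(1).normal(size=(5, 3))
outputs = [net.step(x) for x in inputs]
```

Training such a network (e.g. by backpropagation through time on greenhouse climate data) is omitted here; the sketch only shows how the two context layers give the model its recurrent memory.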

Keywords

Greenhouse · Elman neural network · Modeling · Recurrent neural network


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Control and Energy Management Laboratory (CEM-Lab), University of Gabes, Gabes, Tunisia
  2. Control and Energy Management Laboratory (CEM-Lab), University of Sfax, Sfax, Tunisia
