Learning of Spatio-temporal Dynamics in Thermal Engineering

  • Matthias De Lozzo
  • Patricia Klotz
  • Béatrice Laurent
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 311)


Thermal engineering deals with the estimation of the temperature at different points and instants for a given set of boundary and initial conditions. To this end, an analytic model replaces accurate but computationally expensive numerical simulation models; it is independent of the boundary conditions and parameterized by the statistical learning of multidimensional temporal trajectories. This black-box model is a recursive neural network that emulates the temperatures of interest over time from knowledge of the initial conditions and exogenous variables alone.
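Such a recursive emulation amounts to a one-step-ahead rollout: the network's prediction at time k is fed back as input at time k+1, together with the exogenous variables. A minimal sketch, where `emulate`, `step`, and the linear toy dynamics are illustrative stand-ins for the trained network (not the paper's actual model):

```python
import numpy as np

def emulate(step, T0, U):
    """Roll out a one-step-ahead surrogate recursively.

    step : callable mapping (current temperatures, exogenous inputs at
           time k) to the temperatures at time k+1, e.g. a trained net.
    T0   : initial temperatures, shape (d,).
    U    : exogenous variables over time, shape (n, m).
    Returns the emulated trajectory, shape (n + 1, d).
    """
    T = [np.asarray(T0, dtype=float)]
    for u in U:                      # feed each prediction back as input
        T.append(step(T[-1], u))
    return np.stack(T)

# Toy "trained" step: a linear cooling law standing in for the network.
A = 0.9 * np.eye(2)
B = np.array([[0.1], [0.05]])
step = lambda T, u: A @ T + (B @ u).ravel()

traj = emulate(step, T0=[300.0, 280.0], U=np.ones((5, 1)))
```

The rollout propagates prediction errors forward, which is why the quality of the one-step model governs the accuracy of the whole trajectory.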

The number of hidden neurons is selected by a non-asymptotic approach based upon the minimization of a penalized criterion. Methods such as the slope heuristic and the dimension jump enable the calibration of the penalty constant in the presence of an n-sample. In practice, their extrapolation to dependent data gives accurate results in terms of mean squared error.
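The dimension-jump calibration can be sketched as follows: for a grid of candidate penalty constants κ, record the dimension of the model minimizing the penalized criterion, locate the largest drop ("jump") in that selected dimension, and use twice the corresponding κ in the final penalty. A hedged sketch with illustrative names (`dimension_jump`) and toy risk values, not the paper's actual benchmark data:

```python
import numpy as np

def dimension_jump(dims, risks, kappas=None):
    """Calibrate the penalty constant by the dimension-jump method.

    dims  : candidate model dimensions D_m (e.g. number of weights).
    risks : corresponding empirical risks (training MSE).
    For each kappa, the model minimizing risks + kappa * dims is
    selected; kappa_hat sits just after the largest drop in the
    selected dimension, and the final penalty is 2 * kappa_hat * D_m.
    """
    dims = np.asarray(dims, dtype=float)
    risks = np.asarray(risks, dtype=float)
    if kappas is None:
        kappas = np.linspace(0.0, 1.0, 1001)
    selected = np.array([dims[np.argmin(risks + k * dims)]
                         for k in kappas])
    jump = np.argmax(-np.diff(selected))   # largest drop in dimension
    kappa_hat = kappas[jump + 1]
    best = np.argmin(risks + 2.0 * kappa_hat * dims)
    return kappa_hat, best

# Toy risk curve decreasing with dimension (illustrative values only).
dims = np.array([2, 4, 8, 16, 32])
risks = np.array([1.00, 0.50, 0.30, 0.28, 0.27])
kappa_hat, m_hat = dimension_jump(dims, risks)
```

The doubling of kappa_hat follows the slope-heuristic prescription that the minimal penalty, once located, should be multiplied by two to obtain the final penalty.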

The surrogate model and the model selection are successfully applied to an industrial benchmark.


Keywords: recursive neural network · nonuniform time step · model selection · penalized criterion · thermal engineering




  1. Lasance, C.J.M.: Ten Years of Boundary-Condition-Independent Compact Thermal Modeling of Electronic Parts: A Review. Heat Transfer Engineering 29(2), 149–168 (2008)
  2. Wang, Y.-J., Lin, C.-T.: Runge-Kutta Neural Network for Identification of Dynamical Systems in High Accuracy. IEEE Transactions on Neural Networks 9, 294–307 (1998)
  3. Ljung, L.: System Identification: Theory for the User, 2nd edn. Prentice Hall PTR (1999)
  4. Narendra, K.S.: Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks 1, 4–27 (1990)
  5. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press (1995)
  6. Igel, C., Hüsken, M.: Improving the Rprop Learning Algorithm. In: Proceedings of the Second International Symposium on Neural Computation (2000)
  7. Nguyen, D., Widrow, B.: Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. In: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 21–26 (1990)
  8. Arlot, S., Massart, P.: Data-driven Calibration of Penalties for Least-Squares Regression. Journal of Machine Learning Research 10, 245–279 (2009)
  9. Barron, A., Birgé, L., Massart, P.: Risk bounds for model selection via penalization. Probability Theory and Related Fields 113, 301–413 (1999)
  10. Baudry, J.-P., Maugis, C., Michel, B.: Slope heuristics: overview and implementation. Statistics and Computing 22(2), 455–470 (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Matthias De Lozzo (1, 2)
  • Patricia Klotz (2)
  • Béatrice Laurent (3)
  1. EPSILON Ingénierie, Labège Cedex, France
  2. The French Aerospace Lab ONERA, Toulouse Cedex 4, France
  3. Institut de Mathématiques de Toulouse, INSA Toulouse, Toulouse Cedex 4, France
