
Forecasting Natural Gas Flows in Large Networks

  • Mauro Dell’Amico
  • Natalia Selini Hadjidimitriou (email author)
  • Thorsten Koch
  • Milena Petkovic
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10710)

Abstract

Natural gas is the cleanest fossil fuel, as its combustion produces the lowest amount of residues. Over the years, natural gas usage has increased significantly, and accurate forecasting is crucial for maintaining gas supplies, transportation and network stability. This paper presents two methodologies for identifying the optimal configuration of parameters of a Neural Network (NN) to forecast the next 24 h of gas flow at each node of a large gas network.

The first applies Design of Experiments (DOE) to obtain a good initial solution quickly. An orthogonal design of 18 experiments, selected from a total of 4,374 combinations of seven parameters (training algorithm, transfer function, regularization, learning rate, lags, and epochs), is used. The best result then serves as the initial solution of an extended experiment, in which Simulated Annealing searches for the optimal design among 89,100 possible parameter combinations.
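
A minimal Python sketch of this second-stage search is shown below. The parameter grid, the evaluate stub, and the cooling schedule are illustrative assumptions, not the paper's exact levels or objective; in the paper, evaluating a configuration means training the NN and measuring its 24 h forecast error.

```python
import math
import random

# Illustrative discrete levels for each parameter (assumed, not the
# paper's exact design); the DOE stage picks the starting configuration.
PARAM_GRID = {
    "training_algorithm": ["lm", "scg", "bfgs"],
    "transfer_function": ["tansig", "logsig", "purelin"],
    "regularization": [0.0, 0.1, 0.3, 0.5],
    "learning_rate": [0.001, 0.01, 0.05, 0.1],
    "lags": [24, 48, 168],
    "epochs": [100, 300, 500],
}

def evaluate(config):
    # Placeholder objective: in the paper this would train the NN and
    # return its 24 h forecast error. Deterministic dummy for the sketch.
    rng = random.Random(str(sorted(config.items())))
    return rng.random()

def neighbour(config):
    # Move one randomly chosen parameter to a different level.
    new = dict(config)
    name = random.choice(list(PARAM_GRID))
    new[name] = random.choice([v for v in PARAM_GRID[name] if v != config[name]])
    return new

def simulated_annealing(initial, t0=1.0, cooling=0.95, steps=500):
    current = best = initial
    f_cur = f_best = evaluate(initial)
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        f_cand = evaluate(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t).
        if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling
    return best, f_best

# Start from the best DOE configuration (illustrative values).
doe_best = {"training_algorithm": "lm", "transfer_function": "tansig",
            "regularization": 0.1, "learning_rate": 0.01,
            "lags": 24, "epochs": 300}
config, error = simulated_annealing(doe_best)
```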

The second technique applies a Genetic Algorithm (GA) to select the optimal parameters of a recurrent neural network for time series forecasting. The GA uses a binary representation of candidate solutions, in which subsets of bits in the bit string encode the values of the recurrent network's parameters.
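
The following Python sketch illustrates this binary encoding under assumed bit-field widths and parameter levels (the abstract does not specify them): each field of the bit string indexes one parameter's list of candidate values, and standard one-point crossover with bit-flip mutation evolves the population.

```python
import random

# Assumed bit-field layout: each parameter gets 2 bits indexing 4
# candidate levels (illustrative, not the paper's exact encoding).
FIELDS = [
    ("hidden_units", [5, 10, 20, 40]),
    ("lags", [24, 48, 96, 168]),
    ("learning_rate", [0.001, 0.01, 0.05, 0.1]),
    ("epochs", [100, 200, 300, 500]),
]
BITS_PER_FIELD = 2
GENOME_LEN = BITS_PER_FIELD * len(FIELDS)

def decode(bits):
    # Map the bit string to a recurrent-network configuration.
    config = {}
    for i, (name, values) in enumerate(FIELDS):
        chunk = bits[i * BITS_PER_FIELD:(i + 1) * BITS_PER_FIELD]
        config[name] = values[int("".join(map(str, chunk)), 2)]
    return config

def fitness(bits):
    # Placeholder: in the paper this trains the recurrent NN decoded
    # from `bits` and scores it by its 24 h forecast accuracy.
    rng = random.Random("".join(map(str, bits)))
    return rng.random()

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # one-point crossover
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

def run_ga(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            c1, c2 = crossover(*random.sample(parents, 2))
            children += [mutate(c1), mutate(c2)]
        pop = parents + children[:pop_size - len(parents)]
    return decode(max(pop, key=fitness))

best_config = run_ga()
```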

We tested these methods on three municipal nodes, using one and a half years of hourly gas flow data to train the network and 60 days for testing. The results show that both methodologies are promising in terms of the parameter configurations found and the resulting forecast error.
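
Assuming a chronological split (no shuffling), the evaluation setup corresponds to something like the following sketch; the exact calendar boundaries are an assumption here.

```python
HOURS_PER_DAY = 24
TRAIN_HOURS = int(1.5 * 365 * HOURS_PER_DAY)  # ~18 months of hourly flows
TEST_HOURS = 60 * HOURS_PER_DAY               # 60-day hold-out

def chronological_split(series):
    # Time series split: the test window must follow the training
    # window, so the series is never shuffled.
    train = series[:TRAIN_HOURS]
    test = series[TRAIN_HOURS:TRAIN_HOURS + TEST_HOURS]
    return train, test
```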

Keywords

Machine learning · Neural networks · Genetic algorithm · Simulated annealing · Design of Experiments (DOE) · Time series forecast


Acknowledgments

The authors would like to acknowledge networking support by the COST Action TD1207.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Mauro Dell’Amico (1)
  • Natalia Selini Hadjidimitriou (1), email author
  • Thorsten Koch (2)
  • Milena Petkovic (2)

  1. University of Modena and Reggio Emilia, Reggio Emilia, Italy
  2. Zuse Institute Berlin, Berlin, Germany
