Training Deep Neural Networks with Low Precision Input Data: A Hurricane Prediction Case Study

  • Albert Kahira
  • Leonardo Bautista Gomez
  • Rosa M. Badia
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11203)

Abstract

Training deep neural networks requires huge amounts of data. The next generation of intelligent systems will generate and utilise massive amounts of data that will be transferred along machine learning workflows. We study the effect of reducing the precision of this data at an early stage of the workflow (i.e., the input) on both the prediction accuracy and the learning behaviour of deep neural networks. We show that high precision data can be transformed to low precision before being fed to a neural network model with negligible loss in accuracy. As such, a high precision representation of the input data is not strictly necessary for some applications. These findings pave the way for applying deep learning in areas where acquiring high precision data is difficult due to memory and computational power constraints. We further present a hurricane prediction case study, in which deep neural networks predict the monthly number of hurricanes in the Atlantic Ocean. We train the model first with high precision input data and then with low precision data; the low precision input reduces prediction accuracy by less than 2%.
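The input-precision reduction described above can be sketched as a simple downcasting step applied before data enters the training pipeline. The following is a minimal illustration, not the authors' actual preprocessing code; the variable names and the use of synthetic sea-surface-temperature-like values are assumptions made for the example:

```python
import numpy as np

# Hypothetical high-precision climate inputs (values loosely resembling
# sea-surface temperatures in Kelvin); purely illustrative data.
rng = np.random.default_rng(42)
sst_high = rng.normal(loc=290.0, scale=5.0, size=(1000, 64)).astype(np.float64)

def to_low_precision(x, dtype=np.float16):
    """Downcast input features before they are fed to the model."""
    return x.astype(dtype)

sst_low = to_low_precision(sst_high)

# The information lost by the cast is tiny relative to the signal itself,
# which is why downstream prediction accuracy degrades only slightly.
rel_err = np.abs(sst_high - sst_low.astype(np.float64)) / np.abs(sst_high)
print(f"bytes per sample: {sst_high.nbytes // len(sst_high)} -> "
      f"{sst_low.nbytes // len(sst_low)}")
print(f"max relative error after cast: {rel_err.max():.2e}")
```

Casting from 64-bit to 16-bit floats shrinks storage and transfer volume by a factor of four, while half precision still carries roughly three significant decimal digits, which is often enough resolution for physical input fields.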

Keywords

Deep neural networks · Low precision · Hurricane prediction

Notes

Acknowledgment

The authors would like to thank Dr. Alicia Sanchez, Dr. Louis-Philippe Caron and Dr. Dario Garcia for the many helpful discussions and providing data for this research work.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 713673.

Albert Kahira has received financial support through the “la Caixa” INPhINIT Fellowship Grant for Doctoral studies at Spanish Research Centres of Excellence, “la Caixa” Banking Foundation, Barcelona, Spain.

This work is partly supported by the Spanish Government through the Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316 project, and by the Generalitat de Catalunya under contracts 2014-SGR-1051 and 2014-SGR-1272.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Albert Kahira (1, 2)
  • Leonardo Bautista Gomez (1), corresponding author
  • Rosa M. Badia (1, 3)
  1. Barcelona Supercomputing Center, Barcelona, Spain
  2. Universitat Politècnica de Catalunya, Barcelona, Spain
  3. Spanish National Research Council (CSIC), Madrid, Spain
