
A real-time typhoon eye detection method based on deep learning for meteorological information forensics



The development of meteorological satellite technology has made it feasible to observe cloud cover over the Earth’s surface, and the number of high-precision meteorological satellite images available has increased dramatically over the years. However, a gap remains between meteorological satellite cloud images and the true atmospheric information of the clouds they picture. Extracting the true atmospheric information from such “forged” satellite images in real time is therefore a challenging task. In this paper, we propose a deep-learning-based method for real-time typhoon eye detection in meteorological satellite cloud images. This approach is the first step toward detecting hidden information in satellite cloud images and provides important data support for recovering true typhoon information. Simulation experiments show that the proposed method identifies typhoons well: the positive sample accuracy rate, negative sample accuracy rate, and total average accuracy rate are 94.22%, 99.43%, and 96.83%, respectively. During testing, the average detection time per sample is 6 ms, which meets the requirement for real-time typhoon eye detection. Our method outperforms the k-nearest neighbors (KNN) and support vector machine (SVM) algorithms.
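The three accuracy figures reported above are per-class accuracies for a binary (eye / no eye) classifier. As a minimal illustration of how such figures are computed (this is not the authors' code; `class_accuracies` is a hypothetical helper, and the detector itself is assumed to output a 0/1 label per image), consider:

```python
def class_accuracies(y_true, y_pred):
    """Return (positive accuracy, negative accuracy, overall accuracy).

    y_true / y_pred are sequences of labels: 1 = typhoon eye present,
    0 = no eye. Positive accuracy is the fraction of eye images
    classified correctly; negative accuracy is the same for non-eye
    images; overall accuracy is computed over all samples.
    """
    pos = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    neg = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    pos_acc = sum(t == p for t, p in pos) / len(pos)
    neg_acc = sum(t == p for t, p in neg) / len(neg)
    total_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return pos_acc, neg_acc, total_acc
```

For example, if 3 of 4 eye images and all 4 non-eye images are classified correctly, `class_accuracies([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0])` returns `(0.75, 1.0, 0.875)`. Note that when the two classes are imbalanced, the overall accuracy is a sample-weighted mixture of the two per-class figures rather than their simple mean.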


Figures 1–6 (preview thumbnails; captions unavailable)




This work is supported by the National Natural Science Foundation of China (Grant no. 61802199) and the Student Practice Innovation Training Program Fund of the Nanjing University of Information Science and Technology (Grant no. 2017103000170).

Author information

Correspondence to Liling Zhao.



About this article


Cite this article

Zhao, L., Chen, Y. & Sheng, V.S. A real-time typhoon eye detection method based on deep learning for meteorological information forensics. J Real-Time Image Proc 17, 95–102 (2020). https://doi.org/10.1007/s11554-019-00899-2



Keywords

  • Deep learning
  • Image detection
  • Information forensics
  • Typhoon