
Usage of Multiple RTL Features for Earthquakes Prediction

  • Conference paper
Computational Science and Its Applications – ICCSA 2019 (ICCSA 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11619)


Abstract

We construct a classification model that predicts whether an earthquake with a magnitude above a given threshold will take place at a given location within a time range of 30–180 days from now. A common approach is to use expert-generated features, such as Region-Time-Length (RTL) features, as input to the model. The proposed approach aggregates multiple generated RTL features to take into account effects at various scales and to improve the quality of a machine learning model. For our data on earthquakes in Japan in 1992–2005 and predictions at the locations given in this database, the best model achieves precision as high as 0.95 and recall as high as 0.98.
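The aggregation idea can be pictured with a minimal sketch. This is an illustration only, not the authors' implementation: `compute_rtl`, the chosen (r0, t0) scale pairs, the toy catalog, and the toy labels are all hypothetical placeholders, and the classifier is a generic gradient boosting model.

```python
# Minimal sketch (not the authors' code): aggregate RTL-style features
# computed at several assumed (r0, t0) scales and train a boosted-tree
# classifier on them.  `compute_rtl` is a hypothetical placeholder.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n_cells = 1000                                   # toy (location, time) cells

def compute_rtl(n, r0_km, t0_days):
    """Placeholder for an RTL feature generator at one spatial/temporal scale."""
    return rng.normal(size=n)                    # stands in for a real computation

scales = [(10, 30), (25, 90), (50, 180)]         # assumed (r0 km, t0 days) pairs
X = np.column_stack([compute_rtl(n_cells, r0, t0) for r0, t0 in scales])
y = (rng.random(n_cells) < 0.1).astype(int)      # toy labels: 1 = large earthquake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(precision_score(y_te, pred, zero_division=0), recall_score(y_te, pred))
```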

Supported by Skoltech.



Acknowledgements

The research was partially supported by the Russian Foundation for Basic Research grant 16-29-09649 ofi m.

Author information

Correspondence to P. Proskura.


A Quality Metrics for Classification Problem

We first introduce the necessary definitions. The classification problem can be formulated as deciding whether a given object belongs to the target class or not, which gives four possible outcomes (a short sketch computing these counts follows the list):

  • True Positive—if the object belongs to the target class and we predict that it belongs.

  • True Negative—if the object doesn’t belong to the target class and we predict that it doesn’t.

  • False Positive—if the object doesn’t belong to the target class but we predict that it does.

  • False Negative—if the object belongs to the target class but we predict that it doesn’t.
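
As a small illustration (not taken from the paper), the four counts can be obtained from toy labels with scikit-learn; `y_true` and `y_pred` below are made-up arrays.

```python
# Sketch: the four outcome counts for a binary classifier on toy labels
# (1 = target class, 0 = otherwise).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, tn, fp, fn)   # 3 true positives, 3 true negatives, 1 FP, 1 FN
```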

The precision score quantifies the ability of a classifier not to label a negative example as positive. It is the probability that a positive prediction made by the classifier is correct. The score lies in the range [0, 1], where 0 is the worst value and 1 is perfect. The precision score is defined as:

$$\mathbf{Precision} = \frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive} + \mathrm{False\ Positive}}$$

The recall score quantifies the ability of the classifier to find all the positive samples. It is the fraction of positive samples that the classifier labels as positive. The score lies in the range [0, 1], where 0 is the worst value and 1 is perfect.

$$\mathbf{Recall} = \frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive} + \mathrm{False\ Negative}}$$
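
A minimal sketch with the same toy labels checks the two formulas against scikit-learn's `precision_score` and `recall_score`:

```python
# Sketch: precision and recall from the counts above, checked against scikit-learn.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp, fp, fn = 3, 1, 1
precision = tp / (tp + fp)                    # 0.75
recall = tp / (tp + fn)                       # 0.75
print(precision, precision_score(y_true, y_pred))
print(recall, recall_score(y_true, y_pred))
```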

The F1-score is a single metric that combines precision and recall via their harmonic mean. It reaches its best value at 1 (perfect precision and recall) and its worst at 0.

$$\mathbf{F1} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
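
Continuing the toy example, the harmonic-mean formula can be checked against `f1_score` (sketch only):

```python
# Sketch: F1 as the harmonic mean of precision and recall (same toy labels).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

precision, recall = 0.75, 0.75                # from the sketch above
f1 = 2 * precision * recall / (precision + recall)
print(f1, f1_score(y_true, y_pred))           # both give 0.75
```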

The ROC AUC score is the area under the ROC curve, which plots the True Positive Rate against the False Positive Rate, defined as

$$\mathrm{True\ Positive\ Rate} = \frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive} + \mathrm{False\ Negative}}$$
$$\mathrm{False\ Positive\ Rate} = \frac{\mathrm{False\ Positive}}{\mathrm{False\ Positive} + \mathrm{True\ Negative}}$$

The ROC AUC score measures the quality of a binary classifier. The best value is 1; a value of 0.5 corresponds to random classification.
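
A short sketch with assumed toy scores shows how the ROC curve and its area are obtained with scikit-learn:

```python
# Sketch: ROC curve points and ROC AUC from assumed predicted scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1])  # toy probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points of the ROC curve
print(roc_auc_score(y_true, y_score))               # 0.9375 for these toy scores
```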

The PR AUC score is the area under the Precision-Recall curve, which plots Precision against Recall. The Precision-Recall curve is a useful measure of prediction quality when the classes are highly imbalanced. The curve of a perfect classifier ends at (1.0, 1.0), and the area under it equals 1.
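
With the same assumed toy scores, the area under the Precision-Recall curve can be computed as follows; `average_precision_score` is printed as a closely related summary:

```python
# Sketch: Precision-Recall curve and its area for the same assumed scores.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(auc(recall, precision))                    # area under the PR curve
print(average_precision_score(y_true, y_score))  # related summary of the curve
```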


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Proskura, P., Zaytsev, A., Braslavsky, I., Egorov, E., Burnaev, E. (2019). Usage of Multiple RTL Features for Earthquakes Prediction. In: Misra, S., et al. Computational Science and Its Applications – ICCSA 2019. ICCSA 2019. Lecture Notes in Computer Science, vol 11619. Springer, Cham. https://doi.org/10.1007/978-3-030-24289-3_41


  • DOI: https://doi.org/10.1007/978-3-030-24289-3_41


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-24288-6

  • Online ISBN: 978-3-030-24289-3

  • eBook Packages: Computer Science, Computer Science (R0)
