# Proximity Forest: an effective and scalable distance-based classifier for time series

## Abstract

Research into the classification of time series has made enormous progress in the last decade. The UCR time series archive has played a significant role in challenging and guiding the development of new learners for time series classification. The largest dataset in the UCR archive holds only 10,000 time series, which may explain why the primary research focus has been on creating algorithms that have high accuracy on relatively small datasets. This paper introduces Proximity Forest, an algorithm that learns accurate models from datasets with millions of time series, and classifies a time series in milliseconds. The models are ensembles of highly randomized Proximity Trees. Whereas conventional decision trees branch on attribute values (and usually perform poorly on time series), Proximity Trees branch on the proximity of time series to one exemplar time series or another, allowing us to leverage decades of work on developing relevant measures for time series. Proximity Forest gains both efficiency and accuracy by stochastic selection of both exemplars and similarity measures. Our work is motivated by recent time series applications that provide orders of magnitude more time series than the UCR benchmarks. Our experiments demonstrate that Proximity Forest is highly competitive on the UCR archive: it ranks among the most accurate classifiers while being significantly faster. We demonstrate on a 1M time series Earth observation dataset that Proximity Forest retains this accuracy on datasets that are many orders of magnitude greater than those in the UCR repository, while learning its models at least 100,000 times faster than the current state-of-the-art models Elastic Ensemble and COTE.
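The core idea described above can be illustrated with a minimal sketch of a single proximity split: choose a similarity measure at random, choose one random exemplar per class, and route a query down the branch of its nearest exemplar. This is an illustrative simplification, not the authors' implementation; the names (`make_splitter`, `route`) are hypothetical, and Euclidean distance stands in for the paper's pool of elastic measures (DTW, WDTW, ERP, MSM, etc.).

```python
import random
import numpy as np

def euclidean(a, b):
    # Stand-in for one of the candidate similarity measures; the paper
    # samples from a pool of elastic measures such as DTW and MSM.
    return float(np.linalg.norm(a - b))

def make_splitter(series, labels, measures, rng):
    """Build one randomized proximity split (a sketch of the idea):
    pick a measure at random and one random exemplar per class."""
    measure = rng.choice(measures)
    exemplars = []
    for cls in sorted(set(labels)):
        candidates = [s for s, y in zip(series, labels) if y == cls]
        exemplars.append((cls, candidates[rng.randrange(len(candidates))]))

    def route(query):
        # Branch on whichever exemplar the query is closest to.
        dists = [measure(query, ex) for _, ex in exemplars]
        return int(np.argmin(dists))

    return exemplars, route

# Toy data: two well-separated classes of short series.
rng = random.Random(42)
series = [np.array([0.0, 0.1, 0.2]), np.array([5.0, 5.1, 5.2]),
          np.array([0.1, 0.2, 0.3]), np.array([4.9, 5.0, 5.1])]
labels = [0, 1, 0, 1]

exemplars, route = make_splitter(series, labels, [euclidean], rng)
branch = route(np.array([0.05, 0.15, 0.25]))  # routed toward the class-0 exemplar
```

A full Proximity Tree would recurse on the series routed to each branch until nodes are pure, and a Proximity Forest would ensemble many such randomized trees by majority vote; the randomization of exemplars and measures is what makes each split cheap to build.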

## Keywords

Time series classification · Scalable classification · Time-warp similarity measures · Ensemble

## Notes

### Acknowledgements

This research was supported by the Australian Research Council under Grant DE170100037. This material is based upon work supported by the Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development (AOARD) under award number FA2386-17-1-4036. We are grateful to the editor and anonymous reviewers whose suggestions and comments have greatly strengthened the paper. The authors would also like to thank Prof Eamonn Keogh and all of the people who have contributed to the UCR time series classification archive.

## References

- Bagnall A, Lines J, Bostrom A, Large J, Keogh E (2017) The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Min Knowl Discov 31(3):606–660
- Bagnall A, Lines J, Hills J, Bostrom A (2015) Time-series classification with COTE: the collective of transformation-based ensembles. IEEE Trans Knowl Data Eng 27(9):2522–2535
- Balakrishnan S, Madigan D (2006) Decision trees for functional variables. In: IEEE international conference on data mining (ICDM-06), pp 798–802
- Bernhardsson E (2013) Indexing with annoy. https://github.com/spotify/annoy. Accessed 23 March 2018
- Breiman L (2001) Random forests. Mach Learn 45(1):5–32
- Chen L, Ng R (2004) On the marriage of lp-norms and edit distance. In: Proceedings of the thirtieth international conference on very large data bases, vol 30, pp 792–803. VLDB Endowment
- Chen L, Özsu MT, Oria V (2005) Robust and fast similarity search for moving object trajectories. In: Proceedings of the 2005 ACM SIGMOD international conference on management of data, pp 491–502. ACM
- Chen Y, Keogh E, Hu B, Begum N, Bagnall A, Mueen A, Batista G (2015) The UCR time series classification archive. www.cs.ucr.edu/~eamonn/time_series_data/. Accessed 23 March 2018
- Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
- Deng H, Runger G, Tuv E, Vladimir M (2013) A time series forest for classification and feature extraction. Inf Sci 239:142–153
- Douzal-Chouakria A, Amblard C (2012) Classification trees for time series. Pattern Recognit 45(3):1076–1091
- Geurts P, Ernst D, Wehenkel L (2006) Extremely randomized trees. Mach Learn 63(1):3–42
- Górecki T, Łuczak M (2013) Using derivatives in time series classification. Data Min Knowl Discov 26(2):310–331
- Grabocka J, Wistuba M, Schmidt-Thieme L (2016) Fast classification of univariate and multivariate time series through shapelet discovery. Knowl Inf Syst 49(2):429–454
- Haghiri S, Ghoshdastidar D, von Luxburg U (2017) Comparison-based nearest neighbor search. arXiv e-prints, arXiv:1704.01460
- Haghiri S, Garreau D, von Luxburg U (2018) Comparison-based random forests. arXiv e-prints, arXiv:1806.06616
- Hamooni H, Mueen A (2014) Dual-domain hierarchical classification of phonetic time series. In: 2014 IEEE international conference on data mining, pp 160–169. IEEE
- Hills J, Lines J, Baranauskas E, Mapp J, Bagnall A (2014) Classification of time series by shapelet transformation. Data Min Knowl Discov 28(4):851–881
- Ho TK (1995) Random decision forests. In: Proceedings of the third international conference on document analysis and recognition, 1995, vol 1, pp 278–282. IEEE
- Jeong YS, Jeong MK, Omitaomu OA (2011) Weighted dynamic time warping for time series classification. Pattern Recognit 44(9):2231–2240
- Karlsson I, Papapetrou P, Boström H (2016) Generalized random shapelet forests. Data Min Knowl Discov 30(5):1053–1085
- Keogh E, Wei L, Xi X, Lee SH, Vlachos M (2006) LB_Keogh supports exact indexing of shapes under rotation invariance with arbitrary representations and distance measures. In: Proceedings of the 32nd international conference on very large data bases, pp 882–893. VLDB Endowment
- Keogh EJ, Pazzani MJ (2001) Derivative dynamic time warping. In: Proceedings of the 2001 SIAM international conference on data mining, pp 1–11. SIAM
- Lemire D (2009) Faster retrieval with a two-pass dynamic-time-warping lower bound. Pattern Recognit 42(9):2169–2180
- Lifshits Y (2010) Nearest neighbor search: algorithmic perspective. SIGSPATIAL Spec 2(2):12–15
- Lin J, Khade R, Li Y (2012) Rotation-invariant similarity in time series using bag-of-patterns representation. J Intell Inf Syst 39(2):287–315
- Lines J, Bagnall A (2015) Time series classification with ensembles of elastic distance measures. Data Min Knowl Discov 29(3):565
- Marteau PF (2009) Time warp edit distance with stiffness adjustment for time series matching. IEEE Trans Pattern Anal Mach Intell 31(2):306–318
- Marteau PF (2016) Times series averaging and denoising from a probabilistic perspective on time-elastic kernels. arXiv preprint, arXiv:1611.09194
- Muja M. FLANN: Fast Library for Approximate Nearest Neighbors. www.cs.ubc.ca/research/flann/. Accessed 23 March 2018
- Pękalska E, Duin RP, Paclík P (2006) Prototype selection for dissimilarity-based classifiers. Pattern Recognit 39(2):189–208
- Petitjean F, Forestier G, Webb GI, Nicholson AE, Chen Y, Keogh E (2014) Dynamic time warping averaging of time series allows faster and more accurate classification. In: 2014 IEEE international conference on data mining, pp 470–479. IEEE
- Petitjean F, Forestier G, Webb GI, Nicholson AE, Chen Y, Keogh E (2016) Faster and more accurate classification of time series by exploiting a novel dynamic time warping averaging algorithm. Knowl Inf Syst 47(1):1–26
- Petitjean F, Gançarski P (2012) Summarizing a set of time series by averaging: from Steiner sequence to compact multiple alignment. Theor Comput Sci 414(1):76–91
- Rakthanmanon T, Keogh E (2013) Fast shapelets: a scalable algorithm for discovering time series shapelets. In: Proceedings of the 13th SIAM international conference on data mining, pp 668–676. SIAM
- Sakoe H, Chiba S (1971) A dynamic programming approach to continuous speech recognition. In: Proceedings of the seventh international congress on acoustics, vol 3, pp 65–69. Budapest, Hungary
- Sakoe H, Chiba S (1978) Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans Acoust Speech Signal Process 26(1):43–49
- Sathe S, Aggarwal CC (2017) Similarity forests. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’17, pp 395–403. ACM. https://doi.org/10.1145/3097983.3098046
- Schäfer P (2015) The BOSS is concerned with time series classification in the presence of noise. Data Min Knowl Discov 29(6):1505–1530
- Schäfer P (2015) Scalable time series classification. Data Min Knowl Discov 2:1–26
- Schäfer P, Högqvist M (2012) SFA: a symbolic fourier approximation and index for similarity search in high dimensional datasets. In: Proceedings of the 15th international conference on extending database technology, EDBT ’12, pp 516–527. ACM. https://doi.org/10.1145/2247596.2247656
- Schäfer P, Leser U (2017) Fast and accurate time series classification with WEASEL. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 637–646. ACM
- Senin P, Malinchik S (2013) SAX-VSM: Interpretable time series classification using SAX and vector space model. In: 2013 IEEE 13th international conference on data mining, pp 1175–1180. IEEE
- Stefan A, Athitsos V, Das G (2013) The move-split-merge metric for time series. IEEE Trans Knowl Data Eng 25(6):1425–1438
- Tan CW, Webb GI, Petitjean F (2017) Indexing and classifying gigabytes of time series under time warping. In: Proceedings of the 2017 SIAM international conference on data mining, pp 282–290. SIAM
- Ting KM, Zhu Y, Carman M, Zhu Y, Zhou ZH (2016) Overcoming key weaknesses of distance-based neighbourhood methods using a data dependent dissimilarity measure. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp 1205–1214. ACM
- Ueno K, Xi X, Keogh E, Lee DJ (2006) Anytime classification using the nearest neighbor algorithm with applications to stream mining. In: 6th international conference on data mining, 2006. ICDM’06, pp 623–632. IEEE
- Vlachos M, Hadjieleftheriou M, Gunopulos D, Keogh E (2006) Indexing multidimensional time-series. Int J Very Large Data Bases 15(1):1–20
- Wang X, Mueen A, Ding H, Trajcevski G, Scheuermann P, Keogh E (2013) Experimental comparison of representation methods and distance measures for time series data. Data Min Knowl Discov 26(2):275–309
- Yamada Y, Suzuki E, Yokoi H, Takabayashi K (2003) Decision-tree induction from time-series data based on a standard-example split test. In: Proceedings of the twentieth international conference on international conference on machine learning, ICML’03, pp 840–847. AAAI Press. http://dl.acm.org/citation.cfm?id=3041838.3041944
- Ye L, Keogh E (2011) Time series shapelets: a novel technique that allows accurate, interpretable and fast classification. Data Min Knowl Discov 22(1):149–182