Proximity Forest: an effective and scalable distance-based classifier for time series

  • Benjamin Lucas
  • Ahmed Shifaz
  • Charlotte Pelletier
  • Lachlan O’Neill
  • Nayyar Zaidi
  • Bart Goethals
  • François Petitjean
  • Geoffrey I. Webb


Research into the classification of time series has made enormous progress in the last decade. The UCR time series archive has played a significant role in challenging and guiding the development of new learners for time series classification. The largest dataset in the UCR archive holds only 10,000 time series, which may explain why the primary research focus has been on creating algorithms that achieve high accuracy on relatively small datasets. This paper introduces Proximity Forest, an algorithm that learns accurate models from datasets with millions of time series and classifies a time series in milliseconds. The models are ensembles of highly randomized Proximity Trees. Whereas conventional decision trees branch on attribute values (and usually perform poorly on time series), Proximity Trees branch on the proximity of a time series to one exemplar time series or another, allowing us to leverage decades of work on developing relevant similarity measures for time series. Proximity Forest gains both efficiency and accuracy through stochastic selection of both exemplars and similarity measures. Our work is motivated by recent time series applications that provide orders of magnitude more time series than the UCR benchmarks. Our experiments demonstrate that Proximity Forest is highly competitive on the UCR archive: it ranks among the most accurate classifiers while being significantly faster. We demonstrate on a 1M time series Earth observation dataset that Proximity Forest retains this accuracy on datasets that are many orders of magnitude larger than those in the UCR repository, while learning its models at least 100,000 times faster than the current state-of-the-art models, Elastic Ensemble and COTE.
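The branching idea described above can be illustrated with a minimal Python sketch: a node picks one random exemplar per class and routes each incoming series down the branch of its nearest exemplar under a chosen elastic measure. This sketch uses plain DTW as the measure and illustrative function names; the actual algorithm randomly selects among many parameterized measures and recurses to build full trees, so this is a simplified reading of the abstract, not the authors' implementation.

```python
import random

def dtw(a, b):
    """Full dynamic-time-warping distance (squared-error cost, no window)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]

def proximity_split(series, labels, rng):
    """One tree node: choose a random exemplar per class, then send each
    series to the branch of its nearest exemplar under the measure."""
    classes = sorted(set(labels))
    exemplars = {c: rng.choice([s for s, y in zip(series, labels) if y == c])
                 for c in classes}
    branches = {c: [] for c in classes}
    for s, y in zip(series, labels):
        nearest = min(classes, key=lambda c: dtw(s, exemplars[c]))
        branches[nearest].append((s, y))
    return exemplars, branches

# Tiny usage example with two well-separated classes.
rng = random.Random(0)
series = [[0, 0, 0, 0], [0, 0, 1, 0], [5, 5, 5, 5], [5, 6, 5, 5]]
labels = [0, 0, 1, 1]
exemplars, branches = proximity_split(series, labels, rng)
```

Because both the exemplars and the measure are drawn stochastically at every node, each Proximity Tree in the forest is cheap to build and highly diverse, which is where the ensemble derives its accuracy.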


Time series classification · Scalable classification · Time-warp similarity measures · Ensemble



This research was supported by the Australian Research Council under Grant DE170100037. This material is based upon work supported by the Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development (AOARD) under award number FA2386-17-1-4036. We are grateful to the editor and anonymous reviewers whose suggestions and comments have greatly strengthened the paper. The authors would also like to thank Prof Eamonn Keogh and all of the people who have contributed to the UCR time series classification archive.



Copyright information

© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Information Technology, Monash University, Melbourne, Australia
