Which Outlier Detection Algorithm Should I Use?

  • Charu C. Aggarwal
  • Saket Sathe


Ensembles can improve the performance of base detectors in several different ways. The first is to use a single base detector in conjunction with a generic ensemble method such as feature bagging or subsampling. The second is to combine multiple base detectors in order to induce greater diversity. What is the impact of generic ensemble methods on various base detectors? What is the impact of combining these ensemble methods into a higher-level combination? This chapter discusses both of these ways of combining base detectors, as well as various techniques for squeezing the most out of ensemble methods.
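As a concrete illustration of the first kind of ensemble, the following is a minimal sketch (not the authors' implementation) that combines the two generic methods mentioned above, feature bagging and subsampling, around a simple k-nearest-neighbor distance base detector. All function names, parameter values, and the toy data are illustrative assumptions; the averaged score over many randomized rounds is what the ensemble contributes.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_outlier_score(data, subsample, k=5):
    """Score each point by its distance to the k-th nearest neighbor
    within the subsample (larger score = more outlying)."""
    # Pairwise distances from every point to every subsample point.
    d = np.linalg.norm(data[:, None, :] - subsample[None, :, :], axis=2)
    d.sort(axis=1)
    # Index k skips the zero self-distance for points inside the subsample.
    return d[:, min(k, subsample.shape[0] - 1)]

def ensemble_scores(data, rounds=25, sub_size=32, k=5):
    """Average the base-detector score over randomized rounds that
    apply feature bagging (random feature subsets) and subsampling
    (random row subsets) to induce diversity."""
    n, dim = data.shape
    total = np.zeros(n)
    for _ in range(rounds):
        feats = rng.choice(dim, size=max(2, dim // 2), replace=False)  # feature bagging
        rows = rng.choice(n, size=min(sub_size, n), replace=False)     # subsampling
        total += knn_outlier_score(data[:, feats], data[rows][:, feats], k)
    return total / rounds

# Toy data: one Gaussian cluster plus a single obvious outlier at index 100.
X = np.vstack([rng.normal(0, 1, (100, 4)), [[10.0, 10.0, 10.0, 10.0]]])
scores = ensemble_scores(X)
print(scores.argmax())  # index of the most outlying point
```

A single randomized round of this detector is unstable; the averaging step is what makes the combination reliable, which is the central question the chapter examines across different base detectors.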


Keywords: Outlier Detection, Ensemble Method, Kernel Principal Component Analysis, Base Detector, Local Outlier Factor



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. IBM T. J. Watson Research Center, Yorktown Heights, USA
