Theory of Outlier Ensembles

Abstract

Outlier detection is an unsupervised problem, in which labels are not available with the data records [2]. As a result, it is generally more challenging to design ensemble analysis algorithms for outlier detection than for supervised settings such as classification. In particular, methods that require the use of labels in intermediate steps of the algorithm cannot be generalized to outlier detection.
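Because labels are unavailable at every stage, base detectors must be constructed and combined using only their output scores; diversity can be induced by randomizing the feature space, and the scores can be averaged after normalization. The sketch below is a minimal illustration of this idea in the spirit of feature bagging [22], not this chapter's own algorithm; the function names (knn_scores, feature_bagging_ensemble) and parameter choices are hypothetical, and only NumPy is assumed.

```python
# Minimal sketch of an unsupervised outlier ensemble: each base detector
# scores points by the distance to their k-th nearest neighbor on a
# random feature subset, and the ensemble averages z-normalized scores.
# No labels are used at any step.
import numpy as np

def knn_scores(X, k=5):
    """Outlier score = distance to the k-th nearest neighbor."""
    # Pairwise Euclidean distances (O(n^2) memory; fine for small data).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    return np.sort(d, axis=1)[:, k - 1]  # k-th smallest distance per point

def feature_bagging_ensemble(X, n_detectors=25, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_detectors):
        # Random feature subset of size between d/2 and d-1, as in [22].
        m = rng.integers(d // 2, d) if d > 1 else 1
        cols = rng.choice(d, size=max(m, 1), replace=False)
        s = knn_scores(X[:, cols], k=k)
        # z-normalize so base detectors are comparable before averaging.
        scores += (s - s.mean()) / (s.std() + 1e-12)
    return scores / n_detectors

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 6))
    X[0] += 6.0                                   # plant one obvious outlier
    s = feature_bagging_ensemble(X)
    print("top-3 outliers:", np.argsort(-s)[:3])  # index 0 should rank first
```

Averaging normalized scores across randomized base detectors reduces the variance of any single detector without ever consulting labels, which is the sense in which ensemble principles carry over to the unsupervised setting.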

References

  1. C. C. Aggarwal. Outlier Ensembles: Position Paper. ACM SIGKDD Explorations, 14(2), pp. 49–58, December 2012.
  2. C. C. Aggarwal. Outlier Analysis, Second Edition. Springer, 2017.
  3. C. C. Aggarwal and P. S. Yu. Outlier Detection in Graph Streams. IEEE ICDE Conference, 2011.
  4. C. C. Aggarwal and S. Sathe. Theoretical Foundations and Algorithms for Outlier Ensembles. ACM SIGKDD Explorations, 17(1), June 2015.
  5. L. Breiman. Bagging Predictors. Machine Learning, 24(2), pp. 123–140, 1996.
  6. L. Breiman. Random Forests. Machine Learning, 45(1), pp. 5–32, 2001.
  7. G. Brown, J. Wyatt, R. Harris, and X. Yao. Diversity Creation Methods: A Survey and Categorisation. Information Fusion, 6(1), pp. 5–20, 2005.
  8. R. Bryll, R. Gutierrez-Osuna, and F. Quek. Attribute Bagging: Improving Accuracy of Classifier Ensembles by Using Random Feature Subsets. Pattern Recognition, 36(6), pp. 1291–1302, 2003.
  9. P. Buhlmann and B. Yu. Analyzing Bagging. Annals of Statistics, 30(4), pp. 927–961, 2002.
  10. P. Buhlmann. Bagging, Subagging and Bragging for Improving Some Prediction Algorithms. Recent Advances and Trends in Nonparametric Statistics, Elsevier, 2003.
  11. A. Buja and W. Stuetzle. Observations on Bagging. Statistica Sinica, 16(2), pp. 323–351, 2006.
  12. M. Denil, D. Matheson, and N. De Freitas. Narrowing the Gap: Random Forests in Theory and in Practice. ICML Conference, pp. 665–673, 2014.
  13. T. Dietterich. Ensemble Methods in Machine Learning. First International Workshop on Multiple Classifier Systems, 2000.
  14. Y. Freund and R. Schapire. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Computational Learning Theory, 1995.
  15. Y. Freund and R. Schapire. Experiments with a New Boosting Algorithm. ICML Conference, pp. 148–156, 1996.
  16. J. Friedman. On Bias, Variance, 0/1 Loss, and the Curse of Dimensionality. Data Mining and Knowledge Discovery, 1(1), pp. 55–77, 1997.
  17. S. Geman, E. Bienenstock, and R. Doursat. Neural Networks and the Bias/Variance Dilemma. Neural Computation, 4(1), pp. 1–58, 1992.
  18. T. K. Ho. Random Decision Forests. Third International Conference on Document Analysis and Recognition, 1995. Extended version appears in IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8), pp. 832–844, 1998.
  19. T. K. Ho. Nearest Neighbors in Random Subspaces. Lecture Notes in Computer Science, Vol. 1451, pp. 640–648, Proceedings of the Joint IAPR Workshops SSPR'98 and SPR'98, 1998. http://link.springer.com/chapter/10.1007/BFb0033288
  20. R. Kohavi and D. H. Wolpert. Bias Plus Variance Decomposition for Zero-One Loss Functions. ICML Conference, 1996.
  21. E. Kong and T. Dietterich. Error-Correcting Output Coding Corrects Bias and Variance. Proceedings of the Twelfth International Conference on Machine Learning, pp. 313–321, 1995.
  22. A. Lazarevic and V. Kumar. Feature Bagging for Outlier Detection. ACM KDD Conference, 2005.
  23. F. T. Liu, K. M. Ting, and Z.-H. Zhou. Isolation Forest. ICDM Conference, 2008. Extended version appears in ACM Transactions on Knowledge Discovery from Data (TKDD), 6(1), 3, 2012.
  24. R. Michalski, I. Mozetic, J. Hong, and N. Lavrac. The Multi-Purpose Incremental Learning System AQ15 and Its Testing Applications to Three Medical Domains. Proceedings of the Fifth National Conference on Artificial Intelligence, pp. 1041–1045, 1986.
  25. S. Rayana and L. Akoglu. Less is More: Building Selective Anomaly Ensembles with Application to Event Detection in Temporal Graphs. SDM Conference, 2015.
  26. S. Rayana and L. Akoglu. Less is More: Building Selective Anomaly Ensembles. ACM Transactions on Knowledge Discovery from Data, to appear, 2016.
  27. L. Rokach. Pattern Classification Using Ensemble Methods. World Scientific Publishing Company, 2010.
  28. M. Salehi, C. Leckie, M. Moshtaghi, and T. Vaithianathan. A Relevance Weighted Ensemble Model for Anomaly Detection in Switching Data Streams. Advances in Knowledge Discovery and Data Mining, pp. 461–473, 2014.
  29. G. Seni and J. Elder. Ensemble Methods in Data Mining: Improving Accuracy through Combining Predictions. Synthesis Lectures in Data Mining and Knowledge Discovery, Morgan and Claypool, 2010.
  30. R. Tibshirani. Bias, Variance, and Prediction Error for Classification Rules. Technical Report, Statistics Department, University of Toronto, 1996.
  31. G. Valentini and T. Dietterich. Bias-Variance Analysis of Support Vector Machines for the Development of SVM-Based Ensemble Methods. Journal of Machine Learning Research, 5, pp. 725–774, 2004.
  32. A. Zimek, M. Gaudet, R. Campello, and J. Sander. Subsampling for Efficient and Effective Unsupervised Outlier Detection Ensembles. ACM KDD Conference, 2013.
  33. Z.-H. Zhou. Ensemble Methods: Foundations and Algorithms. Chapman and Hall/CRC Press, 2012.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. IBM T. J. Watson Research Center, Yorktown Heights, USA
