## Abstract

Concept drift is a serious problem confronting machine learning systems in a dynamic and ever-changing world. To manage concept drift, it may be useful to first quantify it by measuring the distance between the distributions that generate the data before and after the drift. Few methods exist to do so for multidimensional numeric data. This paper provides an in-depth analysis of the PCA-based change detection approach, identifies shortcomings of existing methods, and shows how this approach can be used to measure drift, not merely detect it.




## Appendices

### Experiments

Some of the experiments conducted involved generating data with a known theoretical, or “true”, Hellinger distance (HD). We describe here the process used to generate these data. Datasets were generated from the multivariate normal distribution. Random samples were generated through an “inverted” PCA approach:

independent univariate normal variables are first generated, and a rotation is then applied to introduce dependence between them. The HD can be attributed to a difference in mean, variance, or correlation. Data were generated for each value of HD between 0 and 1 in steps of 0.01, and for various sample sizes between 100 and 10,000.
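The inverted-PCA generation scheme can be sketched as follows, assuming numpy; the function name `inverted_pca_sample` and the specific rotation are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_pca_sample(n, sigmas, rotation, mean):
    """Draw n points by generating independent univariate normal
    components with the given standard deviations, then rotating
    and shifting them to introduce dependence."""
    z = rng.standard_normal((n, len(sigmas))) * sigmas  # independent components
    return z @ rotation.T + mean                        # rotate, then shift

# Example: a 2-D rotation that makes the two coordinates correlated.
theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = inverted_pca_sample(10_000, sigmas=np.array([2.0, 0.5]),
                        rotation=Q, mean=np.zeros(2))
# The population covariance of X is Q @ diag(sigmas**2) @ Q.T.
```

Because the components are generated independently and then rotated, the sigmas act as the principal standard deviations and the rotation matrix as the eigenvectors of the resulting covariance.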

### Difference is due to difference in mean

The samples are drawn from distributions that have the same rotation and equal covariance matrices \((V_1=V_2=V)\), but different means \((M_1\ne M_2)\). To generate distributions that differ in mean while retaining identical covariance, we use the equality for the Hellinger distance between two normal distributions with equal covariance:

$$H^2 = 1-\exp \left( -\frac{1}{8}\varDelta \right) ,$$

where \(\varDelta =(\mu _1-\mu _2)'V^{-1}(\mu _1-\mu _2)\). If *V* is diagonal, then \(\varDelta =\sum _{i=1}^{n}\frac{\mu ^2_i}{\sigma ^2_i}\), where \(\mu _i\) denotes the *i*-th component of the mean difference. We split \(\varDelta \) into *n* randomly selected addends that sum to \(\varDelta \) and assign each to the corresponding PCA component. The procedure used to generate the samples is described by Algorithm 7.
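The splitting step can be sketched as follows (a minimal numpy sketch; the helper `means_for_target_hd` and the Dirichlet split are illustrative assumptions, not the paper's Algorithm 7):

```python
import numpy as np

rng = np.random.default_rng(1)

def means_for_target_hd(hd, sigmas):
    """Split the Mahalanobis term Delta implied by a target HD into
    random non-negative addends, one per PCA component, and return
    the per-component mean offsets."""
    # For equal-covariance normals: H^2 = 1 - exp(-Delta / 8).
    delta = -8.0 * np.log(1.0 - hd**2)
    parts = delta * rng.dirichlet(np.ones(len(sigmas)))  # addends summing to Delta
    return np.sqrt(parts) * sigmas  # mu_i chosen so that mu_i^2 / sigma_i^2 = part_i

sigmas = np.array([1.0, 2.0, 0.5])
mu = means_for_target_hd(0.3, sigmas)
# Sanity check: the offsets reproduce the target HD.
delta_back = np.sum(mu**2 / sigmas**2)
hd_back = np.sqrt(1.0 - np.exp(-delta_back / 8.0))  # ≈ 0.3
```

The second sample is then drawn with its mean shifted by `mu` along the PCA components, before the common rotation is applied.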

### Difference is due to a difference in variance, with the same mean and rotation (eigenvectors)

We use the following equality to generate distributions that differ in variance without any change in mean or rotation. For two normal distributions with equal means whose covariances are diagonal in the shared PCA basis, with standard deviations \(\sigma _{1,i}\) and \(\sigma _{2,i}\),

$$H^2 = 1-\prod _{i=1}^{n}\sqrt{\frac{2\sigma _{1,i}\sigma _{2,i}}{\sigma ^2_{1,i}+\sigma ^2_{2,i}}}.$$

Given a target HD, the standard deviations \(\sigma _{2,i}\) of the second distribution are chosen to satisfy this equality. We then use Algorithm 8 to generate the two samples.
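One way to realize this step in code is to scale all standard deviations by a single factor and solve for that factor numerically (a minimal numpy sketch under that assumption; the paper's Algorithm 8 may distribute the variance change differently):

```python
import numpy as np

def hd_diag(sig1, sig2):
    """Hellinger distance between two equal-mean normal distributions
    whose covariances are diagonal, with standard deviations sig1, sig2."""
    bc = np.prod(np.sqrt(2.0 * sig1 * sig2 / (sig1**2 + sig2**2)))
    return np.sqrt(1.0 - bc)

def scale_for_target_hd(hd, sig1, lo=1.0, hi=100.0, iters=200):
    """Bisection for a scale k >= 1 such that sig2 = k * sig1 yields
    the target HD; HD grows monotonically with k on [1, inf)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if hd_diag(sig1, mid * sig1) < hd:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sig1 = np.array([1.0, 2.0, 0.5])
k = scale_for_target_hd(0.2, sig1)  # hd_diag(sig1, k * sig1) ≈ 0.2
```

Since the means and the rotation are shared, only the per-component standard deviations of the second sample change, leaving the eigenvectors of both covariance matrices identical.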

### HD is due to different correlations, with the same mean and variance

We use the following equality to generate distributions that differ in their correlation matrices without any change in mean or variance: the covariance matrix factors as \(V=DRD\), where *D* is a diagonal matrix of standard deviations and *R* is the corresponding correlation matrix.

Numerical approximation was used to generate a correlation matrix that yields the desired HD. In the first experiment, \(R_1\) was set to the identity matrix. In the second experiment, \(R_1\) was set to the matrix whose off-diagonal elements all equal \(-0.1\). The diagonal elements of the second matrix \(R_2\) were set to one and the off-diagonal elements to \(\alpha \), where \(\alpha \) was numerically approximated for each value of HD.
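The numerical approximation of \(\alpha \) can be sketched as a bisection over the closed-form HD between two zero-mean, unit-variance normal distributions (a numpy sketch for the \(R_1=I\) case; the paper does not specify which root-finding method was used):

```python
import numpy as np

def hd_correlation(R1, R2):
    """Hellinger distance between zero-mean normal distributions that
    share unit variances but differ in correlation matrix."""
    bc = (np.linalg.det(R1) ** 0.25 * np.linalg.det(R2) ** 0.25
          / np.sqrt(np.linalg.det((R1 + R2) / 2.0)))
    return np.sqrt(1.0 - bc)

def alpha_for_target_hd(hd, R1, lo=0.0, hi=0.95, iters=200):
    """Bisection for the common off-diagonal value alpha of R2 that
    yields the target HD against R1 (HD grows as alpha moves away
    from zero, up to the point where R2 loses positive definiteness)."""
    d = R1.shape[0]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        R2 = np.full((d, d), mid)
        np.fill_diagonal(R2, 1.0)
        if hd_correlation(R1, R2) < hd:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R1 = np.eye(3)
alpha = alpha_for_target_hd(0.1, R1)
```

The same search applies unchanged when \(R_1\) has off-diagonal elements of \(-0.1\); only the reference matrix passed to `alpha_for_target_hd` differs.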


## About this article

### Cite this article

Goldenberg, I., Webb, G.I. PCA-based drift and shift quantification framework for multidimensional data.
*Knowl Inf Syst* **62**, 2835–2854 (2020). https://doi.org/10.1007/s10115-020-01438-3


### Keywords

- Principal component analysis
- Drift detection
- Hellinger distance