1 Introduction

Anomaly detection (AD), also known as outlier detection, is a distinct class of machine learning with a wide range of important applications, including intrusion detection in networks and control systems, fault detection in industrial manufacturing, diagnosis of certain diseases by identifying outlying patterns in medical images or other health records, cyber-security, etc. AD algorithms single out items or events that deviate from an expected pattern, or that occur with significantly lower frequency than the rest of a dataset [8, 14].

In the past, there has been substantial effort in applying traditional machine learning techniques to both supervised and unsupervised AD, such as principal component analysis (PCA) [6, 7], one-class support vector machine (OC-SVM) [12, 22, 29], isolation forests [18], and clustering-based methods such as k-means and the Gaussian mixture model (GMM) [4, 16, 35]. Nevertheless, these techniques often become inefficient in high-dimensional problems because of their high complexity and the absence of an integrated, efficient dimensionality reduction approach. There has recently been growing interest in using deep learning techniques to tackle this issue. Nonetheless, most previous work still relies on two-stage or separate training, in which a low-dimensional space is first learned via an autoencoder. For example, the work in [13] proposes a hybrid architecture in which a deep belief network reduces the dimensionality of the input space and the learned feature space is separately fed to a conventional OC-SVM. The robust deep autoencoder (RDA) [34] combines robust PCA with dimensionality reduction by an autoencoder. However, such two-stage methods cannot learn efficient features for AD problems, especially as the dimensionality grows, because the learning stages are decoupled. More similar to our approach, deep embedded clustering (DEC) [31] is a state-of-the-art algorithm that integrates an unsupervised autoencoding network with clustering. Even though clustering is often considered a possible solution to AD tasks, DEC is designed to jointly optimize the latent feature space and clustering, and thus learns a latent feature space that is better suited to clustering than to AD.

End-to-end training of dimensionality reduction and AD has recently received much interest, for example in frameworks using a deep energy-based model [33], an autoencoder combined with a Gaussian mixture model [36], and generative adversarial networks (GANs) [21, 32]. Nonetheless, these methods rely on density estimation techniques and detect anomalies as a by-product of unsupervised learning, and therefore might not be efficient for AD. They may assign high density to a group of proximate anomalies (a new cluster or mixture component might be established for them), resulting in false negatives.

The one-class support vector machine is one of the most popular techniques for unsupervised AD. OC-SVM is known to be insensitive to noise and outliers in the training data. Still, the performance of OC-SVM is in general susceptible to the dimensionality and complexity of the data [5], while its training speed is also heavily affected by the size of the dataset. As a result, conventional OC-SVM may not be desirable in big data and high-dimensional AD applications. To tackle these issues, previous work has only performed dimensionality reduction via deep learning and OC-SVM-based AD separately. However, separate dimensionality reduction might have a negative effect on the performance of the subsequent AD, since information important for identifying outliers can be interpreted differently in the latent space. On the other hand, to the best of our knowledge, studies on applying kernel approximation and stochastic gradient descent (SGD) to OC-SVM have been lacking: most existing works only apply random Fourier features (RFF) [20] to the input space and treat the problem as a linear support vector machine (SVM); meanwhile, [5, 23] have shown the promise of using SGD to optimize SVMs, but without kernel approximation.

Another major issue in joint training of dimensionality reduction and AD is the interpretability of the trained models, that is, the capability to explain, with respect to the input features, why a sample is detected as an outlier. Very recently, explanation of black-box deep learning models has attracted considerable attention from the machine learning research community. In particular, gradient-based explanation (attribution) methods [2, 3, 26] are widely studied as a way to address this challenge. The approach analyses the contribution of each neuron in the input space of a neural network to the neurons in its latent space by calculating the corresponding gradients. As we will demonstrate, the same concept can be applied to kernel-approximated SVMs to score the importance of each input feature to the margin with respect to the separating hyperplane.

Driven by this reasoning, in this paper we propose AE-1SVM, an end-to-end autoencoder-based OC-SVM model combining dimensionality reduction and OC-SVM for large-scale AD. RFFs are applied to approximate the RBF kernel, while the input of the OC-SVM is fed directly from a deep autoencoder that shares its objective function with the OC-SVM, so that the dimensionality reduction is forced to learn essential patterns that assist the anomaly detection task. On top of that, we also extend gradient-based attribution methods to the proposed kernel-approximated OC-SVM, as well as to the whole end-to-end architecture, to analyse the contribution of the input features to the decision making of the OC-SVM.

The remainder of the paper is organised as follows. Section 2 reviews the background on OC-SVM, kernel approximation, and gradient-based attribution methods. Section 3 introduces the combined architecture that we have mentioned. In Sect. 4, we derive expressions and methods to obtain the end-to-end gradient of the OC-SVM’s decision function with respect to the input features of the deep learning model. Experimental setups, results, and analyses are presented in Sect. 5. Finally, Sect. 6 draws the conclusions for the paper.

2 Background

In this section, we briefly describe the preliminary background knowledge that is referred to in the rest of the paper.

2.1 One-Class Support Vector Machine

OC-SVM [22] for unsupervised anomaly detection extends the support vector method that is regularly applied in classification. While the classic SVM aims to find the hyperplane that maximizes the margin separating two classes of data points, in OC-SVM the hyperplane is learned to best separate the data points from the origin. SVMs in general can capture non-linearity thanks to the use of kernels. The kernel method maps the data points from the input feature space \(\mathcal {R}^d\) to a higher-dimensional space \(\mathcal {R}^D\) (where D is potentially infinite), in which the data is linearly separable, via a transformation \(\phi : \mathcal {R}^d \rightarrow \mathcal {R}^D\). The most commonly used kernel is the radial basis function (RBF) kernel, defined by a similarity mapping between any two points x and \(x'\) in the input feature space, \( K(x,x') = \exp (-\frac{\Vert {x-x'}\Vert ^2}{2\sigma ^2})\), with \(\sigma \) being the kernel bandwidth.

Let \(\mathsf {w}\) denote the weight vector over all dimensions of the kernel space and \(\rho \) the offset parameter determining the distance from the origin to the hyperplane. The objective of OC-SVM is to separate all data points from the origin by a maximum margin, subject to some constraint relaxation, and is written as the following quadratic program:

$$\begin{aligned} \min _{\mathsf {w}, \xi , \rho } \frac{1}{2} \Vert {\mathsf {w}}\Vert ^2 - \rho + \frac{1}{\nu n}\sum _{i=1}^{n}\xi _i, \\ \text {subject to } \mathsf {w}^T \phi (x_i) \ge \rho - \xi _i, \xi _i \ge 0 \nonumber . \end{aligned}$$
(1)

where \(\xi _i\) are slack variables and \(\nu \in (0, 1]\) is the regularization parameter. Theoretically, \(\nu \) is an upper bound on the fraction of anomalies in the data, and is also the main tuning parameter of OC-SVM. Additionally, by replacing \(\xi _i\) with the hinge loss, we obtain the unconstrained objective function

$$\begin{aligned} \min _{\mathsf {w}, \rho } \frac{1}{2} \Vert {\mathsf {w}}\Vert ^2 - \rho + \frac{1}{\nu n}\sum _{i=1}^{n}\max (0, \rho - \mathsf {w}^T \phi (x_i)). \end{aligned}$$
(2)

Let \(g(x) = \mathsf {w}^T \phi (x) - \rho \); the decision function of OC-SVM is then

$$\begin{aligned} f(x) = {{\,\mathrm{sign}\,}}(g(x)) = {\left\{ \begin{array}{ll} 1 &{} \text{ if } g(x) \ge 0 \\ -1 &{} \text{ if } g(x) < 0 \end{array}\right. }. \end{aligned}$$
(3)

The optimization problem of SVM in (2) is usually solved as a convex optimization problem in the dual space, with the use of Lagrangian multipliers to reduce complexity while increasing solving feasibility. LIBSVM [9] is the most popular library providing efficient optimization algorithms to train SVMs, and has been widely adopted in the research community. Nevertheless, solving SVMs in the dual space is sensitive to the data size, since the kernel function K must be evaluated and stored for each pair of points in the dataset, resulting in an \(O(n^2)\) complexity, where n is the size of the dataset.
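To make the baseline concrete, the following is a minimal sketch of such a dual-space OC-SVM using scikit-learn (which wraps LIBSVM); the data and parameter values are purely illustrative and not those used in our experiments.

```python
# Minimal sketch of a conventional (dual-space) OC-SVM baseline using
# scikit-learn, which wraps LIBSVM [9]. Data and parameter values are
# illustrative only.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))                    # mostly "normal" points
X_test = np.vstack([rng.normal(size=(95, 20)),            # normal test points
                    rng.normal(scale=5.0, size=(5, 20))])  # a few injected outliers

# nu upper-bounds the fraction of training points treated as anomalies;
# gamma = 1 / (2 * sigma^2) controls the RBF kernel bandwidth.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.05)
ocsvm.fit(X_train)                       # builds the O(n^2) kernel matrix internally

scores = ocsvm.decision_function(X_test)  # signed distance to the hyperplane, g(x)
labels = ocsvm.predict(X_test)            # +1 = normal, -1 = anomaly
print(labels[-5:])                        # the injected outliers should be -1
```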

2.2 Kernel Approximation with Random Fourier Features

To address the scalability problem of kernel machines, approximation algorithms have been introduced and widely applied, the two most dominant being Nyström [30] and random Fourier features (RFF) [20]. In this paper, we focus on RFF since it has lower complexity and does not require pre-training. The method is based on the Fourier transform of the kernel function, given by a Gaussian distribution \(p(\omega ) = \mathcal {N}(\mathbf {0}, \sigma ^{2}\mathbf {I})\), where \(\mathbf {I}\) is the identity matrix and \(\sigma \) is an adjustable parameter representing the standard deviation of the frequency distribution (a larger \(\sigma \) corresponds to a narrower kernel bandwidth).

From the distribution p, D independent and identically distributed weights \(\omega _1, \omega _2, ..., \omega _D\) are drawn. In the original work [20], two mappings are introduced, namely the combined cosine and sine mapping \(z_\omega (x) = \begin{bmatrix} cos(\omega ^T x)&sin(\omega ^T x) \end{bmatrix} ^T\) and the offset cosine mapping \(z_\omega (x) = \sqrt{2} cos(\omega ^T x + b)\), where the offset parameter \(b \sim U (0, 2\pi )\). It has been proven in [28] that the former mapping outperforms the latter in approximating RBF kernels, since no phase shift is introduced by the offset variable. Therefore, in this paper we only consider the combined sine and cosine mapping, and the complete mapping is defined as follows:

$$\begin{aligned} z(x) = \sqrt{\frac{1}{D}}\begin{bmatrix} cos(\omega _1^T x)&...&cos(\omega _D^T x)&sin(\omega _1^T x)&...&sin(\omega _D^T x) \end{bmatrix} ^T. \end{aligned}$$
(4)

Applying the kernel approximation mapping to (2), the hinge loss becomes \(\max (0, \rho - \mathsf {w}^T z(x_i))\). The objective function is then equivalent to that of a linear OC-SVM in the approximated kernel space \(\mathcal {R}^{2D}\), and the optimization problem becomes much simpler, despite the dimensionality of the approximated space being higher than that of \(\mathcal {R}^d\).
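For concreteness, below is a NumPy sketch of the mapping in (4); the frequency standard deviation \(\sigma \) is the tunable parameter described above, and the sizes and the sanity check are illustrative only.

```python
# NumPy sketch of the combined cosine-sine mapping of Eq. (4). Frequencies are
# drawn from N(0, sigma^2 I); the approximated RBF kernel then has bandwidth
# 1/sigma. All sizes and values here are illustrative.
import numpy as np

def rff_mapping(X, n_features=2000, sigma=1.0, seed=0):
    """Map X of shape (n, d) into the 2*n_features-dimensional RFF space of Eq. (4)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=sigma, size=(X.shape[1], n_features))  # frequencies ~ N(0, sigma^2 I)
    proj = X @ omega                                                # omega_j^T x for all j and x
    return np.sqrt(1.0 / n_features) * np.hstack([np.cos(proj), np.sin(proj)])

# Sanity check: z(x)^T z(x') should approximate an RBF kernel with bandwidth
# 1/sigma, i.e. exp(-sigma^2 * ||x - x'||^2 / 2).
x = np.random.randn(10)
x_prime = x + 0.1 * np.random.randn(10)
Z = rff_mapping(np.vstack([x, x_prime]), n_features=20000, sigma=1.0)
print(Z[0] @ Z[1], np.exp(-np.sum((x - x_prime) ** 2) / 2))  # should be close
```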

2.3 Gradient-Based Explanation Methods

Gradient-based methods exploit the gradient of the latent nodes in a neural network with respect to the input features to rate the attribution of each input to the output of the network. In recent years, many studies [2, 19, 26, 27] have applied this approach to explain the classification decisions and input-feature sensitivity of deep neural networks, especially convolutional neural networks. Intuitively, an input dimension \(\mathbf {x}_i\) contributes more to a latent node \(\mathbf {y}\) if the gradient of \(\mathbf {y}\) with respect to \(\mathbf {x}_i\) is larger, and vice versa.

Instead of using the raw gradient as a quantitative factor, various extensions of the method have been developed, including Gradient*Input [25], Integrated Gradients [27], and DeepLIFT [24]. The most recent work [2] showed that these methods are strongly related and proved conditions under which they are equivalent or approximate each other. In addition, other non-gradient-based methods can be reformulated so that they are implemented as easily as gradient-based ones.
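As a brief illustration of the idea, the following sketch computes the plain gradient and the Gradient*Input attribution for a generic small Keras model; the network here is only a stand-in, not the architecture used later in this paper.

```python
# Minimal sketch of plain-gradient and Gradient*Input attributions for a
# generic Keras model; the small network below is only a stand-in.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="sigmoid", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
x = tf.random.normal((1, 4))

with tf.GradientTape() as tape:
    tape.watch(x)
    y = model(x)                      # scalar output for this single sample

grad = tape.gradient(y, x)            # plain gradient attribution
grad_times_input = grad * x           # Gradient*Input attribution [25]
print(grad.numpy(), grad_times_input.numpy())
```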

3 Deep Autoencoding One-Class SVM

In this section, we present our combined model, called Deep autoencoding One-class SVM (AE-1SVM), based on OC-SVM for anomaly detection tasks on high-dimensional and large datasets. The model consists of two main components, as illustrated in Fig. 1 (Left). The first component is a deep autoencoder network for dimensionality reduction and feature representation of the input space. The second is an OC-SVM for anomaly prediction based on support vectors and the margin, with the RBF kernel approximated using random Fourier features. The bottleneck layer of the deep autoencoder is forwarded directly into the random features mapper as the input of the OC-SVM. By doing this, the autoencoder network is forced to optimize its variables to represent the input features in a way that supports the OC-SVM in separating the anomalies from the normal class.

Fig. 1. (Left) Illustration of the Deep autoencoding One-class SVM architecture. (Right) Connections between the input layer and hidden layers of a neural network.

Let us denote by \(\mathbf {x}\) the input of the deep autoencoder, by \(\mathbf {x}'\) the reconstruction of \(\mathbf {x}\), and by x the latent representation of the autoencoder. In addition, \(\theta \) is the set of parameters of the autoencoder. The joint objective function of the model with respect to the autoencoder parameters, the OC-SVM's weights, and its offset is as follows:

$$\begin{aligned} Q(\theta ,\mathsf {w}, \rho ) = \alpha L(\mathbf {x}, \mathbf {x}') + \frac{1}{2} \Vert {\mathsf {w}}\Vert ^2 - \rho + \frac{1}{\nu n}\sum _{i=1}^{n}\max (0, \rho - \mathsf {w}^T z(x_i)) \end{aligned}$$
(5)

The components and parameters in (5) are described below

  • \(L(\mathbf {x}, \mathbf {x}')\) is the reconstruction loss of the autoencoder, which is normally chosen to be the L2-norm loss \(L(\mathbf {x}, \mathbf {x}') = \Vert {\mathbf {x} - \mathbf {x}'}\Vert _2^2\).

  • Since SGD is applied, the variable n, originally the number of training samples, becomes the batch size, as the hinge loss is computed over the data points in each mini-batch.

  • z is the random Fourier mapping defined in (4). Since the random features are data-independent, the standard deviation \(\sigma \) of the Gaussian distribution has to be tuned jointly with the parameter \(\nu \).

  • \(\alpha \) is a hyperparameter controlling the trade-off between feature compression and SVM margin optimization.

Overall, the objective function is optimized jointly using SGD with backpropagation. Furthermore, the autoencoder network can also be extended to a convolutional autoencoder, which is showcased in the experiment section.
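The following condensed TensorFlow sketch illustrates one way the joint objective (5) can be assembled and optimized with SGD; the layer sizes, \(\alpha \), \(\nu \), \(\sigma \), and the learning rate are placeholders rather than the configurations reported in Sect. 5.

```python
# Condensed TensorFlow sketch of the joint objective in Eq. (5). Layer sizes,
# alpha, nu, sigma, and the learning rate are placeholders, not the
# configurations of Sect. 5.
import tensorflow as tf

d_in, d_latent, D = 118, 2, 200               # e.g. 118-d input, 2-d bottleneck, D frequencies
alpha, nu, rff_sigma = 1000.0, 0.05, 3.0      # trade-off, OC-SVM regularizer, RFF std

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="sigmoid", input_shape=(d_in,)),
    tf.keras.layers.Dense(d_latent, activation="sigmoid"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="sigmoid", input_shape=(d_latent,)),
    tf.keras.layers.Dense(d_in, activation="sigmoid"),
])

# Fixed (data-independent) random Fourier frequencies, cf. Sect. 2.2.
omega = tf.random.normal((d_latent, D), stddev=rff_sigma)
w = tf.Variable(tf.random.normal((2 * D,)))   # OC-SVM weights in the RFF space
rho = tf.Variable(0.0)                        # OC-SVM offset

def rff(x):
    """Combined cosine-sine mapping of Eq. (4)."""
    proj = tf.matmul(x, omega)
    return tf.sqrt(1.0 / D) * tf.concat([tf.cos(proj), tf.sin(proj)], axis=1)

def joint_loss(x_batch):
    latent = encoder(x_batch)
    recon = decoder(latent)
    recon_loss = tf.reduce_mean(tf.reduce_sum(tf.square(x_batch - recon), axis=1))
    margin = tf.linalg.matvec(rff(latent), w) - rho            # g(x) for each sample
    n = tf.cast(tf.shape(x_batch)[0], tf.float32)              # batch size (see list above)
    hinge = tf.reduce_sum(tf.nn.relu(-margin)) / (nu * n)      # max(0, rho - w^T z(x))
    return alpha * recon_loss + 0.5 * tf.reduce_sum(w ** 2) - rho + hinge

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
variables = encoder.trainable_variables + decoder.trainable_variables + [w, rho]

x_batch = tf.random.normal((64, d_in))        # placeholder mini-batch
with tf.GradientTape() as tape:
    loss = joint_loss(x_batch)
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
```

In the full experiments the encoder/decoder sizes and hyperparameters follow Table 2; the sketch only shows how the three terms of (5) are combined and updated jointly.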

4 Interpretable Autoencoding One-Class SVM

In this section, we outline the method for interpreting the results of AE-1SVM using gradients and present an illustrative example to verify its validity.

4.1 Derivations of End-to-End Gradients

Consider an input x of the RFF kernel-approximated OC-SVM with dimensionality \(\mathcal {R}^d\). In our model, x is the bottleneck representation of the latent space of the deep autoencoder. The margin g(x) can be expressed in terms of the input x as follows:

$$\begin{aligned} g(x)&= \sum _{j=1}^{D}\mathsf {w}_j z_{\omega _j}(x) - \rho = \sqrt{\frac{1}{D}}\sum _{j=1}^{D}\big [\mathsf {w}_jcos(\sum _{k=1}^{d}\omega _{jk}x_k) + \mathsf {w}_{D+j}sin(\sum _{k=1}^{d}\omega _{jk}x_k)\big ] - \rho . \end{aligned}$$

As a result, the gradient of the margin function with respect to each input dimension \(k = 1, 2, ..., d\) can be calculated as

$$\begin{aligned} \frac{\partial g}{\partial x_k}&= \sqrt{\frac{1}{D}}\sum _{j=1}^{D}\omega _{jk}\big [-\mathsf {w}_j sin(\omega _j^T x) +\mathsf {w}_{j+D} cos(\omega _j^T x)\big ]. \end{aligned}$$
(6)

Next, we can derive the gradient of the latent-space nodes with respect to the deep autoencoder's input layer (the extension to a convolutional autoencoder is straightforward). In general, consider a neural network with M input neurons \(x_m, m = 1, 2, ..., M\), and a first hidden layer with N neurons \(u_n, n = 1, 2, ..., N\), as depicted in Fig. 1 (Right). The gradient of \(u_n\) with respect to \(x_m\) can be derived as

$$\begin{aligned} G(x_m, u_n) = \frac{\partial u_n}{\partial x_m} = w_{mn}\sigma '(x_m w_{mn} + b_{mn}), \end{aligned}$$
(7)

where \(u_n = \sigma (x_m w_{mn} + b_{mn})\), \(\sigma (.)\) is the activation function, and \(w_{mn}\) and \(b_{mn}\) are the weight and bias connecting \(x_m\) and \(u_n\). The derivative of \(\sigma \) differs for each activation function. For instance, with a sigmoid activation, the gradient \(G(x_m, u_n)\) is computed as \(\displaystyle w_{mn}u_n(1-u_n)\), while \(G(x_m, u_n)\) is \(w_{mn}(1-u_n^2)\) for the tanh activation function.
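A quick numerical check of (7) for a single sigmoid connection is sketched below; the weight, bias, and input values are arbitrary.

```python
# Quick numerical check of Eq. (7) for a single sigmoid connection;
# the weight, bias, and input values are arbitrary.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w_mn, b_mn, x_m = 0.7, -0.2, 1.3
u_n = sigmoid(x_m * w_mn + b_mn)

analytic = w_mn * u_n * (1.0 - u_n)      # G(x_m, u_n) for the sigmoid case
eps = 1e-6
numeric = (sigmoid((x_m + eps) * w_mn + b_mn)
           - sigmoid((x_m - eps) * w_mn + b_mn)) / (2 * eps)
print(analytic, numeric)                 # the two values should agree closely
```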

To calculate the gradient of neuron \(y_l\) in the second hidden layer with respect to \(x_m\), we simply apply the chain rule and sum rule as follows:

$$\begin{aligned} G(x_m, y_l) = \frac{\partial y_l}{\partial x_m}&= \sum _{n=1}^{N}\frac{\partial y_l}{\partial u_n}\frac{\partial u_n}{\partial x_m} = \sum _{n=1}^{N}G(u_n, y_l)G(x_m, u_n). \end{aligned}$$
(8)

The gradient \(G(u_n, y_l)\) can be obtained in a manner similar to (7). By maintaining the values of G at each hidden layer, the gradient of any hidden or output layer with respect to the input layer can be calculated. Finally, combining this with (6), we obtain the end-to-end gradient of the OC-SVM margin with respect to all input features. Besides, state-of-the-art machine learning frameworks like TensorFlow implement automatic differentiation [1], which simplifies the procedure of computing these gradient values.
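For instance, the end-to-end margin gradient can be obtained with a few lines of TensorFlow, assuming the encoder, rff, w, and rho objects from the sketch in Sect. 3:

```python
# Sketch of the end-to-end gradient of the OC-SVM margin with respect to the
# raw input via automatic differentiation. It assumes the `encoder`, `rff`,
# `w`, and `rho` objects defined in the earlier sketch of Sect. 3.
import tensorflow as tf

def margin_gradient(x_batch):
    """Return the margin g(x) and its gradient dg/dx for every sample."""
    x_batch = tf.convert_to_tensor(x_batch, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x_batch)
        latent = encoder(x_batch)                   # bottleneck representation
        g = tf.linalg.matvec(rff(latent), w) - rho  # margin g(x), evaluated on the latent code
    # Each g_i depends only on x_i, so the batched gradient below reproduces
    # the per-sample chain rule of Eqs. (6)-(8).
    return g, tape.gradient(g, x_batch)

g_vals, input_grads = margin_gradient(tf.random.normal((4, 118)))  # placeholder batch
```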

Using the obtained values, the decision making of the AD model can be interpreted as follows. For an outlying sample, a dimension with a larger gradient magnitude contributes more to the decision of the model; in other words, the sample lies further from the boundary in that particular dimension. For such a dimension, if the gradient is positive, the value of the feature in that dimension is less than the lower limit of the boundary; in contrast, if the gradient is negative, the feature exceeds the upper range of the normal class.

Fig. 2. Illustrative example of gradient-based explanation methods. (Left) The encoded 2D feature space from a 4D dataset. (Right) The gradient of the margin function with respect to the four original features for each testing point. Only the coordinates of the first two dimensions are annotated.

4.2 Illustrative Example

Figure 2 presents an illustrative example of interpreting anomaly detection results using gradients. We generate 1950 four-dimensional samples as normal instances, where the first two features are uniformly generated inside a circle with center C(0.5, 0.5). The third and fourth dimensions are drawn uniformly in the range \([-0.2, 0.2]\), so that their contribution is significantly smaller than that of the other two dimensions. In contrast, 50 anomalies are created whose first two dimensions lie far from the mentioned circle, while the last two dimensions are drawn from the wider range \([-2, 2]\). The whole dataset, including both the normal and anomalous classes, is trained with the proposed AE-1SVM model with a bottleneck layer of size 2 and sigmoid activation.
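A sketch of how such a synthetic set can be generated is given below; the circle radius and the exact sampling ranges are assumptions, as they are not critical to the illustration.

```python
# Sketch of generating the synthetic 4D dataset; the radius and sampling
# ranges are assumptions, and exact details of the original experiment may differ.
import numpy as np

rng = np.random.default_rng(0)
CENTER, RADIUS = np.array([0.5, 0.5]), 0.4        # radius is an assumption

def sample_2d(n, low, high, inside):
    """Rejection-sample n 2D points inside (or outside) the circle."""
    points = []
    while len(points) < n:
        p = rng.uniform(low, high, size=2)
        if (np.sum((p - CENTER) ** 2) <= RADIUS ** 2) == inside:
            points.append(p)
    return np.array(points)

normal = np.hstack([sample_2d(1950, 0.1, 0.9, inside=True),   # first two features in the circle
                    rng.uniform(-0.2, 0.2, size=(1950, 2))])   # weak third/fourth features
anomaly = np.hstack([sample_2d(50, -1.0, 2.0, inside=False),   # first two features outside the circle
                     rng.uniform(-2.0, 2.0, size=(50, 2))])    # wider third/fourth range
X_train = np.vstack([normal, anomaly])                         # 2000 x 4 unsupervised training set
```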

The figure on the left shows the representation of the 4D dataset in a 2-dimensional space. As expected, it captures most of the variability from only the first two dimensions. Furthermore, we plot the gradients of 9 different anomalous samples, with the two latter dimensions randomized; overall, the results confirm the aforementioned interpretation rules. It can easily be observed that the contribution of the third and fourth dimensions to the decision making of the model is always negligible. Among the first two dimensions, the ones with a value of 0.1 or 0.9 have gradients of perceptibly larger magnitude than those with a value of 0.5, as they are further from the boundary and the sample can be considered "more anomalous" in that dimension. Besides, the gradient of the input 0.1 is always positive because it is lower than the normal level, whereas the gradient of the input 0.9 is consistently negative.

5 Experimental Results

We present empirical analyses to justify the effectiveness of the AE-1SVM model in terms of accuracy and improved training/testing time. The objective is to compare the proposed model with conventional and state-of-the-art AD methods on synthetic and well-known real-world datasets.

5.1 Datasets

We conduct experiments on one generated dataset and five real-world datasets (we assume all tasks are unsupervised AD), listed in Table 1. The description of each individual dataset is as follows:

  • Gaussian: This dataset is included to showcase the performance of the methods on high-dimensional and large data. The normal samples are drawn from a normal distribution with zero mean and standard deviation \(\sigma =1\), while \(\sigma =5\) for the anomalous instances. Theoretically, since the two groups have different distributional attributes, the AD model should be able to separate them.

  • ForestCover: From the ForestCover/Covertype dataset [11], class 2 is extracted as the normal class, and class 4 is chosen as the anomaly class.

  • Shuttle: From the Shuttle dataset [11], we select the normal samples from classes 2, 3, 5, 6, 7, while the outlier group is made of class 1.

  • KDDCup99: The popular KDDCup99 dataset [11] contains approximately 80% anomalies. Therefore, from the 10-percent subset, we randomly select 5120 samples from the outlier classes to form the anomaly set, such that the contamination ratio is 5%. The categorical features are one-hot encoded, yielding 118 features in the raw input space (a generic preparation sketch follows this list).

  • USPS: From the U.S. Postal Service handwritten digits dataset [15], we select 950 samples of digit '1' as normal data and 50 samples of digit '7' as anomalous data, as the appearance of the two digits is similar. The size of each image is 16 \(\times \) 16, so each sample is a flattened vector of 256 features.

  • MNIST: From the MNIST dataset [17], 5842 samples of digit '4' are chosen as the normal class. On the other hand, the set of outliers contains 100 digits from classes '0', '7', and '9'. This task is challenging because many digits '9' are remarkably similar to digit '4'. Each input sample is a flattened vector with 784 dimensions.
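The following hedged sketch shows a generic preparation routine in the spirit of the KDDCup99 setup above; the DataFrame df and its label column are hypothetical placeholders, not the actual files used in our experiments.

```python
# Hedged sketch of assembling an unsupervised AD set in the style of the
# KDDCup99 preparation: one-hot encode categorical columns and subsample the
# outlier classes to a target contamination level. The DataFrame `df` and its
# label column are hypothetical placeholders.
import pandas as pd

def build_ad_dataset(df, label_col, normal_labels, n_outliers, seed=0):
    normal = df[df[label_col].isin(normal_labels)]
    outliers = df[~df[label_col].isin(normal_labels)].sample(n=n_outliers, random_state=seed)
    data = pd.concat([normal, outliers])
    y = (~data[label_col].isin(normal_labels)).astype(int)    # 1 = anomaly
    X = pd.get_dummies(data.drop(columns=[label_col]))         # one-hot encode categorical features
    return X.to_numpy(dtype=float), y.to_numpy()

# Hypothetical usage for a KDDCup99-style frame:
# X, y = build_ad_dataset(df, "label", ["normal."], n_outliers=5120)
```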

Table 1. Summary of the datasets used for comparison in the experiments.

5.2 Baseline Methods

Variants of OC-SVM and several state-of-the-art methods are selected as baselines against which the performance of the AE-1SVM model is compared. Different modifications of the conventional OC-SVM are considered. First, we take into account the version where OC-SVM with the RBF kernel is trained directly on the raw input. Additionally, to give a more impartial comparison, we also consider a version in which an autoencoding network exactly identical to that of the AE-1SVM model is trained separately for dimensionality reduction. We use the same number of training epochs as for AE-1SVM to investigate the ability of AE-1SVM to force the dimensionality reduction network to learn a better representation of the data. The OC-SVM is then trained on the encoded feature space; this variant is also similar to the approach in [13].

The following methods are also considered as baselines to examine the anomaly detection performance of the proposed model:

  • Isolation Forest [18]: This ensemble method revolves around the idea that the anomalies in the data have significantly lower frequencies and are different from the normal points.

  • Robust Deep Autoencoder (RDA) [34]: In this algorithm, a deep autoencoder is constructed and trained such that it can decompose the data into two components. The first component contains the latent space representation of the input, while the second one is comprised of the noise and outliers that are difficult to reconstruct.

  • Deep Embedded Clustering (DEC) [31]: This algorithm combines an unsupervised autoencoding network with clustering. As outliers often lie in sparser clusters or far from their centroids, we apply this method to AD and calculate the anomaly score of each sample as the product of its distance to the centroid and the density of the cluster it belongs to.

5.3 Evaluation Metrics

In all experiments, the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) are used as metrics to evaluate and compare the performance of the AD methods. A high AUROC is necessary for a competent model, whereas AUPRC often highlights the differences between methods on imbalanced datasets [10]. The testing procedure follows the unsupervised setup: each dataset is split with a 1:1 ratio, and the entire training set, including the anomalies, is used for training the model. The output of the models on the test set is measured against the ground truth using the mentioned scoring metrics, and we report the average scores and approximate training and testing times of each algorithm over 20 runs.
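The scoring step can be sketched with scikit-learn as follows; the labels and scores are illustrative stand-ins (for AE-1SVM, the anomaly score can be taken as the negated margin \(-g(x)\)).

```python
# Sketch of the scoring step with scikit-learn; `y_true` and `scores` are
# illustrative stand-ins for the test labels and the model's decision scores.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

y_true = np.array([0, 0, 0, 1, 0, 1])                    # 1 = anomaly (ground truth)
scores = np.array([0.10, 0.20, 0.15, 0.90, 0.30, 0.70])  # higher = more anomalous, e.g. -g(x)

print(roc_auc_score(y_true, scores))             # AUROC
print(average_precision_score(y_true, scores))   # average precision, an estimate of AUPRC
```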

5.4 Model Configurations

In all experiments, we employ the sigmoid activation function and implement the architecture using TensorFlow [1]. We find that, for the random Fourier features, a standard deviation \(\sigma = 3.0\) produces satisfactory results for all datasets. The remaining network configurations and training parameters of AE-1SVM for each individual dataset are given in Table 2 below.

Table 2. Summary of network configurations and training parameters of AE-1SVM used in the experiments.

For the MNIST dataset, we additionally implement a convolutional autoencoder with pooling and unpooling layers. The encoder consists of conv1(\(5\times 5\times 16\)), pool1(\(2 \times 2\)), conv2(\(5\times 5\times 9\)), pool2(\(2 \times 2\)), followed by a feed-forward layer that compresses the representation into 49 dimensions. The decoder mirrors this structure: a feed-forward layer of \(49\times 9\) dimensions, then deconv1(\(5\times 5\times 9\)), unpool1(\(2 \times 2\)), deconv2(\(5\times 5\times 16\)), unpool2(\(2 \times 2\)), and finally a feed-forward layer of 784 dimensions. The dropout rate is set to 0.5 in this convolutional autoencoder network.
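A Keras sketch of this convolutional autoencoder is given below; the padding choices, the unpooling mechanism (approximated here by upsampling), and the placement of dropout are assumptions, as they are not fully specified above.

```python
# Keras sketch of the convolutional autoencoder described above. Padding,
# the unpooling mechanism (approximated by UpSampling2D), and where dropout
# is applied are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

encoder = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, (5, 5), padding="same", activation="sigmoid"),   # conv1
    layers.MaxPooling2D((2, 2)),                                       # pool1
    layers.Conv2D(9, (5, 5), padding="same", activation="sigmoid"),    # conv2
    layers.MaxPooling2D((2, 2)),                                       # pool2 -> 7x7x9
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(49, activation="sigmoid"),                            # 49-d bottleneck
])

decoder = tf.keras.Sequential([
    layers.Input(shape=(49,)),
    layers.Dense(7 * 7 * 9, activation="sigmoid"),                     # feed-forward 49x9
    layers.Reshape((7, 7, 9)),
    layers.Conv2DTranspose(9, (5, 5), padding="same", activation="sigmoid"),   # deconv1
    layers.UpSampling2D((2, 2)),                                       # unpool1
    layers.Conv2DTranspose(16, (5, 5), padding="same", activation="sigmoid"),  # deconv2
    layers.UpSampling2D((2, 2)),                                       # unpool2 -> 28x28x16
    layers.Flatten(),
    layers.Dense(784, activation="sigmoid"),                           # reconstructed image
])
```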

For each baseline method, the best set of parameters is selected. In particular, for the different variants of OC-SVM, the optimal values of the parameter \(\nu \) and the RBF kernel width are exhaustively searched. Likewise, for Isolation Forest, the contamination fraction is tuned around the anomaly rate of each dataset. For RDA, DEC, and the OC-SVM variants that involve an autoencoding network for dimensionality reduction, autoencoder structures exactly identical to that of AE-1SVM are used, while the \(\lambda \) hyperparameter in RDA is also adjusted, as it is the most important factor of the algorithm.

5.5 Results

Firstly, for the Gaussian dataset, the histograms of the decision scores obtained by different methods are presented in Fig. 3. It can clearly be seen that AE-1SVM is able to single out all anomalous samples, while giving the best separation between the two classes.

Fig. 3. Histograms of decision scores of AE-1SVM and other baseline methods.

Table 3. Average AUROC, AUPRC, and approximate training and testing time of the baseline methods and the proposed method. Best results are displayed in boldface.

For the other datasets, the comprehensive results are given in Table 3. AE-1SVM outperforms conventional OC-SVM as well as the two-stage structure with decoupled autoencoder and OC-SVM in terms of accuracy in all scenarios, and is always among the top performers. As we restrict the number of training epochs for the detached autoencoder to be the same as that for AE-1SVM, its performance declines significantly, and in some cases its representation is even worse than the raw input. This shows that AE-1SVM can attain more efficient features to support the AD task given similar training time.

Other observations can also be made from the results. For ForestCover, only the AUROC score of Isolation Forest is close, but its AUPRC is significantly lower, roughly three times less than that of AE-1SVM, suggesting that it incurs a higher false alarm rate to identify anomalies correctly. Similarly, Isolation Forest slightly surpasses AE-1SVM in AUROC on the Shuttle dataset but is inferior in terms of AUPRC, and can thus be considered the less optimal choice. Analogous patterns can also be noticed for the other datasets. In particular, the MNIST results show that the proposed AE-1SVM can also operate with a convolutional autoencoder network in image processing contexts.

Regarding training time, AE-1SVM outperforms the other methods on ForestCover, which is the largest dataset. For the other datasets with large sample sizes, namely KDDCup99 and Shuttle, it is still one of the fastest candidates. Furthermore, we also extend the KDDCup99 experiment and train the AE-1SVM model on the full dataset, obtaining promising results in only about 200 s. This verifies the effectiveness and potential of the model in big-data settings. On top of that, the testing time of AE-1SVM is a notable improvement over the other methods, especially Isolation Forest and conventional OC-SVM, suggesting its feasibility in real-time environments.

5.6 Gradient-Based Explanation in Image Datasets

We also investigate the use of gradient-based explanation methods on the image datasets. Figure 4 illustrates the unsigned gradient maps of several anomalous digits in the USPS and MNIST datasets. The MNIST results are produced by the version with the convolutional autoencoder. Interesting patterns supporting the correctness of the gradient-based explanation approach can be observed in Fig. 4 (Left). The positive gradient maps concentrate on the middle part of the images, where the pixels of the normal class of digits '1' are normally bright (higher values), indicating that the absence of those pixels contributes significantly to the samples '7' being detected as outliers. Likewise, the negative gradient maps are more intense on the pixels matching the bright pixels outside the center area of the corresponding image, meaning that the values of those pixels in the original image exceed the range of the normal class, which is around the zero (black) level. A similar observation can be made from Fig. 4 (Right), which shows the difference between each sample of digits '0', '7', and '9' and digit '4'.
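The gradient maps themselves are obtained by splitting the per-pixel gradients by sign, as sketched below; the gradient array here is a random stand-in for the end-to-end margin gradient of one test image (cf. Sect. 4), reshaped to the image size.

```python
# Sketch of building positive, negative, and unsigned gradient maps from a
# per-pixel gradient array. `grad_img` is a random stand-in; in practice it
# would be the margin gradient of one test image (cf. Sect. 4), reshaped to
# 16x16 for USPS or 28x28 for MNIST.
import numpy as np

grad_img = np.random.randn(28, 28)           # stand-in gradient of one MNIST sample

positive_map = np.maximum(grad_img, 0.0)     # pixels below the normal range
negative_map = np.maximum(-grad_img, 0.0)    # pixels above the normal range
full_map = np.abs(grad_img)                  # unsigned gradient map, as shown in Fig. 4
```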

Fig. 4. (Left) The USPS experiment. (Right) The MNIST experiment. From top to bottom rows: original image, positive gradient map, negative gradient map, and full gradient map.

6 Conclusion

In this paper, we propose the end-to-end autoencoding One-class Support Vector Machine (AE-1SVM) model, comprising a deep autoencoder for dimensionality reduction and a variant of OC-SVM using random Fourier features for anomaly detection. The model is jointly trained using SGD with a combined loss function, both to lessen the complexity of solving support vector problems and to force the dimensionality reduction to learn a better representation that is beneficial for the anomaly detection task. We also investigate the application of gradient-based explanation methods to interpret the decision making of the proposed model, which is not feasible for most other anomaly detection algorithms. Extensive experiments have been conducted to verify the strengths of our approach. The results demonstrate that AE-1SVM is effective in detecting anomalies while significantly reducing both training and response time for high-dimensional and large-scale data. Empirical evidence of interpreting the predictions of AE-1SVM using gradient-based methods has also been presented using illustrative examples and handwritten image datasets.