1 Introduction

The recent financial crisis has intensified theoretical and empirical research on the underlying instabilities of economic systems. Several papers have been concerned with the development of early warning signals that could alert policymakers and market participants to an upcoming financial crisis. In macroeconomics, most contributions are based on “dynamic stochastic general equilibrium” (DSGE) models. These models seek to understand the economy under the assumption that the set of prices will result from an overall equilibrium under perfectly rational behavior. Unfortunately, when the crisis came, some serious limitations of such macro-financial models became apparent: they were unable to capture the observed large movements in financial markets and the macroeconomy.

A good understanding of the dynamic behavior of macro-financial systems is still lacking. It may be that something is missing from these conventional economic models, preventing them from describing the behavior of financial markets more accurately. In November 2010, the then president of the European Central Bank, J.-C. Trichet, addressed the ECB Central Banking Conference, stating that “macro-models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner”, and moreover that “In the face of the crisis, we felt abandoned by conventional tools” (Trichet 2010). The lessons of the financial crisis for macroeconomic and financial analysis are profound. They lead us to ask questions such as: What are the determinants of crises? Can crises be predicted? Can crises be avoided or ameliorated using early warnings?

Over the last few decades, a growing number of researchers have come to recognize that economic systems should be considered as complex systems (Sornette 2009; Scheffer et al. 2009, 2012; Ball 2012; Farmer et al. 2012; Hommes 2013; Hommes and Iori 2015; Battiston et al. 2016). Unlike traditional economic theories, which rest on the assumption of general equilibrium, this literature describes economic systems as disequilibrium processes. From the complex systems point of view, market crashes are mainly endogenously driven events resulting from complex interactions and nonlinear feedback. The complexity approach offers new ways of thinking about the underlying mechanisms causing market crashes and may offer potential for predicting such crashes. Hope that this may work is also provided by other disciplines where complexity theory has already been applied successfully, such as ecology, physics, engineering, psychology and biology. Within those fields, tools for analyzing complex dynamical systems have been developed, based on detecting signs of “critical slowing down” prior to critical transitions. These tools are universal and model-free in that they can be applied to observed univariate time series from a wide variety of complex systems, independent of the systems’ details.

The main aim of this paper is to test the possibility, suggested by Scheffer et al. (2009) and Battiston et al. (2016), among others, that financial crises are instances of critical transitions, like those observed and studied in other complex systems. To address this, we ask whether there is empirical evidence indicating that financial time series behave consistently with complexity theory, which predicts “critical slowing down” prior to critical transitions (crises in this context). If this were the case, the consequences would be far-reaching, opening up possibilities to construct early warning signals for financial crises based on complexity theory. In fact, it will turn out that our results are mixed, suggesting that financial systems do not belong to the universality class of ecological/physical/biological complex systems and indicating that further research is required to identify the ways in which the mechanisms leading to global instability within financial systems differ from those in natural systems. Note that the angle we take in this research—to see whether financial systems behave as ‘usual’ complex systems—is rather different from, and therefore complements, related recent work on, for instance, bubble detection (Phillips et al. 2011), the development of financial stress indices (Hubrich and Tetlow 2015) and graphical analysis (Flood et al. 2016).

The “critical slowing down” approach to extracting early warning signals from observed univariate time series is based on the slowing down of the dynamics of a complex system as the control parameter approaches a critical parameter value at which the current equilibrium loses stability. Several methods to extract signals of critical slowing down from time series data have been developed. Kleinen et al. (2003) observed that power spectral properties changed as the earth system moved closer to a bifurcation point in a hemispheric thermohaline circulation (THC) model. Held and Kleinen (2004) used a trend in the decay rate of climate sub-systems as an indicator of critical slowing down, with the first-order autocorrelation function (AR(1)) obtained from time series serving as a measure of the decay rate of the system. This method was applied to the North Atlantic THC model, providing an early warning signal for the climate system. Livina and Lenton (2007) developed another way of detecting critical slowing down by using detrended fluctuation analysis (DFA), originally developed by Peng et al. (1994) to detect long-range correlations in DNA sequences. Livina and Lenton (2007) found an early warning signal for an upcoming critical transition in the North Atlantic THC system by investigating model output, as well as Greenland ice core paleotemperature data. The subsequent work by Dakos et al. (2008) studied critical slowing down in eight empirical paleoclimatological time series, for which they found increased autocorrelation prior to critical transitions. Lenton et al. (2012) contributed to the early warning signal literature by offering some general guidelines for choosing the parameter settings of the analysis. They improved the robustness of DFA techniques and gave additional examples showing evidence of critical slowing down in both paleodata and climate model output. Kefi et al. (2013) expanded the theory of critical slowing down to a broader class of situations where a system becomes increasingly sensitive to perturbations even without catastrophic transitions, making clear that critical slowing down could be used as an early warning signal even in this more general setting. Not all studies draw positive conclusions regarding the applicability of early warning indicators; Gsell et al. (2016) studied early warning indicators prior to five well-documented freshwater critical transitions, but concluded that a priori knowledge about the mechanisms underlying the critical transitions is required for successfully monitoring early warning indicators.

Encouraged by the successes of early warning signals in other complex systems, some efforts have been made to explore the possibility of constructing early warnings from economic and financial time series. For instance, Tan and Cheong (2014) observed critical slowing down in the U.S. housing market. They detected strong early warning signals associated with a sequence of coupled regime shifts during the period of the sub-prime mortgage loans transition and the sub-prime crisis. They also found weaker signals during the Asian financial crisis and the technology bubble crisis. Gresnigt et al. (2015) interpreted financial market crashes as earthquakes and proposed a modeling framework for generating medium-term probability forecasts of a future market crash. Huang et al. (2017) introduced an early warning method for financial markets based on manifold learning. However, up until now, no evidence of early warning signals has been found in time series data of stock markets. By using the information dissipation length (IDL) as an indicator, Quax et al. (2013) detected early warning signals prior to Lehman Brothers’ collapse in multivariate USD/EUR interest rate swap (IRS) data. Liu et al. (2015) also reported an increase in a so-called dynamic network marker in USD/EUR interest rate swaps. Note, however, that the measures used in these studies are inherently multivariate, while here we specifically focus on the univariate time series approach to the detection of critical slowing down.

In our study, four financial crises are analyzed: Black Monday 1987, the 1997 Asian Crisis, the Dot-com Crash in 2000 and the 2008 Financial Crisis. Following Dakos et al. (2008), the lag-1 sample autocorrelation and the variance are considered as early warning indicators to examine whether financial systems slow down before the critical time (the onset of the corresponding crisis) is reached. The lag-1 mutual information is also considered: it is a nonlinear extension of the lag-1 autocorrelation and, being a univariate information-theoretic measure of serial dependence, the natural univariate counterpart of the IDL.

This paper is organized as follows. In Sect. 2, we provide some theoretical background of nonlinear dynamical systems and bifurcations underlying critical transitions. We describe the data and early warning systems (EWS) methodology in Sects. 3 and 4, respectively. Subsequently, we analyze how the resulting early warning indicators perform for financial time series. The results are presented and discussed in Sect. 5, and we present some robustness checks. Section 6 provides a summary and conclusions.

2 Critical transitions

Mathematically, the mechanism driving critical transitions in complex systems is a bifurcation: an abrupt qualitative change in the behavior of a dynamical system when one or more control parameters change. Thompson et al. (1994) review various types of bifurcations in dissipative dynamical systems. Critical transitions observed in complex systems are specifically associated with so-called dangerous bifurcations, in which a stable equilibrium loses stability as a control parameter passes a critical value (Thompson et al. 1994; Sieber and Thompson 2012). Figure 1 illustrates the typical scenario considered. For a range of values of the control parameter \(\rho \) there are three equilibria, one of which is unstable and two of which are stable. When the system starts in the upper stable equilibrium and the control parameter \(\rho \) is increased, the upper stable equilibrium branch merges with the unstable branch at a critical value of the parameter, at which the stable and unstable equilibria both disappear. Hence, beyond the critical parameter value there no longer is an upper stable equilibrium; the state variable accelerates downwards, moving quickly to the vicinity of the lower stable equilibrium. This type of bifurcation, in which a stable and an unstable equilibrium annihilate, is known as a saddle-node bifurcation, while the fast transition of the state variable to a new stable state when the control parameter is moved beyond the critical value is known as a critical transition.

Fig. 1 A saddle-node bifurcation. The red lines indicate stable equilibria of the state variable x as a function of the control parameter \(\rho \) and the green dotted line an unstable equilibrium. The solid arrows indicate whether the state variable is moving up or down in the state space, depending on the control parameter and the state x. The stable top equilibrium branch meets the unstable equilibrium branch at a critical parameter value \(\rho _\mathrm{crit}\). If the system starts in the upper stable equilibrium, and the control parameter is subsequently increased, slowly or in steps, to just beyond the critical parameter value \(\rho _\mathrm{crit}\), this induces a critical transition of the state variable to the lower stable equilibrium. (Color figure online)

The control parameter is a parameter affecting the dynamics of the state variable; it is either constant or changes slowly or in small steps, so that the state variable may be considered fast relative to the control variable (separation of time scales). For instance, the interest rate set by the central bank can be considered a control parameter affecting financial market dynamics, and hence the long-term behavior of markets, through a change in the number and/or stability of the equilibria. Slow variables (as opposed to parameters) can sometimes also play the role of a control parameter for the dynamics of the fast variables. Think, for instance, of slow changes in sea water temperature, which may affect shorter-term local weather phenomena through changes in evaporation and precipitation. So generally speaking, the control parameter can either be a parameter or a slowly changing variable and need not be exogenous to the system, and a change in the control parameter affects the long-term dynamical behavior of the (fast) variables.

When referring to the long-term behavior of the system, we call the stable equilibrium states (fixed points) attractors and the unstable equilibrium states (fixed points) repellers. Note that if after a critical transition the change in the control parameter is reversed, the system typically does not jump back to the old attractor immediately, but remains close to the new attractor (hysteresis). Another phenomenon can occur if the state of the system is affected by noise (i.e., a sequence of small shocks). In that case, due to some exceptionally large shock, or a sequence of smaller shocks in the same direction, the system may undergo a so-called noise-induced early transition from one attractor to another, before the control parameter has reached the critical value.

Fig. 2 Critical transitions induced by a saddle-node bifurcation (Source: Lenton 2011). Panels a, b and c illustrate critical slowing down as an early warning indicator that the system is losing resilience on the way to the critical point. Local minima represent stable attractors, while the position of the ball represents the present state of the system. a Far from the bifurcation: small variance and fast fluctuations. b Approaching the bifurcation: larger but slower fluctuations with increasing variance. c At the bifurcation point: irreversible transition to a new local minimum

Empirically, a gradual increase in the control parameter toward the critical value is in principle detectable, since the parameter change induces an effect known as critical slowing down: as the control parameter approaches the critical parameter value, the system becomes progressively slower in responding to small shocks away from the stable equilibrium, giving rise to an increase in the autocorrelation as well as the variance of the state variable (Kuehn 2011; Wagener 2013). In the presence of multiple state variables, the same holds for the largest eigenvalue of the linearized dynamics around the equilibrium, so one can in principle still infer critical slowing down of the system based on a single observed state variable or a linear combination of state variables. Critical slowing down is illustrated in Fig. 2. Panels a, b and c of Fig. 2 show the behavior of a dynamical system approaching a saddle-node bifurcation. The local minima of the potential well represent stable attractors, and the ball shows the present state of the system. While approaching the bifurcation point (the critical parameter value \(\rho _{\mathrm{crit}}\)), the local minimum on the right becomes shallower, and the recovery of the ball in response to small perturbations slows down progressively. When the local minimum finally disappears, the ball quickly rolls into the minimum on the left, that is, the system undergoes a critical transition from the right equilibrium to the left equilibrium.

Early warning signals for impending critical transitions can be constructed by monitoring early warning indicators, which are measures of the characteristic recovery time of the system, based on univariate time series observations from the system. In the EWS literature, it is typically assumed that the observed time series are generated by a smooth first-order autoregressive discrete-time nonlinear dynamical system driven by white noise, where \(\rho \) represents a control parameter, that is,

$$\begin{aligned} X_{t} = F(X_{t-1}, \rho ) + G(X_{t-1},\rho ) \varepsilon _t, \end{aligned}$$
(1)

where \(X_t\) is a (possibly vector-valued) state of the system at time \(t \in \mathbb {Z}\), \(F(X_{t-1}, \rho )\) specifies the deterministic part of the system, while \(G(X_{t-1},\rho ) \varepsilon _t\) models the stochastic part with \(\varepsilon _t\) a white noise process. A fixed point corresponds with an equilibrium state X of the deterministic ‘skeleton’ (i.e., the system without the noise term), satisfying \(X = F(X, \rho )\). A fixed point can either be locally stable, in which case it is called an attracting fixed point, or locally unstable, in which case it is called a repelling fixed point.

Consider the upper stable equilibrium in Fig. 2, while the control parameter is approaching the critical value. By increasing the control parameter \(\rho \), the system approaches the saddle-node bifurcation, the instability threshold at which the largest eigenvalue of the Jacobian matrix \(DF_\rho (\bar{X})\), evaluated at the steady state, crosses the unit circle at \(+1\). This induces a critical transition of the system to a new stable state. This scenario is described in detail in Rahmstorf (2001), Lenton et al. (2008) and Sieber and Thompson (2012). It offers ways to empirically provide early warnings before the critical transition actually happens. By investigating the statistical properties of early warning indicators as the system approaches a critical transition, one may even predict the time of the transition in advance, up to some forecast error.
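To illustrate this scenario, the following minimal R sketch (our illustration; it is not part of the analysis in this paper) simulates the noisy normal form of a saddle-node bifurcation while the control parameter \(\rho \) drifts slowly toward its critical value 0. The lag-1 autocorrelation of the fluctuations around the upper equilibrium \(\sqrt{\rho }\) rises toward 1 as the bifurcation is approached, in line with the eigenvalue condition above.

```r
# Minimal sketch (illustrative): critical slowing down in the noisy normal form of a
# saddle-node bifurcation; equilibria at +/- sqrt(rho), critical parameter value rho = 0
set.seed(1)
n.obs <- 5000
rho   <- seq(1, 0.01, length.out = n.obs)   # control parameter drifting toward 0
dt    <- 0.1                                # time step of the Euler discretization
sig   <- 0.005                              # noise level
x     <- numeric(n.obs)
x[1]  <- sqrt(rho[1])                       # start in the upper stable equilibrium
for (t in 2:n.obs) {
  x[t] <- x[t - 1] + (rho[t] - x[t - 1]^2) * dt + sig * rnorm(1)
}
y <- x - sqrt(rho)                          # fluctuations around the drifting equilibrium
# The linearized dynamics have eigenvalue 1 - 2*sqrt(rho)*dt, which tends to +1 as
# rho tends to 0; accordingly, the lag-1 autocorrelation rises:
acf(y[1:1000], plot = FALSE)$acf[2]         # early (rho near 1): about 0.8
acf(y[4001:5000], plot = FALSE)$acf[2]      # late (rho near 0): noticeably closer to 1
```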

Before discussing the data and the early warning indicators used in this paper, we note that attempts have also been made in the literature to develop alternative early warning systems to monitor the risk of financial systems, following the so-called signal approach, which employs binomial/multinomial logit/probit models (Berg and Patillo 1999; Kolari et al. 2002; Bussière and Fratzscher 2006), multivariate probability models (Demyanyk and Hasan 2010), Markov switching models (Abiad 2003, 2007) and binomial tree approaches (Davis and Karim 2008). However, these models had limited capacity to give warnings ahead of the financial crisis in 2008, which encourages pursuing alternative approaches. In what follows, by exploring early warning indicators prior to some crises in financial history, we will assess whether statistical evidence for critical slowing down would have been obtained in the approach of a number of historical crises, based on univariate financial time series, along the lines of Dakos et al. (2008) for paleoclimate data.

3 Data

The approach requires univariate time series with clear critical transitions from one level to another. Since stock market (log) prices, rather than the more commonly analyzed (log) returns, can display sharp downward drops during crises, and since returns typically have autocorrelations close to zero and never close to one, we focus on analyzing time series of daily stock market log prices here. Four financial crises are considered: Black Monday (October 19, 1987), the Asian Crisis, the Dot-com Bubble and the 2008 Financial Crisis. Although the direct causes of these crises differ, they share the common characteristic that the corresponding stock prices displayed similar sharp collapses during the respective crises.

A particular time series is studied for each crisis. For instance, the Standard & Poor’s 500 (S&P 500) index is used to study Black Monday 1987 and the 2008 financial crisis. Since the Hong Kong Hang Seng index is more likely to be associated with critical slowing down in the Asian stock markets, we use the Hang Seng index to analyze the Asian Crisis. The Dot-com Bubble was an information technology crisis boosted by the rapid growth of equity values in the internet sector; hence, we choose the NASDAQ Composite, an index associated with technology and growth companies, as the time series to be analyzed. The most recent crisis, the 2008 financial crisis, was accompanied by a credit crisis. In addition to the S&P 500, we therefore analyze critical slowing down in the TED spread, since it is an indicator of perceived credit risk. Moreover, the VIX, the implied volatility index of the S&P 500, is also considered. Some further details on the events, such as the critical point in time associated with the collapses, are provided in Table 1.

Table 1 Summary of events and time series used in the analysis

Daily time series data of the stock indices—the S&P 500, the NASDAQ Composite and the Hang Seng index—were downloaded from Thomson Reuters Datastream for the period from May 1986 until May 2011. The TED spread is defined as the difference between the three-month LIBOR and the three-month T-bill interest rate; these data were also obtained from Datastream. The volatility index (VIX) was obtained from the online Chicago Board Options Exchange (CBOE) database (http://www.cboe.com/micro/vix/historical.aspx).

Since the optimal sample size to be analyzed prior to the crises is unknown, we decided to use two sample sizes, 200 and 500 trading days, in combination with moving estimation windows of 100 and 250 observations, respectively (see below for details). We deemed moving estimation windows of fewer than 100 observations too short to obtain reliable estimates of the time-varying AR coefficient, mutual information and variance, and a total sample size of over 500 trading days (about two years) too long for capturing local trends in time. The critical transition points are determined by visual inspection, aided by original records in international newspapers and/or online articles. We use the maximum value of the stock index in the corresponding period to define the critical point in time, at which the decline started. This explains why, for instance, October 13, 1987, is reported as the critical point for the crash of Black Monday 1987, rather than Black Monday (October 19, 1987) itself. Since growth in stock index data is relative rather than absolute, logarithms of the data are taken, thus linearizing the exponential growth present in the original price series, as well as stabilizing the variance of the residuals that will be analyzed (Lütkepohl and Xu 2012).

4 Methodology

4.1 Detrending

In order to achieve mean-stationarity of the data, the first step is to remove the trend pattern from the original time series. The fluctuations obtained after detrending are further analyzed by calculating persistence indicators, as described in the next subsection. Subtracting a moving average is the most commonly used technique for detrending (Dakos et al. 2008). Here we follow this approach using a weighting scheme based on Gaussian kernel smoothing, thus allowing data near the given time point to receive larger weights.

Consider the case where we wish to inspect the last T observations \(\{z_{t}\}_{t=1}^T\) prior to a crisis (as noted above, we will consider \(T=200\) and 500), where z denotes the logarithm of the original price index with fixed time-step \( \Delta t=1\) trading day. This time series is detrended by smoothing across time using a Gaussian kernel function

$$\begin{aligned} G(s)=\frac{1}{\sqrt{2\pi }\sigma }e^{-\frac{s^2}{2\sigma ^2}}, \end{aligned}$$
(2)

so that the moving average is given by

$$\begin{aligned} \mathrm{MA}_t=\frac{\sum _{r=1}^{T} G(r-t)z_{r}}{\sum _{r=1}^{T} G(r-t)}, \qquad t= 1, \ldots , T. \end{aligned}$$
(3)

Note that the smoothing step involves only the T observations prior to the critical event (the associated decline of which starts at \(t=T+1\)). The ‘residuals’ or detrended fluctuations are obtained by subtracting the moving average from the logarithm of the original time series, that is,

$$\begin{aligned} y_t = z_t-\mathrm{MA}_t, \qquad t=1, \ldots , T, \end{aligned}$$
(4)

which by construction fluctuates around 0. In this step too, only observations prior to the critical event are used to construct the detrended time series \(\{y_t\}_{t=1}^{T}\). Below we refer to these detrended time series as the ‘residuals’ (from the trend removal step) and as the ‘detrended fluctuations’ interchangeably.

In nonparametric regression, the choice of the bandwidth \(\sigma \) involves a bias-variance trade-off. A large bandwidth would lead to over-smoothed, biased estimates of the local mean, in which details of the trend are washed out and the magnitude of peaks and troughs is underestimated. Conversely, too small a bandwidth will result in using only a few local data points, which will lead to a large variance of the detrended signal. The aim is to choose the bandwidth in such a way that we filter out the slower trends from the data while keeping the details of the fluctuations around the local equilibrium value. In this paper, we focus on the results obtained for a bandwidth \(\sigma \) of 10 trading days; later we also discuss the results obtained for a range of \(\sigma \)-values in a robustness check.
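For concreteness, a direct R implementation of the detrending step in Eqs. (2)–(4) could look as follows. This is a sketch under the assumption that z holds the T log prices preceding the critical transition; the function and variable names are ours. Note that the normalization constant of the kernel in Eq. (2) cancels in the ratio of Eq. (3).

```r
# Sketch of the Gaussian kernel detrending of Eqs. (2)-(4); z is assumed to hold
# the T log prices preceding the critical transition
gaussian_detrend <- function(z, bandwidth = 10) {
  Tn <- length(z)
  ma <- sapply(1:Tn, function(t) {
    w <- dnorm((1:Tn) - t, sd = bandwidth)  # Gaussian kernel weights G(r - t), Eq. (2)
    sum(w * z) / sum(w)                     # weighted moving average MA_t, Eq. (3)
  })
  z - ma                                    # detrended fluctuations y_t, Eq. (4)
}
y <- gaussian_detrend(z, bandwidth = 10)    # bandwidth sigma of 10 trading days
```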

4.2 Leading indicators

After having obtained the detrended fluctuations \(\{y_t\}_{t=1}^T\), the early warning indicators are estimated using a moving estimation window across time, allowing us to inspect trends. An increase in autocorrelation and variance is expected in the case of critical slowing down. Three measures of persistence are considered: the AR(1) coefficient, the lag-1 mutual information (MI(1)) and the standard deviation (SD). The AR(1) and SD measures are well established (Livina and Lenton 2007; Dakos et al. 2008) and the most commonly applied statistics in DFA; we add the MI(1) measure here as a straightforward nonlinear generalization of the AR(1) measure.

AR(1) indicator When approaching a critical transition, the system becomes increasingly persistent, meaning that consecutive observations of the fluctuations around the equilibrium of the system become increasingly similar to each other. The lag-1 autocorrelation can be shown to increase as the control parameter approaches its critical value (Scheffer et al. 2009), and it is widely applied in the literature on critical transitions to monitor critical slowing down. It can be estimated using a first-order autoregressive model, given by

$$\begin{aligned} y_t=e^{-\kappa \Delta t} y_{t-1}+ \varepsilon _t, \end{aligned}$$
(5)

which is known as an AR(1) model. In Eq. (5), \(y_t\) is the observed detrended fluctuation at time t, \(\Delta t = t_{j+1} - t_{j} = 1\) trading day, \(\varepsilon _t\) is a zero mean innovation, \(\kappa \) indicates the magnitude of the recovery rate and \(\lambda = e^{-\kappa \Delta t}\) is the AR(1) coefficient (the lag-1 autoregressive coefficient). In a saddle-node bifurcation scenario, \(\kappa \) tends to zero (\(\lambda \) to 1) on the way to the bifurcation point. Recall, however, that a noise-induced critical transition may actually occur before \(\lambda \) reaches 1, so to see whether a system behaves as a ‘usual’ complex system we should not necessarily expect to observe an increase in the autocorrelation all the way to 1, but merely a significant increase in the autocorrelation.

The random disturbances in Eq. (5) are assumed to be white noise with zero mean. The first-order autocorrelation coefficient \(\lambda \) is treated as approximately constant within a local time window of length n. We estimate \(\lambda \) by ordinary least-squares (OLS) regression of the model

$$\begin{aligned} y_{t}=\lambda y_{t-1} + \varepsilon _t \end{aligned}$$
(6)

over the set of indices \(t=j-n+1, \ldots , j\), \(j=n, \ldots , T\). As j increases, the local window slides from left to right, and a time series of estimated AR(1) coefficients is obtained, varying with the index j. An increase in this time-varying AR(1) coefficient is expected when the system approaches a saddle-node bifurcation.

Besides the smoothing bandwidth, the estimation window size n is also an important parameter, the choice of which again involves a trade-off. A smaller window size allows us to track short-term changes in the time-varying autocorrelation. However, taking too small a window size, with very few observations, will make the estimation of the autocorrelation less precise. Following Dakos et al. (2008), we use half the size of the analyzed time series T as the sliding window size n, i.e., \(n=T/2\), and since we use two different sample sizes (\(T=200\) and 500), this gives us two different estimation window sizes: \(n=100\) and 250.
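A moving-window implementation of the AR(1) indicator might look as follows (a sketch, reusing the detrended fluctuations y from above; since the residuals fluctuate around zero, the OLS regression of Eq. (6) is taken through the origin):

```r
# Sketch of the time-varying AR(1) indicator: OLS estimate of lambda in Eq. (6)
# within a moving window of length n, regression through the origin
ar1_indicator <- function(y, n) {
  sapply(n:length(y), function(j) {
    w <- y[(j - n + 1):j]                  # window y_{j-n+1}, ..., y_j
    sum(w[-1] * w[-n]) / sum(w[-n]^2)      # OLS slope of y_t on y_{t-1}
  })
}
lambda_hat <- ar1_indicator(y, n = length(y) / 2)  # n = T/2, following Dakos et al. (2008)
```

As j slides from n to T, this yields the time series of AR(1) estimates whose trend is subsequently tested.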

Mutual Information indicator While the AR(1) indicator captures linear relationships, the mutual information (MI) also takes nonlinear correlations into account. This measure is included here as a natural extension of the first-order AR(1) parameter usually employed. In information theory, the MI measures the dependence between two random variables: it quantifies the amount of information shared between them by determining how similar the joint probability density p(X,Y) is to the product of the marginal densities, p(X)p(Y). The more dependent X and Y are, the more information they carry about each other; this amount of information is quantified by their MI.

The time delayed mutual information between a variable measured at time t and time \(t-\eta \) is given by

$$\begin{aligned} I(X_t,X_{t-\eta })=\int p(x_t,x_{t-\eta }) \log \frac{p(x_t,x_{t-\eta })}{p(x_t)p(x_{t-\eta })} \, \mathrm{d} x_t \, \mathrm{d} x_{t-\eta }, \end{aligned}$$
(7)

where \(\eta \) is the time lag. The marginal probability density functions are \(p(x_t)\) and \(p(x_{t-\eta })\), and \(p(x_t,x_{t-\eta })\) is the joint probability density function of the variable measured at time t and the same variable measured at time \(t-\eta \). In this paper, the lag-1 (\(\eta =1\)) MI indicator is estimated based on the observations in the moving window. The R package “tseriesChaos” is used to estimate the MI, which implements the method proposed by Hegger et al. (1999), in which the integral in Eq. (7) is replaced by the corresponding sum over discretized bins. We follow Quax et al. (2013) and discretize the data into \(h=10\) bins.
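A corresponding moving-window sketch for the MI(1) indicator, using tseriesChaos::mutual as in the paper (we assume here that mutual returns the estimated MI at lags \(0, \ldots ,\) lag.max, so that the second entry is the lag-1 value):

```r
# Sketch of the time-varying MI(1) indicator via tseriesChaos::mutual,
# with h = 10 equally sized bins as in the paper
library(tseriesChaos)
mi1_indicator <- function(y, n, bins = 10) {
  sapply(n:length(y), function(j) {
    w <- y[(j - n + 1):j]
    mutual(w, partitions = bins, lag.max = 1, plot = FALSE)[2]  # lag-1 entry
  })
}
mi1_hat <- mi1_indicator(y, n = length(y) / 2)
```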

Standard deviation indicator Critical slowing down also induces an increased amplitude of the fluctuations when approaching the critical parameter value. This corresponds with an increase in the SD indicator (Scheffer et al. 2009), which we estimate as the sample standard deviation within the window,

$$\begin{aligned} \widehat{\mathrm{St.Dev.}}_j=\sqrt{\frac{1}{n-1} \sum _{t=j-n+1}^j (y_t - \hat{\mu }_j)^2}, \qquad j=n, \ldots , T. \end{aligned}$$
(8)

The time-dependent variance estimate produced by the moving window procedure is also a very commonly used early warning indicator preceding a critical transition. In particular, Carpenter and Brock (2006) and Biggs et al. (2009) have advocated its use, noting that the variance seems to be more robust than the AR(1) coefficient as an EWS and easier to generalize to multivariate cases.
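The SD indicator of Eq. (8) follows the same moving-window pattern (a sketch; R's sd uses the same \(1/(n-1)\) normalization as Eq. (8)):

```r
# Sketch of the moving-window standard deviation indicator of Eq. (8)
sd_indicator <- function(y, n) {
  sapply(n:length(y), function(j) sd(y[(j - n + 1):j]))
}
sd_hat <- sd_indicator(y, n = length(y) / 2)
```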

4.3 Establishing trends

For each indicator observed across time, i.e., AR(1), MI(1) and SD, following Dakos et al. (2008) we test the trend over time for significance using Kendall’s rank correlation \(\tau \) (Kendall’s tau) between the time-varying indicator and the time variable. This nonparametric statistical measure of the degree of concordance between two ordinal variables is given by

$$\begin{aligned} \hat{\tau }=\frac{C-D}{N}, \end{aligned}$$

where C is the number of concordant pairs, D is the number of discordant pairs, and \(N=n(n-1)/2\) is the total number of distinct pairs. The quantity \(\tau \) lies in the range \([-\,1,1]\); if Kendall’s \(\tau \) is close to 1, the agreement between the two rankings is perfect, and a high value of Kendall’s \(\tau \) suggests a strong trend. In the presence of critical slowing down, one expects to find a significant upward trend, as indicated by a significantly positive value of Kendall’s \(\tau \).

In our analysis, we use two ways to assess the significance of the trend. We first report the ‘naive’ p values of the observed trends, ignoring the temporal dependence of the subsequently observed values of the indicators. Since this independence assumption is likely to be violated, due to the partial overlap of prices in the estimation windows, we subsequently re-assess the statistical significance in a more sophisticated way, by employing a bootstrap method that corrects for the dependence in the observed indicators as the moving estimation window moves forward.
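The naive trend test can be sketched in R as follows, correlating a time-varying indicator with the time index via cor.test; the one-sided alternative reflects the upward trend expected under critical slowing down.

```r
# Sketch: naive Kendall tau trend test of a time-varying indicator against time,
# ignoring the serial dependence induced by the overlapping estimation windows
trend_test <- function(indicator) {
  cor.test(indicator, seq_along(indicator),
           method = "kendall", alternative = "greater")
}
trend_test(lambda_hat)   # reports the estimated tau and the naive one-sided p value
```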

5 Results

The evidence for critical slowing down is evaluated in two steps. Firstly, we investigate the early warning signals before actual critical transitions: using six historical time series, we examine four well-known extreme financial events in history, namely Black Monday 1987, the Asian Crisis, the Dot-com Bubble and the 2008 Financial Crisis. Secondly, we control the rate of false positives (i.e., the likelihood of spurious early warnings) by estimating the probability of obtaining a similar or more extreme trend in the indicator by chance, using a bootstrap method. Finally, we perform a robustness check with respect to the parameters set by the user in the analysis.

5.1 Financial time series

We analyze time series associated with the following four financial crises. Due to space considerations, we provide and discuss graphs only for pre-crisis sample size \(T=200\); trend statistics for the indicators are later provided for \(T=500\) as well.

Black Monday 1987 During a single day, October 19, 1987, the Dow Jones Industrial Average (DJIA) index lost nearly 22%. By the end of that month, most of the major exchanges had dropped by more than 20%. Stock markets around the world crashed, beginning in Hong Kong, spreading to Europe, and hitting the USA after other markets had declined by a significant margin. This event marked the beginning of a global stock market decline, making “Black Monday” one of the most dramatic days in recent financial history.

Figure 3 shows the time-varying early warning indicators during about half a year preceding “Black Monday”, based on the Standard & Poor’s 500 index (S&P 500) time series. The original time series in Fig. 3a is the logarithm of the daily S&P 500 index. The time series displayed starts 200 trading days before the crash and ends 100 days after it. Stock markets raced upward during the first half of 1987, but depreciated strongly in the last few months of the year. The vertical dashed line indicates Black Monday, which we identify with (the start of) the critical transition. Since we are interested in EWSs before the critical transition, the time-varying indicators are strictly based on the data before the dashed line. To facilitate interpretation, we set the critical transition at 0 on the x-axis, to clearly distinguish the days before and after it. The red graph corresponds to the smoothed time series obtained by the Gaussian kernel smoother. The double-headed arrow shows the width of the moving window. Figure 3b shows the residuals, that is, the detrended time series used to estimate the early warning indicators.

Fig. 3 Early warning indicators for “Black Monday” based on the S&P 500 time series, pre-crisis sample size \(T=200\). a Logarithm of the daily S&P 500 index. b Residuals time series. c AR(1) indicator. d MI(1) indicator. e SD indicator. The vertical dashed line in (a) indicates the critical transition. The double-headed arrow shows the width of the moving window used to compute the indicators shown in panels (c), (d) and (e). The red graph denotes the smoothed time series used for filtering. (Color figure online)

Figure 3c–e shows the time-varying early warning indicators AR(1), MI(1) and SD. They show that the great crash on “Black Monday” was preceded by overall upward trends in these indicators. All of these positive trends are confirmed by a positive Kendall rank correlation coefficient \(\tau \). The (naive) p values corresponding to the trends of the indicators strongly suggest an increase in the indicators during the period preceding the critical transition, suggesting that the S&P 500 time series indeed slows down before the critical transition. As noted above, we will apply a bootstrap technique to obtain p values that take the temporal dependence of the time-varying indicators into consideration.

Fig. 4 Early warning indicators for the Asian Crisis based on the Hang Seng index, pre-crisis sample size \(T=200\). a Logarithm of the daily Hang Seng index. b Residuals time series. c AR(1) indicator. d MI(1) indicator. e SD indicator. The vertical dashed line in (a) marks the critical transition. The double-headed arrow in panel (a) shows the width of the moving window used to compute the indicators shown in (c), (d) and (e). The red graph denotes the smoothed time series used for filtering. (Color figure online)

The Asian Crisis Using the same techniques, we examine the Hang Seng time series. Figure 4 shows the analysis of early warning indicators during about one and a half years before the Asian Crisis. Panel (a) displays the logarithm of the daily Hang Seng index from November 1995 to July 1998. This time series increases in the beginning but collapses around mid-1997, reflecting the onset of the Asian financial crisis in July 1997. The Asian Crisis exhibited a series of currency devaluations along with stock market declines. The currency market first failed in Thailand, when its government stopped pegging the local currency, the Thai baht, to the US dollar. The currency crisis rapidly caused stock market declines spreading throughout Southeast Asia; Thailand, South Korea and Indonesia were most affected by the crisis. As a result, the stock markets in Japan and most of Southeast Asia fluctuated dramatically.

Figure 4 has a similar format to Fig. 3. The smooth red curve in panel (a) shows the moving average used for filtering. The double-headed arrow shows the width of the moving window, which is again half the size of the analyzed time series. Panel (b) shows the residuals used to estimate the early warning indicators. The results for the Asian crisis, at least for \(T=200\) (\(n=100\)), are mixed; there is a significant positive trend in the SD indicator (panel (e)), but the AR(1) and MI(1) indicators shown in panels (c) and (d) display a significant downward trend before the critical transition of July 1997.

Fig. 5 Early warning indicators for the Dot-com bubble based on the NASDAQ Composite index (\(T=200\)). a Logarithm of the daily NASDAQ index. b Residuals time series. c AR(1) indicator. d MI(1) indicator. e SD indicator. The vertical dashed line in (a) indicates the critical transition. The double-headed arrow shows the width of the moving window used to compute the indicators shown in (c), (d) and (e). The red graph denotes the smoothed time series used for filtering. (Color figure online)

The Dot-com Bubble Figure 5 presents the analysis of early warning indicators from \(n=100\) trading days before the Dot-com bubble collapse. Boosted by the rapid commercial growth of the internet, the NASDAQ Composite index experienced a speculative bubble, as shown in Fig. 5a. It peaked around the year 2000, following a typical boom and bust cycle; when the bubble “burst,” the stock prices of dot-com companies fell dramatically. Some companies, such as Pets.com, went out of business completely. Others, such as Cisco and Amazon.com, survived, but their stocks declined by more than 80%.

As for the Asian Crisis, the analysis of the early warning indicators shows mixed results for the NASDAQ index before the Dot-com bubble. The bubble collapse is preceded by an overall upward trend in the standard deviation, but a downward trend in the AR(1) and MI(1) estimates.

Fig. 6 Early warning indicators for the 2008 financial crisis using the S&P 500 index (I.), the TED spread (II.) and the VIX (III.), pre-crisis sample size \(T=200\). For each of the analyses: a Logarithm of the daily original time series. b Residuals time series. c AR(1) indicator. d MI(1) indicator. e SD indicator. The vertical dashed lines in panels (a) indicate the critical transition. The dashed arrows show the width of the moving window used to compute the indicators shown in (c), (d) and (e). The red graphs denote the smoothed time series used for filtering. (Color figure online)

The 2008 Financial Crisis The financial crisis of 2008 is known as the most severe financial crisis since the Great Depression of the 1930s. It was triggered by the bursting of the US housing bubble, which peaked approximately in 2005–2006. Banks had been giving out more loans than ever before to potential home owners. When the housing bubble finally burst in the latter half of 2007, the secondary mortgage market collapsed and over 100 mortgage lenders went bankrupt during 2007 and 2008. Several major financial institutions failed or had to be rescued, including Lehman Brothers, Merrill Lynch, Washington Mutual and Citigroup. Economies worldwide experienced a great recession and stock markets around the world declined.

Figure 6 shows the early warning indicators around one year before the 2008 financial crisis based on three different time series: the S&P 500 (I.), the TED spread (II.) and the VIX (III.). The VIX is a widely used index of the implied volatility of the S&P 500 over the next 30 days.

The analyses show mixed results. For the S&P 500 index (I.), the AR(1) and MI(1) indicators, as well as the SD indicator, show a downward trend. The analysis of the TED spread (II.) shows strong upward trends in the AR(1) and MI(1) indicators, suggesting a slowing down before the critical transition around the time Lehman Brothers went bankrupt. However, for the same period, the SD indicator shows a downward trend. The analysis of the VIX (III.) shows significant downward trends in the AR(1) and MI(1) indicators preceding the 2008 critical transition; the SD indicator also has a significant downward trend.

Table 2 Studies of early warning indicators for critical transitions in different time series, for various sample sizes T and window lengths n; \(\tau \) is the estimated Kendall’s \(\tau \) coefficient

A summary of the naive p values is provided in Table 2. The trends observed in the AR(1), MI(1) and SD indicators preceding the crises are all either significantly positive or significantly negative. The only cases where all indicators are significantly positive are Black Monday with sample size \(T=200\) and the Asian Crisis for \(T=500\). Overall, these results are highly significant, but often with the unexpected negative sign. This may be related to the fact that only naive p values have been reported so far.

5.2 Bootstrap time series

Above we reported results based on naive p values, which ignore the dependence among the sequentially observed values of the indicators. To take this into account, for each empirically observed value of Kendall’s \(\tau \) we also calculate Kendall’s \(\tau \) for a large number of bootstrap time series. This allows us to assess a p value corrected for serial dependence. The observed bootstrap distribution of tau values is used to assess the likelihood that the observed value of the trend statistic in the original data has occurred by chance in the absence of an actual trend, taking into account temporal dependence. The corrected p value is the estimated probability, in the absence of a trend, that a \(\tau \) value equal to or larger than that obtained from the original time series is observed among the bootstrap replications.

Dakos et al. (2008) applied a bootstrap to the residuals from the smoothing step. However, this treats the residuals as independent, which does not seem realistic in cases where there is autocorrelation in the residuals. Instead, we therefore decided to bootstrap the log returns of the original price time series; bootstrap log price time series (levels) are then easily obtained again by taking cumulative sums. The motivation for this approach is that log returns are known to be only weakly correlated. Preliminary comparisons of the returns bootstrap with the residuals bootstrap showed that the latter is slightly less conservative (i.e., leads to smaller p values, with the possible risk of over-rejection due to ignoring the dependence mentioned above). Since refinements of the log returns bootstrap, such as taking into account GARCH(1,1) effects, gave very similar results, we decided to report only the results for the log returns bootstrap in this paper.
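A sketch of this log-returns bootstrap, reusing gaussian_detrend and ar1_indicator from the sketches above (the same scheme applies to the MI(1) and SD indicators):

```r
# Sketch of the log-returns bootstrap: resample daily log returns with replacement,
# rebuild a log price path by cumulative summation, and recompute the Kendall tau of
# the indicator; the bootstrap p value is the fraction of replications with a trend
# at least as large as the observed one
bootstrap_pvalue <- function(z, n, tau_obs, B = 1000, bandwidth = 10) {
  r <- diff(z)                              # log returns, known to be nearly uncorrelated
  taus <- replicate(B, {
    z_star <- c(z[1], z[1] + cumsum(sample(r, replace = TRUE)))
    ind    <- ar1_indicator(gaussian_detrend(z_star, bandwidth), n)
    cor(ind, seq_along(ind), method = "kendall")
  })
  mean(taus >= tau_obs)                     # bootstrap p value P(tau >= tau_obs)
}
```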

Fig. 7 Histograms of Kendall’s tau obtained for \(B=1000\) bootstrap replications from the S&P 500 time series from 1950 to 1987, for the a AR(1), b MI(1) and c SD indicators. The blue dashed lines indicate Kendall’s tau for the empirically observed data preceding Black Monday. The arrows indicate the realizations for which the trend statistic is equal to or higher than the trend statistic based on the historical data prior to Black Monday (p values indicated by percentages in blue). (Color figure online)

The probability \(P(\tau \ge \tau ^{*})\) of obtaining an equally large or larger trend statistic by chance is estimated by the bootstrap p value, i.e., the fraction of bootstrap trend statistics at least as high as the trend statistic based on the original time series. These p values are indicated by percentages in Fig. 7. For the AR(1) and MI(1) indicators, the (one-sided) bootstrap p values, after taking into account the temporal dependence in the indicators, are 4.5 and \(5.2\%\), respectively. Therefore, the trend in the AR(1) indicator for the S&P 500 index prior to Black Monday is significant at the conventional \(5\%\) significance level, and that in the MI(1) indicator at the \(10\%\) level. For the SD indicator, the bootstrap p value is \(30.9\%\), i.e., insignificant at the conventional significance levels of 10, \(5\%\) or less.

Table 3 Likelihood of obtaining trend statistic estimates by chance, estimated by the fraction of bootstrap estimated trend statistics (Kendall \(\tau \)-values) being larger than the trend statistic of the original residual records. Number of bootstrap replications \(B=1000\), \(\sigma =10\)

Table 3 shows the likelihood of obtaining the observed or a larger trend statistic estimate by chance, as estimated by the fraction of bootstrapped Kendall \(\tau \) values being larger than or equal to the trend statistic of the original residual records. The p values in Table 3 can thus be interpreted as the probability of observing, by chance, a trend in the respective indicator that is as large as or larger than the trend in the indicator for the observed time series data.

First focusing on the smaller sample size (\(n=100\), top panel), one can observe evidence of a positive trend in the AR(1) indicator, and to a lesser extent in the MI(1) indicator, prior to the 1987 Black Monday crash. We also observe some weak evidence for a positive trend in the SD indicator prior to the burst of the Dot-com bubble. If we strictly focus on results that are significant at the \(5\%\) significance level or smaller, the only remaining significant result is a positive trend in the AR(1) coefficient prior to the 1987 Black Monday crash. In this respect, our results do not strongly confirm the finding of Guttal et al. (2016) that the SD indicator increases significantly before financial crises. Note, however, that our results also do not contradict theirs; the difference in significance may be attributed to the use of a different set of crises, and to differences in pre-crisis sample size T, estimation window n, smoothing window size \(\sigma \) and details of the bootstrap methods used. Also note that Guttal et al. (2016) obtained their results for estimation window sample size \(n=500\) (about two years), which, as indicated above, we a priori considered unrealistically large to detect trends in the indicators reliably, as we expected those trends to develop on smaller time scales (of several months, up to half a year, say). For the larger sample size (\(n=250\), bottom panel), we observe a marginally significant positive trend in the SD indicator before the 2008 Crisis based on the S&P 500 time series, and a very significant (at the 2% significance level) positive trend in the MI(1) indicator for the same crisis based on the TED spread. This latter result deserves some further attention, in particular because the results obtained for MI(1) and AR(1) are very similar for all results reported in Tables 2 and 3, except for the TED spread prior to the 2008 crisis for \(n=250\).

Fig. 8 Early warning indicators for the 2008 financial crisis using the TED spread, pre-crisis sample size \(T=500\). a Logarithm of the daily original time series. b Residuals time series. c AR(1) indicator. d MI(1) indicator. e SD indicator. The vertical dashed line in panel (a) indicates the critical transition. The dashed arrow shows the width of the moving window used to compute the indicators shown in (c), (d) and (e). The red graph denotes the smoothed time series used for filtering. (Color figure online)

At first sight, one may be tempted to conclude that the MI(1) indicator apparently is able to pick up some nonlinear dependence within the time series that the AR(1) indicator is not able to pick up. However, visual inspection of the time series of residuals from the smoothing step (Fig. 8, panel b) raised our concern that the trend may have been caused primarily by changes in the sample marginal distribution as the estimation window was moved forward. Figure 9 shows the histograms of the first and second half of the pre-crisis TED spread residuals. The histograms show that the residuals have a more fat-tailed distribution in the first half of the pre-crisis period than in the second half. This is confirmed by the excess kurtosis, which is 6.90 for the first half and only 0.22 for the second half of the pre-crisis residuals.

It is known that, although theoretically the mutual information is invariant under (invertible) transformations of the marginal distributions of the time series considered, the estimation of the mutual information involves a discretization step (binning) which in practice affects the estimates. Typically (and also in the algorithm as implemented in the R package “tseriesChaos” we used), the bins are taken equally sized between the largest and smallest observed values. Now imagine the case where the data are normally distributed versus the case where the data are leptokurtic (i.e., the distribution has fatter tails and the density is more peaked near the center of the distribution, relative to the normal). If we use the same number of bins in both cases (10 here), the bins near the mode of the distribution will be much fuller for the leptokurtic data than for the normal data, and vice versa for the bins in the tails. This may well affect the MI(1) estimates.

We therefore wish to check whether the small p value observed for the TED spread for \(n=250\) is not just a result of the marginal distribution changing as the moving window moves forward, while the marginal distribution of the bootstrap-based residual time series is (by construction) constant over time. To test this, in an additional numerical experiment we modified the algorithm for the estimation of MI(1) by adding a preliminary step: we transformed the marginal distribution of each time series for which MI(1) was to be calculated to the standard normal distribution, via the empirical CDF followed by the inverse standard normal CDF. Note that in theory this marginal transformation does not change the mutual information, but, as argued above, it may well affect the numerical estimate of the mutual information.
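A sketch of this preliminary normalization step (a rank-based empirical CDF followed by the inverse standard normal CDF; the MI(1) computation itself is unchanged):

```r
# Sketch: transform each window to standard-normal marginals before estimating MI(1),
# so that the equal-width binning is no longer affected by the sample marginal's shape
to_normal_marginals <- function(w) {
  qnorm(rank(w) / (length(w) + 1))          # empirical CDF, then inverse normal CDF
}
mi1_normalized <- function(y, n, bins = 10) {
  sapply(n:length(y), function(j) {
    w <- to_normal_marginals(y[(j - n + 1):j])
    mutual(w, partitions = bins, lag.max = 1, plot = FALSE)[2]
  })
}
```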

Using this new way of estimating MI(1), independent of the sample marginals, the trend in the MI(1) estimate changed from \(\tau =0.769\) to \(\tau =-\,0.143\), which in fact has the same sign as the trend observed for the AR(1) indicator (\(\tau =-\,0.383\), see Table 2). Correspondingly, the p value changed from 0.018 to an insignificant value of 0.576. In our view, this indicates that there was a significant positive trend in the original MI(1) indicator only because the shape of the marginal distribution was more leptokurtic in the first part of the data than in the second, and hence that the significant result was spurious. Moreover, we rule out the possibility that the change in distribution is a sign of critical slowing down, as it is primarily caused by a short period with larger amplitude near the end of the first half of the sample. The smaller sample does not contain this period, which explains why a similar spurious result is not observed there. Taken together, the results obtained with MI(1) here suggest that, although the MI(1) indicator initially seemed promising for its ability to pick up nonlinear dependence, the standard binning-based algorithm for MI estimation is too sensitive to details of the sample marginal distribution to be applied directly in EWS settings.

Fig. 9 Histograms of the first and second half of the pre-crisis TED spread residuals, based on an equal number of bins (10)

5.3 Robustness of parameters

In addition to varying the sample size, we also wish to check the robustness of the results with respect to changes in the other user-set parameters of the analysis. Apart from the pre-crisis sample size, there are two other key parameters to be set by the user: the smoothing bandwidth \(\sigma \) and the moving window size n, which we now do not necessarily take to be half the pre-crisis sample size. The bandwidth is an important parameter when filtering out long-term trends from the original time series. There is a trade-off involved in the bandwidth choice; too narrow a bandwidth would remove not only the long run trends but also the short run fluctuations we intend to study, while too wide a bandwidth might leave some slow trends in, which may lead to spurious detection of trends in the indicators. A similar trade-off also plays a role in the choice of the estimation window size; a smaller window size is better for tracking short run changes, but too small a window size with too few sample points would make the estimates less reliable.

To check the robustness of our analysis with respect to these parameters, we perform an additional analysis using various combinations of rolling estimation window size and bandwidth. The contour plot in Fig. 10 shows the influence of these parameters on the observed trends of the AR(1) indicator for Black Monday 1987. The white dot indicates the parameters used in our early warning analysis; the black dot indicates the combination of window size and bandwidth that shows the strongest positive trend. The histogram of Kendall’s tau values in Fig. 10b confirms the presence of the strong positive trends visible in the contour plot. The contour plot also indicates that although the results clearly depend on the parameters chosen, they depend on them quite smoothly, in particular around the maximum, so that there is a large set of parameters for which significant results are obtained for the trend in the AR(1) coefficient prior to Black Monday 1987.
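The scan behind Fig. 10 can be sketched as follows, reusing the earlier helper functions (the grid ranges shown are illustrative, not the exact grid of Fig. 10):

```r
# Sketch of the robustness scan: Kendall's tau of the AR(1) trend over a grid of
# moving window sizes n and smoothing bandwidths sigma
robustness_grid <- function(z, windows, bandwidths) {
  outer(windows, bandwidths, Vectorize(function(n, bw) {
    ind <- ar1_indicator(gaussian_detrend(z, bw), n)
    cor(ind, seq_along(ind), method = "kendall")
  }))
}
n.grid  <- seq(50, 150, by = 10)
bw.grid <- seq(4, 24, by = 2)
taus    <- robustness_grid(z, n.grid, bw.grid)
filled.contour(n.grid, bw.grid, taus,
               xlab = "window size n", ylab = "bandwidth sigma")
```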

Fig. 10 Analysis of the robustness of parameters in the case of Black Monday 1987: window size and bandwidth. a Contour plot of Kendall’s tau over combinations of rolling window size and bandwidth; the black dot indicates the combination with the strongest positive trend, while the white dot indicates the parameters used in the analysis in this paper. b Histogram of the Kendall’s tau values represented in (a)

6 Summary and discussion

According to the theory of critical slowing down, an upcoming critical transition is characterized by increasing first-order autocorrelation and increasing variance, so that these indicators can serve as warning signals for upcoming critical transitions. The theory has been confirmed successfully in paleoclimate time series prior to climate transitions by Held and Kleinen (2004) and Livina and Lenton (2007), and the methodology was developed further by Dakos et al. (2008). In this paper, we investigated whether financial systems behave consistently with this theory, by checking whether crises are also preceded by critical slowing down. To this end, four financial crises were analyzed using six time series, for two different pre-crisis sample sizes, by estimating three early warning indicators—AR(1), MI(1) and SD—over time using a moving estimation window.

The basic (naive) results show positive as well as negative trends in the indicators preceding all crises. In order to assess the significance of these trends, we use bootstrap time series to obtain a reference distribution for the observed trends in the indicators. Focusing on sample size \(n=100\), the bootstrap p values confirm the significance (at the 5% significance level) of a positive trend in the AR(1) indicator preceding Black Monday 1987 in the S&P 500 time series, while the bootstrap p value for the MI(1) indicator is also small (nearly significant at the 5% level). Some of the p values for trends in the SD indicator also turned out to be small and nearly significant at the \(5\%\) level: for the Dot-com bubble for \(n=100\), and for the 2008 Financial Crisis (S&P 500) for \(n=250\). Notably, the MI(1) indicator based on the TED spread showed a very significant positive trend (\(p=0.018\)) before the 2008 Financial Crisis (\(n=250\)) that was not picked up by the AR(1) indicator. However, after carefully eliminating the effect of changes in the marginal distribution over time, a downward trend in MI(1) was actually observed, which was no longer significant (\(p=0.576\)), indicating that the initially significant bootstrap result was spurious. This leaves only one significant positive trend: in the S&P 500 prior to Black Monday 1987.

There are a number of possible reasons why we found so little evidence supporting the idea that financial time series show critical slowing down prior to financial crises. Firstly, the tools used so far for detecting critical slowing down are based on a linear approximation to the true dynamics. As the complexity of the nonlinear dynamics increases, the prediction of bifurcations may become harder. Sieber and Thompson (2012) tried to extend the techniques using nonlinear features; however, they did not find discernible trends in the nonlinear case.

Secondly, the transitions in complicated financial systems may occur far from local bifurcations and need not correspond to critical transitions. For instance, there may be an early escape from a stable equilibrium due to exogenous shocks—a so-called noise-induced transition (Thompson and Sieber 2010). In particular, the emergence of new technology and financial instruments nowadays makes financial markets more complex. This could explain the failure to detect the 2008 financial crisis in advance, despite the use of advanced economic and financial models.

Thirdly, the approach is based on one-dimensional systems with only one state variable and one control parameter, while real financial systems are far more complicated. A multivariate approach may be required to capture the essence of the dynamics of financial systems. It is known, for instance, that in periods of market stress the returns on individual stocks are more strongly correlated than in more tranquil periods.

Fourthly, asset pricing models are usually based on the assumption that the fundamental price follows a geometric random walk process. Bottom-up heterogeneous agent models in which agents are boundedly rational typically describe endogenous fluctuations around this fundamental price, where the latter is often taken to be a unit-root process (Boswijk et al. 2007). This means that (log) prices are naturally characterized by an AR(1) process with an AR coefficient equal to or close to 1, which does not fit well with the scenario of an attracting fixed point losing stability.

Finally, a fifth reason why critical slowing down may not be a suitable scenario for financial time series data, or social systems more generally, is that these systems differ from most natural systems due to the presence of smart agents (“atoms that can think”) whose main activities are based on their expectations of future price developments. Agents learn and adapt their behavior and may respond to news announcements, and in particular to early warning signals of a crisis. Their expectations and adaptive behavior in response to the risk of an upcoming market devaluation may actually trigger or accelerate a self-fulfilling crisis. This touches upon the issue of whether the ability to detect an upcoming financial crisis may cause a panic reaction among agents and thus trigger the crisis even earlier. These nonlinear expectations feedback mechanisms and adaptive behavior may make the detection of early warning signals for social and economic systems much harder than for natural complex systems.

Recognizing that economic and financial systems may behave differently from many natural systems, and that critical slowing down may not be a typical phenomenon preceding market collapses, we conclude with a number of remarks regarding possible future approaches toward developing EWSs for financial crises. Many possibilities remain open here. For instance, forecasting techniques based on pattern recognition might be exploited to extract signals from financial time series data. Such signals could be based on, for instance, machine learning algorithms for forecasting prices. One could also imagine the use of not one but a number of different learning algorithms, the forecasts of which could be combined into a single signal. In view of the increased correlations between stock returns in periods of market stress, multivariate approaches, for instance based on increased cross-sectional correlations between individual stocks, are also promising for future EWS studies based on financial time series data. Finally, the use of complex network techniques to monitor the evolving structure of financial-economic networks may provide alternative early warning indicators for crises.