Encyclopedia of Computational Neuroscience

Living Edition
| Editors: Dieter Jaeger, Ranu Jung

Determining Network Structure from Data: Nonlinear Modeling Methods

  • Bjoern Schelter
  • Marco Thiel
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7320-6_439-2




In this entry we focus on the inference of network structures from data. One possible approach to studying networks is to model the nodes, such as neurons, generate networks, run simulations, and observe the resulting network behavior. This approach requires a priori assumptions about the constituent parts; for instance, Hodgkin-Huxley neurons may be coupled and the resulting network behavior investigated. The model behavior can be compared to the measured neuronal signals through statistical analysis. However, this approach provides only indirect information about the network structure. An alternative approach is to use parametric, semiparametric, and nonparametric analyses of the observed signals to reconstruct the network connections. Such analyses are essential tools for systems in which the network structure is not known, for instance, when the anatomical connections are unknown or the available information is insufficient.

A particular challenge when inferring the network structure from data lies in the fact that analysis techniques to investigate interactions are often bivariate, investigating pairwise connections. Often, such analyses cannot distinguish between direct and indirect interactions. A good measure of coupling between nodes should be able to make this distinction, and to achieve it, a multivariate data analysis approach must be used. For linear systems, multivariate data analysis approaches have been developed over the past decades. For nonlinear systems, however, the development of multivariate analysis techniques is still in its infancy: nonlinear approaches typically require computationally intensive algorithms and large data sets, which has limited their application.

In this entry, we provide a survey of approaches for inferring the network structure from nonlinear systems using nonlinear data-based modeling.

Detailed Description


Mathematicians and physicists often generate models of complex networks starting from first principles. However, for the complex biological systems found in neuroscience, the neuronal dynamics underlying the measured signals are poorly understood, rendering an approach based on first principles impractical. An understanding of the behavior can therefore only be based upon the analysis of measured data of the dynamics, i.e., time series from EEG or other population-level signals, and point processes, from action potential timing. This approach is called data-based modeling.

Time series and point process analysis have different roots, originating in mathematics, physics, and engineering. The underlying assumptions about the sources of the signals differ, resulting in different approaches to the analysis. In mathematical statistics, the models have been based on linear stochastic systems; in physics, the models have been of nonlinear deterministic systems. While methodological developments evolved independently across disciplines, over the past decade cross-fertilization has resulted in novel approaches to data-based modeling of nonlinear stochastic systems (Schelter et al. 2006b).

Reconstruction of networks depends on the detection of coupling between areas, particularly causal influences between two different processes, for instance, between brain areas or between brain areas and the periphery. The goal is to detect changes in coupling that may be caused by the underlying diseases. Detection of changes in coupling may lead to improved diagnosis and treatment strategies especially for neurological diseases.

The approach proposed here considers the brain as a dynamic system whose activity we measure through the electroencephalogram (EEG), magnetoencephalogram (MEG), or another representation of the underlying neuronal activity. By applying the network analysis to data sets recorded from healthy subjects and from patients suffering from neurological diseases, we hope to gain an understanding of the underlying mechanisms generating these dysfunctions (Volkmann et al. 1996; Tass et al. 1998; Hellwig et al. 2001). Beyond the neurosciences, there is a wide variety of applications to which the linear as well as nonlinear data analysis techniques presented here can be applied successfully.

Various linear analysis techniques have been proposed to determine interdependency between dynamic processes and to determine causal influences in multivariate systems (Schelter et al. 2006a). Using the Fourier transform, a signal can be converted to the frequency domain and interdependencies analyzed using the cross-spectrum and coherence. But these tools are not sufficient to adequately describe interdependence within a multivariate system, where correlation between two processes may arise not because they are directly coupled but because they receive common input. To enable a differentiation between direct and indirect influences in multivariate systems, multivariate linear approaches have been developed (Dahlhaus 2000; Baccala and Sameshima 2001; Ding et al. 2006).

Furthermore, multivariate network analysis can uncover directed interactions, which enables deeper insight into the basic mechanisms underlying such networks. These tools can determine not only whether nodes are connected but also the direction of the connection. When communication is present in both directions, the two directions may even occupy distinct frequency bands. To detect directed connections, appropriate analysis techniques are needed. Granger causality (Granger 1969) is utilized to determine causal influences when the coupling is linear. This probabilistic approach to determining causality is based on the principle that a cause precedes its effect in time, which is formulated in terms of predictability. In practice, Granger causality analysis is performed by fitting autoregressive models to the signals and determining whether the prediction error for the next time point of one signal can be reduced by knowledge of the other. A graphical approach for modeling Granger-causal relationships in multivariate processes has been discussed (Eichler 2007). More generally, networks may be determined by the use of Granger causality between the nodes; unlike linear cross-correlation, this approach tries to address spurious causalities caused by confounding by unobserved processes (Eichler 2005).
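The predictability argument can be made concrete with a small numerical sketch. The following toy example (all process names and coefficients are illustrative, not taken from the literature cited above) fits autoregressive models to two coupled AR(1) processes, with and without the other signal's past, and uses the log ratio of residual variances as a bivariate Granger causality index:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    # x drives y with a one-step delay; y does not feed back into x
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y[t] = 0.7 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def resid_var(target, regressors):
    # One-step prediction error variance of a least-squares AR fit
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ beta)

# Granger index: does adding the other signal's past reduce the prediction error?
gc_x_to_y = np.log(resid_var(y[1:], y[:-1, None])
                   / resid_var(y[1:], np.column_stack([y[:-1], x[:-1]])))
gc_y_to_x = np.log(resid_var(x[1:], x[:-1, None])
                   / resid_var(x[1:], np.column_stack([x[:-1], y[:-1]])))
```

Here `gc_x_to_y` comes out clearly positive while `gc_y_to_x` stays near zero, reflecting the simulated unidirectional coupling.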

Nonlinear systems can show behaviors that are impossible in linear systems (Pikovsky et al. 2001); for example, nonlinear systems can synchronize. In the seventeenth century, Huygens observed synchronization between two pendulum clocks mounted on the same wall. These clocks are coupled self-sustained oscillators. The process of synchronization is an adaptation of certain characteristics of the two processes. Huygens' clocks always settled into antiphase oscillation and restored this synchronization after being perturbed (Pikovsky et al. 2000); for general systems, different phase relations are conceivable. A weaker form of synchronization has been observed between two coupled chaotic oscillators: such oscillators can synchronize their phases while their amplitudes stay almost uncorrelated (Pikovsky et al. 2001; Boccaletti et al. 2002; Wiesenfeldt et al. 2001). Several forms of synchronization have been described, ranging from phase synchronization via lag synchronization to almost complete synchronization (Boccaletti et al. 2002). Generalized synchronization is the most general case, in which one signal is related to the other through an arbitrary function.

While a battery of various analysis techniques is available for linear stochastic systems, for nonlinear systems techniques are still in their infancy. This entry is dedicated to the inference of the network structure based on the observation of nonlinear processes utilizing nonlinear data-based modeling approaches. The entry is subdivided into Phase-Based Approaches, Recurrence-Based Approaches, Information Theoretic Approaches, and Linear Approaches. We should note that it is impossible to provide a complete survey of all approaches.

We begin this entry by introducing the general concept underlying most of the approaches to provide a quick introduction into the topic.

General Concept

One challenge when inferring the network structure from data lies in distinguishing direct and indirect interactions. Given a network with direct and indirect interactions, a bivariate analysis can be applied to every pairwise combination of nodes to detect all connections. The resulting network could be fully connected. But indirect connections are often weaker than direct ones, so with finite statistical power one may hope to distinguish the direct from the indirect ones. Such an approach, however, relies on the assumptions that the indirect connections are the weakest in the network and that the statistical power is too low to detect them. In general, neither assumption is valid, and there is a risk of drawing erroneous conclusions, i.e., several false-positive connections.

Multivariate approaches attempt to correct for these problems of bivariate analyses. The idea is that if all contributing processes have been observed, it should be possible to infer whether or not a connection is indirect. This is achieved by partializing out the information of the third processes. Practically, there are several approaches to do this; here, we just provide an outline. To test whether the connection between two systems is direct or indirect, assume for simplicity that, besides the two systems in question, there is only one other system that could potentially influence them. Because all systems have been observed, it is possible to determine the amount of information transferred from the third system onto the two systems of interest. If this information suffices to fully explain the information transfer between the first two systems, then the connection between them is indirect. If the third system cannot explain all the information transfer, then the two systems are directly connected. Whether the connection between the two systems of interest is causal can be determined by using only past points of each process to predict the future behavior of the other processes.

In the following, we present some concepts for how the partialization is performed in practice. A complete coverage of all possible approaches is beyond the scope of this entry.

Phase-Based Approaches

Synchronization analysis is a common approach to detect interactions between nonlinear self-sustained oscillators (Pikovsky et al. 2001). Following the observations and pioneering work of Huygens, synchrony has been observed in many different systems, including systems with a limit cycle or a chaotic attractor. Several different types of synchronization have been observed for these systems, ranging from phase synchronization, the weakest form, via lag synchronization, to generalized or complete synchrony (Rosenblum et al. 1996; Rosenblum et al. 1997; Kocarev and Parlitz 1996; Pecora and Carroll 1990).

Phase synchronization analysis is a powerful tool because it detects even weak coupling between oscillators. For example, in some chaotic oscillators, very weak coupling can synchronize the phases but not the amplitudes (Rosenblum et al. 1996). To quantify the process of synchronization, different measures have been proposed (Tass et al. 1998; Mormann et al. 2000; Rosenblum et al. 2001). The first approach presented here is a measure based on circular statistics, the so-called mean phase coherence (Mormann et al. 2000). We introduce it by first reviewing phase synchronization in weakly coupled self-sustained oscillators. For a more detailed introduction to synchronization, including phase synchronization, we refer to the literature, e.g., Pikovsky et al. (2001).

Self-Sustained Oscillators

To discuss synchronization, we must start with a simple model of each node. One of the simplest behaviors with nonlinear dynamics is a self-sustained oscillator. In general these oscillators can be described using the very general differential equation:
$$ \dot{\mathbf{X}}(t)=f\left(\mathbf{X}(t),\boldsymbol{\alpha} (t),\mathbf{U}(t)\right), $$

where X(t) is a multidimensional variable, as more than one dimension is needed for oscillatory behavior. The external influence U(t) as well as the parameters α(t) are vectors. External driving is neglected in the following, and the parameters α i are assumed to be constant in time.

Two coupled oscillators can be written as follows:
$$ \begin{array}{l}{\dot{\mathbf{X}}}_1(t)={\mathbf{f}}_1\left({\mathbf{X}}_1(t),{\boldsymbol{\alpha}}_1\right)+{\upepsilon}_{1,2}{\mathbf{h}}_1\left({\mathbf{X}}_1(t),{\mathbf{X}}_2(t)\right)\\ {}{\dot{\mathbf{X}}}_2(t)={\mathbf{f}}_2\left({\mathbf{X}}_2(t),{\boldsymbol{\alpha}}_2\right)+{\upepsilon}_{2,1}{\mathbf{h}}_2\left({\mathbf{X}}_2(t),{\mathbf{X}}_1(t)\right)\end{array} $$

where ε i,j is the coupling coefficient quantifying the coupling from oscillator j onto oscillator i. If ε i,j is nonzero, the system is considered coupled. h 1(.) and h 2(.) are the coupling functions and can be arbitrary. Usually, diffusive coupling is assumed, i.e.,

h 1(X 1(t), X 2(t)) = X 2(t) − X 1(t), and analogously for h 2.

Phase Synchronization

The phase of a limit cycle oscillator is a monotonically increasing function with
$$ \Phi (t)\big|{}_{t= pT}=2\uppi p=\omega pT, $$

where p denotes the number of completed cycles, T is the time needed for one complete cycle, and ω is the eigenfrequency of the oscillator. But, more generally, we can consider the phase of any oscillation given by a differential equation

\( {\dot{\Phi}}_i(t)={\omega}_i,i=1,\dots, N \) where the ω i are the frequencies of the uncoupled oscillators with i denoting the i-th oscillator.

Differential equations describing the phase in a coupled two-oscillator system with different frequencies n and m can be written as follows:
$$ \begin{array}{l}\kern0.6em n{\dot{\Phi}}_1(t)=n{\omega}_1+{\upepsilon}_{1,2}n{H}_1\left({\Phi}_1,{\Phi}_2\right)\\ {}m{\dot{\Phi}}_2(t)=m{\omega}_2+{\upepsilon}_{2,1}m{H}_2\left({\Phi}_2,{\Phi}_1\right),\end{array} $$
which can be written as the phase difference between the two oscillators
$$ n{\dot{\Phi}}_1(t)-m{\dot{\Phi}}_2(t)=n{\omega}_1-m{\omega}_2+{\upepsilon}_{1,2}n{H}_1\left({\Phi}_1,{\Phi}_2\right)-{\upepsilon}_{2,1}m{H}_2\left({\Phi}_2,{\Phi}_1\right) $$

This equation represents the generalized phase difference (Pikovsky et al. 2001).

In the case ε 2,1 = ε 1,2, with the generalized phase difference \( {\Phi}_{1,2}^{n,m}=n{\Phi}_1-m{\Phi}_2 \) and the frequency detuning \( \Delta \omega =n{\omega}_1-m{\omega}_2 \), the above differential equation can be rewritten as
$$ {\dot{\Phi}}_{1,2}^{\mathit{n,m}}(t)=\Delta \omega +{\varepsilon}_{1,2}H\left({\Phi}_{1,2}^{\mathit{n,m}}\right) $$

with a new periodic function H(.).

This differential equation has a fixed point characterized by the following equation:
$$ \Delta \omega +{\varepsilon}_{1,2}H\left({\Phi}_{1,2}^{n,m}\right)=0 $$

In this case, the phase difference is constant with time. Thus, both phases maintain a fixed relationship and the system is considered to be n:m phase synchronized.
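This locking condition can be illustrated numerically. The sketch below assumes the common textbook choice H(Φ) = −sin Φ (the Adler equation); the values of Δω and ε are illustrative. A stable fixed point, and hence 1:1 phase locking, exists whenever |Δω| ≤ ε:

```python
import numpy as np

def phase_difference(delta_omega, eps, T=200.0, dt=0.01):
    # Euler integration of dPhi/dt = delta_omega + eps * H(Phi) with H(Phi) = -sin(Phi)
    n = int(T / dt)
    phi = np.empty(n)
    phi[0] = 0.1
    for t in range(1, n):
        phi[t] = phi[t - 1] + dt * (delta_omega - eps * np.sin(phi[t - 1]))
    return phi

locked = phase_difference(delta_omega=0.1, eps=0.5)    # |dw| < eps: locking
drifting = phase_difference(delta_omega=0.5, eps=0.1)  # |dw| > eps: phase drift
```

In the locked case the phase difference settles at the fixed point sin Φ* = Δω/ε; in the drifting case it grows without bound at the mean rate (Δω² − ε²)^(1/2).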

If some stochastic influence is present in the oscillators, which in turn alters the phase difference dynamics, certain fluctuations are possible but these are restricted by an appropriately chosen constant, i.e.,
$$ \left|{\Phi}_{1,2}^{n,m} \mod 2\uppi \right|\le \mathrm{const} $$

In this case there will be variability between the phases, but the phase synchronization will still be preserved.

The Mean Phase Coherence

If the weakly coupled self-sustained oscillators are phase synchronized, the above equation can be reformulated to yield a single number to quantify the phase synchrony.

A sharp peak in the distribution of the phase difference \( {\Phi}_{1,2}^{n,m} \mod 2\uppi \) indicates that the two phases exhibit a coherent motion; a flat distribution indicates that the two phases evolve independently. Based on circular statistics, the concentration of the phase difference distribution can be quantified as follows:
$$ \left|{R}_{12}^{n,m}\right|=\left|\frac{1}{T}{\displaystyle \sum_{t=1}^T{e}^{i{\Phi}_{1,2}^{n,m}(t)}}\right| $$

This quantity is normalized between 0 and 1 and will be one for perfectly synchronized phases and zero for independent signals (Tass et al. 1998; Mardia and Jupp 2000; Mormann et al. 2000).

Often the phase of a signal cannot be measured directly. In that case, the Hilbert transform can be used to calculate a 90° phase-shifted version of the measured time series, from which the instantaneous phase can be obtained:
$$ {X}_h(t)=\frac{1}{\uppi}P.V.{\displaystyle \underset{-\infty }{\overset{\infty }{\int }}\frac{X\left(\uptau \right)}{t-\uptau}}\;\mathrm{d}\uptau $$
where P.V. refers to Cauchy’s principal value. This leads to
$$ V(t)=X(t)+i{X}_h(t)=A(t){e}^{i\Phi (t)} $$

where i is the imaginary unit; by Euler's formula, the analytic signal V(t) can be written as a complex exponential with instantaneous amplitude A(t) and instantaneous phase Φ(t). Alternative approaches to obtain the phases are conceivable based on, e.g., wavelet transformations (Le van Quyen et al. 2001; Bandrivskyy et al. 2004). Note that this transformation is applicable for signals in which a frequency is well defined; different approaches are needed for broadband signals (see below) or point processes (Smirnov et al. 2007).
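As a minimal sketch of this pipeline (frequencies and signal names are arbitrary illustrative choices), the instantaneous phase can be obtained from SciPy's Hilbert transform and plugged into the mean phase coherence defined above:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    # Analytic signal V(t) = x(t) + i x_h(t); the phase is its argument
    return np.angle(hilbert(x - x.mean()))

def mean_phase_coherence(phi1, phi2, n=1, m=1):
    # |R| = |mean of exp(i (n phi1 - m phi2))|, normalized between 0 and 1
    return np.abs(np.mean(np.exp(1j * (n * phi1 - m * phi2))))

t = np.linspace(0, 100, 5000)
x = np.sin(2 * np.pi * 1.0 * t)
y = np.sin(2 * np.pi * 1.0 * t + 0.7)   # constant phase lag: 1:1 synchronized
z = np.sin(2 * np.pi * 1.37 * t)        # different frequency: no 1:1 locking
R_xy = mean_phase_coherence(instantaneous_phase(x), instantaneous_phase(y))
R_xz = mean_phase_coherence(instantaneous_phase(x), instantaneous_phase(z))
```

R_xy is close to one, while R_xz is close to zero because the phase difference winds through many cycles.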

Multivariate Phase Synchronization

If multiple signals are being analyzed, a multivariate phase synchronization, also referred to as partial phase synchronization, can be used. For an N-dimensional process, the following matrix of pairwise interactions can be generated:
$$ R={\left({R}_{ij}^{n,m}\right)}_{i,j=1,\dots, N} $$
This is called the synchronization matrix containing all pairwise phase synchronization measures. The inverse PR = R −1 of this matrix leads to the definition of the n:m partial phase synchronization index
$$ {R}_{kl\mid \mathbf{Y}}=\frac{\left| P{R}_{kl}\right|}{\sqrt{ P{R}_{kk}\; P{R}_{ll}}} $$

between oscillators k and l. It is conditioned on the remaining processes which are summarized as Y. It can be analytically shown (Dahlhaus 2000; Schelter et al. 2006b) that this matrix inversion partializes the information of the third processes, as introduced in the “General Concept” section. Indirect interactions are characterized by a vanishing partial phase synchronization. If the bivariate phase synchronization index R kl is considerably different from zero while the corresponding multivariate partial phase synchronization index R kl | Y is approximately zero, then the connection is most likely due to indirect coupling between the processes k and l (Schelter et al. 2006b). A rigorous statistical test using natural surrogates or twin surrogates (Nawrath et al. 2010) can be used to determine significance.
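The partialization can be sketched with a toy common-driver configuration (all numbers are illustrative): oscillator 1 drives oscillators 2 and 3, which are not directly coupled, so the bivariate index between 2 and 3 is high while the partial index nearly vanishes. As a simplification, complex-valued pairwise indices ⟨e^{i(Φ_k−Φ_l)}⟩ are used to build the synchronization matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50000
# Common driver: phase 1 is a noisy rotation; 2 and 3 are noisy copies of it
phi1 = np.cumsum(1.0 + 0.05 * rng.standard_normal(T))
phi2 = phi1 + 0.4 * rng.standard_normal(T)
phi3 = phi1 + 0.4 * rng.standard_normal(T)

Z = np.exp(1j * np.vstack([phi1, phi2, phi3]))
R = (Z @ Z.conj().T) / T          # R_kl = <exp(i (phi_k - phi_l))>, diagonal = 1
PR = np.linalg.inv(R)

def partial_index(k, l):
    # |PR_kl| / sqrt(PR_kk PR_ll), conditioned on the remaining process
    return np.abs(PR[k, l]) / np.sqrt(np.abs(PR[k, k] * PR[l, l]))

R23 = np.abs(R[1, 2])              # bivariate index 2-3: high but spurious
R23_given_1 = partial_index(1, 2)  # partialized on oscillator 1: near zero
```

The bivariate index wrongly suggests a 2-3 connection; the partial index exposes it as indirect, mirroring the Roessler example discussed next in the text.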


Three coupled stochastic Roessler oscillators (Roessler 1976) are an example of a system of weakly coupled self-sustained phase-coherent stochastic oscillators:
$$ {\dot{X}}_j=-{\omega}_j\;{Y}_j-{Z}_j+\left[{\displaystyle \sum_{i,i\ne j}\;}{\varepsilon}_{j,i}\;\left({X}_i-{X}_j\right)\right]+{\sigma}_j\;{\eta}_j $$
$$ {\dot{Y}}_j={\omega}_j\;{X}_j+a\;{Y}_j $$
$$ {\dot{Z}}_j=b+\left({X}_j-c\right)\;{Z}_j $$

with i,j = 1,…,3 and parameters are set to a = 0.15, b = 0.2, c = 10, ω 1 = 1.03, ω 2 = 1.01, and ω 3 = 0.99 yielding chaotic behavior in the deterministic case. The noise term, η j , is Gaussian distributed with mean of zero and standard deviation σ j = 1.5. Both the bidirectional couplings between oscillators 1 and 3 and 1 and 2 are varied between 0 and 0.3. The oscillators 2 and 3 are not directly coupled.
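A sketch of how this system can be simulated numerically (Euler-Maruyama scheme; the fixed coupling strengths, step size, and duration below are illustrative choices within the stated ranges):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c = 0.15, 0.2, 10.0
omega = np.array([1.03, 1.01, 0.99])
sigma = 1.5
eps = np.zeros((3, 3))
eps[0, 1] = eps[1, 0] = 0.1   # bidirectional 1 <-> 2 (illustrative strength)
eps[0, 2] = eps[2, 0] = 0.1   # bidirectional 1 <-> 3; no direct 2 <-> 3 coupling

dt, n = 0.005, 40000
X = np.zeros((3, n)); Y = np.zeros((3, n)); Z = np.zeros((3, n))
X[:, 0] = 1.0
for t in range(1, n):
    x, y, z = X[:, t - 1], Y[:, t - 1], Z[:, t - 1]
    # sum_i eps_ji (x_i - x_j): diffusive coupling in the x-component
    coupling = (eps * (x[None, :] - x[:, None])).sum(axis=1)
    X[:, t] = x + dt * (-omega * y - z + coupling) + sigma * np.sqrt(dt) * rng.standard_normal(3)
    Y[:, t] = y + dt * (omega * x + a * y)
    Z[:, t] = z + dt * (b + (x - c) * z)
```

The resulting X components can then be fed into the phase extraction and (partial) synchronization analysis described in this section.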

The 1:1 mean phase coherence and the partial phase synchronization indices are depicted in Fig. 1. The bivariate synchronization indices R 12 and R 13 increase with increasing coupling strength, indicating phase synchronization (Fig. 1a, above the diagonal). Once the coupling between oscillators 1 and 2 as well as 1 and 3 is sufficiently strong, a nonvanishing bivariate synchronization index R 23 suggests a coupling between oscillators 2 and 3 (Fig. 1a, above the diagonal). This high but spurious phase synchronization is caused by the common influence of oscillator 1. In the same figure (Fig. 1a, below the diagonal), the results of the partial phase synchronization analysis are shown. While R 12|3 and R 13|2 are essentially unchanged compared to the bivariate indices, R 23|1 almost always stays below 0.1, indicating that the bivariate synchronization is spurious and that there is no direct coupling between oscillators 2 and 3. Hence, the true underlying (multivariate) network structure is correctly revealed by the analysis (Fig. 1b).
Fig. 1

(a) Phase synchronization as well as partial phase synchronization analysis. (b) Corresponding network as inferred from the observations

Phase Dynamics Modeling

Another approach to reconstructing networks is to estimate the coupling functions H 1(Φ 1, Φ 2) and H 2(Φ 2, Φ 1) from observations of the dynamics. These reconstructions can, for instance, be based on approximating the functions H 1 and H 2 with trigonometric functions. With this approach, reliable detection of interactions (Smirnov et al. 2007 and references therein) and thereby reconstruction of the network topology are possible (Kralemann et al. 2011).

Recurrence-Based Approaches

If the processes are not phase-coherent, which is typically the case for many observed systems, the approach based on the direct calculation of the phases is not feasible, because neither the phase nor the onset of phase synchronization can be explicitly defined. Bivariate analysis of noncoherent systems has been approached by testing synchronization using an alternative notion of the phase, i.e., a phase that builds on the general idea of curvature (Osipov et al. 2003). However, such a definition of the phase restricts the analysis to systems where the phase trajectory corresponds to a curve with a positive curvature on some projection plane. An alternative approach is based on recurrence analysis.

Recurrence Analysis

A synchronization measure based on the recurrences of a trajectory can also be used. This approach considers phase synchronization in a statistical sense. The notion of recurrences in phase space was originally introduced by Poincare (1890). A representation of these recurrences is given by the recurrence matrix (Eckmann et al. 1987; Marwan et al. 2007):
$$ RP{\left(\varepsilon \right)}_{i,j}=\Theta \left(\varepsilon -\left\Vert \mathbf{X}(i)-\mathbf{X}(j)\right\Vert \right),\kern1em i,j=1,\dots, n $$
where X(i) and X(j) denote points of a trajectory of length n in phase space, Θ(.) is the Heaviside function, ||·|| is an appropriate norm, and ε is a predefined threshold. If only a scalar time series has been observed, the state of the system, i.e., the vector X(i), can typically be reconstructed by delay embedding (Packard et al. 1980; Takens 1981; Sauer et al. 1991), where each dimension represents the data at some time lag τ in the past:
$$ \mathbf{X}(i)=\left[x(i),x\left(i-\uptau \right),\dots, x\left(i-k\uptau \right)\right] $$
When one section of the trajectory stays within a small tube of radius ε around another, that section corresponds to a diagonal line of 1's in the recurrence matrix. Thus, an estimate of the probability
$$ \widehat{p}\left(\varepsilon, \uptau \right)=R{R}_{\tau}\left(\varepsilon \right)=\frac{1}{L-\uptau}{\displaystyle \sum_{i=1}^{L-\uptau}R}P{\left(\varepsilon \right)}_{i,i+\uptau} $$

that a system recurs to the ε-neighborhood of a former point of the trajectory after τ time steps is given by the diagonal-wise calculated recurrence rate RR τ (ε).

This measure can be considered a generalized autocorrelation function (Romano et al. 2005), since it also describes higher-order correlations among the points of the trajectory. The adaptation of the time scales of two interacting systems can be measured by comparing their recurrence-rate curves: if the positions of their maxima coincide, the time scales of the systems have adapted to each other. To quantify this, the cross-correlation coefficient of the probabilities of recurrence is calculated as follows:
$$ CP{R}_{kl}=\left\langle {\widehat{p}}_k\left(\varepsilon, \uptau \right)\;{\widehat{p}}_l\left(\varepsilon, \uptau \right)\right\rangle, \kern1em CP{R}_{kl}\in \left[0,1\right] $$

In the case when the two systems have locked phase dynamics, the probability of recurrence is simultaneously high for both systems and the CPR kl will differ from zero significantly. This synchronization measure has been demonstrated to be effective for detecting coupling in a general class of non-phase-coherent and nonstationary systems and even for time series corrupted by strong noise (Romano et al. 2005).
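A compact sketch of the diagonal-wise recurrence rate and the CPR index for scalar signals (without delay embedding; the threshold ε, frequencies, and lengths are illustrative). The angle brackets in the definition above are implemented here as the correlation coefficient of the mean-centered, normalized recurrence-rate curves:

```python
import numpy as np

def recurrence_rate(x, eps, max_tau):
    # Diagonal-wise recurrence rate RR_tau from the recurrence matrix
    RP = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
    return np.array([np.diag(RP, k=tau).mean() for tau in range(1, max_tau)])

def cpr(x, y, eps=0.1, max_tau=300):
    # Cross-correlation coefficient of the two recurrence probability curves
    px = recurrence_rate(x, eps, max_tau)
    py = recurrence_rate(y, eps, max_tau)
    px = (px - px.mean()) / px.std()
    py = (py - py.mean()) / py.std()
    return np.abs(np.mean(px * py))

t = np.arange(0, 60, 0.05)
cpr_same = cpr(np.sin(t), np.sin(t + 1.0))   # same time scale, phase-lagged
cpr_diff = cpr(np.sin(t), np.sin(1.3 * t))   # different time scales
```

`cpr_same` is high because the recurrence maxima of the two signals occur at the same lags, irrespective of the phase shift; `cpr_diff` is markedly lower.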

Multivariate Recurrence Analysis

Recurrence analysis can also be applied to a multivariate system with N interdependent and non-phase-coherent processes. To treat the multivariate case, the matrix of the bivariate CPR synchronization indices
$${CPR = (CPR_{ij} )_{i,j = 1,...,N}}$$
is used. This matrix has the same symmetry properties as the synchronization matrix for phase-coherent systems. Using the inverse CPR −1, the generalized partial synchronization index can be calculated:
$$ CP{R}_{kl\Big|\mathbf{Y}}=\frac{\left| CP{R}_{kl}^{-1}\right|}{\sqrt{ CP{R}_{kk}^{-1}\kern0.24emCP{R}_{ll}^{-1}}} $$

This measure quantifies phase synchronization between two oscillators k and l, conditioned on third processes Y of a multivariate possibly non-phase-coherent and noisy oscillatory process (Nawrath et al. 2009).

Information Theoretic Approaches

Information theory-based approaches complement the nonlinear approaches presented above. The mutual information between time series X and Y can be calculated as follows:
$$ I\left(X;Y\right)={\displaystyle \sum_{y\in Y}{\displaystyle \sum_{x\in X}p}}\left(x,y\right) \log \frac{p\left(x,y\right)}{p(x)p(y)} $$
where p(x) represents the probability that the signal X will have the amplitude x and p(x,y) is the joint probability that the time series X will have the value x while simultaneously the time series Y will have the value y. Transfer entropy calculates the direction of information flow between two systems by testing how the information from the past of the two systems predicts the future of one of the signals:
$$ {T}_{X\to Y}\left(k,l\right)={\displaystyle \sum p}\left({y}_{t+1},{y}_t^{(k)},{x}_t^{(l)}\right) \log \frac{p\left({y}_{t+1}\Big|{y}_t^{(k)},{x}_t^{(l)}\right)}{p\left({y}_{t+1}\Big|{y}_t^{(k)}\right)} $$

This approach utilizes fundamental concepts of information theory (Kantz and Schreiber 1997). Here, \( p\left({y}_{t+1}\left|{y}_t^{(k)},{x}_t^{(l)}\right.\right) \) denotes the conditional probability of y t+1 given the previous k values of the time series Y, \( {y}_t^{(k)} \), and the previous l values of X, \( {x}_t^{(l)} \).

The mutual information and the transfer entropy as presented are bivariate; however, multivariate extensions are straightforward, as the (conditional) probability density functions can include more processes. The challenge is that these densities become high dimensional, and their estimation quickly becomes computationally intensive and data demanding. Solutions are possible and can, for instance, be found in Palus and Stefanovska (2003), Runge et al. (2012), or Wibral et al. (2012) and references therein.
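A plug-in histogram estimate of the transfer entropy for k = l = 1 can be sketched as follows, using the chain-rule decomposition T = H(Y⁺,Y) − H(Y) + H(Y,X) − H(Y⁺,Y,X) into joint entropies. The toy series, bin count, and sample size are illustrative, and plug-in estimators carry a positive bias for small samples:

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    # T_{X->Y} with one-step histories, estimated from quantile-binned data
    edges = np.linspace(0, 1, bins + 1)[1:-1]
    xd = np.digitize(x, np.quantile(x, edges))
    yd = np.digitize(y, np.quantile(y, edges))
    y_next, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    def H(*cols):
        # Joint Shannon entropy (in nats) of discrete columns
        _, counts = np.unique(np.vstack(cols).T, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    return H(y_next, y_past) - H(y_past) + H(y_past, x_past) - H(y_next, y_past, x_past)

rng = np.random.default_rng(4)
N = 20000
x = rng.standard_normal(N)
y = np.concatenate([[0.0], x[:-1] + 0.1 * rng.standard_normal(N - 1)])  # y copies x with delay 1

T_xy = transfer_entropy(x, y)   # clearly positive
T_yx = transfer_entropy(y, x)   # near zero (only estimator bias)
```

The asymmetry of the two estimates recovers the simulated direction of information flow.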

Linear Approaches

Although linear approaches are, strictly speaking, not part of the nonlinear data-based modeling approaches, we include them here. They are indeed very powerful in revealing the true underlying network structure even though, technically speaking, they are not applicable. Due to space limitations, we can only discuss one of the linear approaches, although many more exist in the literature. We refer the reader, for instance, to Schelter et al. (2006a) and references therein.

The concept of Granger causality (Granger 1969) is based on the commonsense conception that causes precede their effects in time. Partial directed coherence is defined as
$$ \left|{\uppi}_{i\leftarrow j}\left(\omega \right)\right|=\frac{\left|{\mathbf{A}}_{ij}\left(\omega \right)\right|}{\sqrt{{\displaystyle \sum_m{\left|{\mathbf{A}}_{mj}\left(\omega \right)\right|}^2}}} $$
$$ \mathbf{A}\left(\omega \right)=I-{\displaystyle \sum_{r=1}^p\mathbf{a}}(r)\;{\mathrm{e}}^{-\mathrm{i}\omega r} $$
for an N-dimensional vector autoregressive process of order p (VAR[p] process)
$$ \mathbf{X}(t)={\displaystyle \sum_{r=1}^p\mathbf{a}}(r)\kern0.5em \mathbf{X}\left(t-r\right)+\varepsilon (t), $$

where ε(t) is a multivariate Gaussian white noise process with covariance matrix Σ. Partial directed coherence was introduced by Baccala and Sameshima (2001) as a frequency-domain measure of Granger causality.

Partial directed coherence |π i←j (ω)| provides a measure for the directed, linear influences from X j (t) onto X i (t) at frequency ω. It is estimated by fitting an N-dimensional VAR[p] model to the data and directly using the above equations with the parameter estimates substituted for the true parameters.
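The estimation procedure can be sketched end to end: simulate a hypothetical bivariate VAR[2] process with unidirectional coupling from process 1 to process 2, fit the coefficient matrices a(r) by least squares, and evaluate |π i←j (ω)| from the definitions above (all coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 10000, 2
X = np.zeros((2, T))
for t in range(p, T):
    # Process 1 drives process 2; there is no feedback
    X[0, t] = 1.4 * X[0, t - 1] - 0.9 * X[0, t - 2] + rng.standard_normal()
    X[1, t] = 0.8 * X[1, t - 1] - 0.5 * X[1, t - 2] + 0.5 * X[0, t - 1] + rng.standard_normal()

# Least-squares fit of the VAR[p] coefficient matrices a(1), ..., a(p)
Yt = X[:, p:]
Zt = np.vstack([X[:, p - r:T - r] for r in range(1, p + 1)])
A_hat = Yt @ Zt.T @ np.linalg.inv(Zt @ Zt.T)
a = [A_hat[:, 2 * (r - 1):2 * r] for r in range(1, p + 1)]

def pdc(omega):
    # A(omega) = I - sum_r a(r) exp(-i omega r); columns normalized per definition
    A = np.eye(2, dtype=complex)
    for r in range(1, p + 1):
        A = A - a[r - 1] * np.exp(-1j * omega * r)
    return np.abs(A) / np.sqrt((np.abs(A) ** 2).sum(axis=0))[None, :]

P = pdc(0.8)   # P[i, j] = |pi_{i <- j}|, evaluated near the resonance of process 1
```

The off-diagonal entry for the simulated direction comes out large, while the reverse direction stays near zero, consistent with the construction.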

Particularly important for nonlinear systems is the corresponding pointwise α-significance level for the partial directed coherence (Schelter et al. 2006c)
$$ {\left(\frac{{\widehat{C}}_{ij}\left(\omega \right)\;{\upchi}_{1,1-\alpha}^2}{n\;{\displaystyle {\sum}_m{\left|{\widehat{\mathbf{A}}}_{mj}\left(\omega \right)\right|}^2}}\right)}^{1/2} $$

where χ 2 1,1−α is the 1−α quantile of the χ 2-distribution with one degree of freedom. The values \( {C}_{ij}\left(\omega \right)={\varSigma}_{ii}\left[{\displaystyle \sum_{l,m=1}^p{\mathbf{H}}_{jj}}\left(l,m\right)\left( \cos \left( l\omega \right) \cos \left( m\omega \right)+ \sin \left( l\omega \right) \sin \left( m\omega \right)\right)\right] \)

can be calculated based on the inverse H = V −1 of the covariance matrix V of the VAR[p] process X(t), which is composed of the entries

V ij (l, m) = cov(X i (t − l), X j (t − m)) for i,j = 1,…,N and l,m = 1,…,p (Luetkepohl 1993).

We emphasize that the significance level depends on the order of the vector autoregressive process; higher model orders lead to higher significance levels, which in turn means that the ability to detect weak couplings decreases.

Partial directed coherence is by construction a multivariate analysis technique. It has been developed for linear stochastic processes. In neurophysiology, however, the time series are generated by nonlinear stochastic processes. In most of these cases, the dependence structure is reflected in the linear second-order structure; for this reason, partial directed coherence discloses the network structure also for nonlinear processes.

As an example system for which to test these tools, we will introduce a system of coupled stochastic van der Pol oscillators (van der Pol 1922):
$$ \begin{array}{l}{\ddot{X}}_1=\mu \left(1-{X}_1^2\right){\dot{X}}_1-{\omega}_1^2{X}_1+\sigma {\eta}_1+{\varepsilon}_{12}\left({X}_2-{X}_1\right)+{\varepsilon}_{13}\left({X}_3-{X}_1\right)+{\varepsilon}_{14}\left({X}_4-{X}_1\right)\\ {}{\ddot{X}}_2=\mu \left(1-{X}_2^2\right){\dot{X}}_2-{\omega}_2^2{X}_2+\sigma {\eta}_2+{\varepsilon}_{23}\left({X}_3-{X}_2\right)+{\varepsilon}_{24}\left({X}_4-{X}_2\right)+{\varepsilon}_{21}\left({X}_1-{X}_2\right)\\ {}{\ddot{X}}_3=\mu \left(1-{X}_3^2\right){\dot{X}}_3-{\omega}_3^2{X}_3+\sigma {\eta}_3+{\varepsilon}_{34}\left({X}_4-{X}_3\right)+{\varepsilon}_{31}\left({X}_1-{X}_3\right)+{\varepsilon}_{32}\left({X}_2-{X}_3\right)\\ {}{\ddot{X}}_4=\mu \left(1-{X}_4^2\right){\dot{X}}_4-{\omega}_4^2{X}_4+\sigma {\eta}_4+{\varepsilon}_{41}\left({X}_1-{X}_4\right)+{\varepsilon}_{42}\left({X}_2-{X}_4\right)+{\varepsilon}_{43}\left({X}_3-{X}_4\right)\end{array} $$

where the oscillators are given coefficients ω1 = 1.5, ω2 = 1.48, ω3 = 1.53, ω4 = 1.44, σ = 1.5, and Gaussian distributed white noise ηi. This system is simulated for n = 50,000 data points for each process to generate signals to which the tools for reconstructing the network topology are applied. Note that although the four oscillators are diffusively coupled, so that the coupling terms themselves are linear, each individual oscillator is nonlinear. The nonlinearity parameter is fixed to μ = 5, leading to a highly nonlinear behavior of the van der Pol oscillators. The unidirectional and bidirectional couplings between these four nonidentical oscillators are set to ε12 = ε21 = 0.2, ε24 = ε42 = 0.2, ε31 = 0.2, and ε34 = 0.2.
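With the stated parameters, the system can be integrated with a simple Euler-Maruyama scheme. A minimal sketch follows; the step size dt and the random seed are our illustrative choices and are not specified in the entry:

```python
import numpy as np

# Euler-Maruyama integration of the four coupled stochastic
# van der Pol oscillators with the parameters given in the text.
rng = np.random.default_rng(0)
mu, sigma, dt, n = 5.0, 1.5, 0.005, 50_000
omega = np.array([1.5, 1.48, 1.53, 1.44])

# Coupling matrix: eps[i, j] is the strength of the influence of
# oscillator j+1 on oscillator i+1; unlisted couplings are zero.
eps = np.zeros((4, 4))
eps[0, 1] = eps[1, 0] = 0.2   # eps_12 = eps_21
eps[1, 3] = eps[3, 1] = 0.2   # eps_24 = eps_42
eps[2, 0] = 0.2               # eps_31
eps[2, 3] = 0.2               # eps_34

x = np.zeros(4)               # positions X_i
v = np.zeros(4)               # velocities dX_i/dt
traj = np.empty((n, 4))
for t in range(n):
    # diffusive coupling: sum_j eps_ij * (X_j - X_i)
    coupling = (eps * (x[None, :] - x[:, None])).sum(axis=1)
    acc = mu * (1.0 - x**2) * v - omega**2 * x + coupling
    v = v + acc * dt + sigma * np.sqrt(dt) * rng.standard_normal(4)
    x = x + v * dt
    traj[t] = x
```

The resulting array `traj` plays the role of the four observed signals to which the network-reconstruction tools are applied.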

The causal influences are summarized in the graph in Fig. 2a. Estimated partial directed coherences as well as the spectra are given in Fig. 2b. The order of the vector autoregressive process is chosen to be p = 200. This high model order is required to reproduce the spectra with sufficient accuracy, in contrast to nonparametric spectral estimates, which do not require the choice of a model order. The corresponding 5 % significance levels are indicated by gray lines. Partial directed coherence correctly detects the causal influences in the van der Pol system. Note that the significance level depends on the investigated frequency: at the peaks in the spectra of the van der Pol oscillators, it is slightly higher than at the remaining frequencies. Thus, only those partial directed coherences are statistically significant that correspond to a direct causal influence between the oscillators.
Fig. 2

Causal influences for the example of four coupled stochastic van der Pol oscillators (a). Corresponding spectra (diagonal) and partial directed coherence (off-diagonal) (b). The corresponding 5 % significance levels are indicated by gray lines. The simulated causal influences are reproduced correctly since only those partial directed coherences that correspond to direct interactions are statistically significant at the oscillation frequencies


So far, the multivariate analysis techniques presented here have been applied to simulated time series. To illustrate their performance in physiological applications, examples of patients suffering from essential tremor are presented.

The pathophysiological basis of essential tremor, a common neurological disease with a prevalence of 0.4–4 % (Louis et al. 1998), is not precisely known. Essential tremor manifests itself mainly in the upper limbs when the hands are held in a postural outstretched position. Usually the trembling frequency of the hands is 4–10 Hz. To elucidate the tremor-generating mechanisms in essential tremor, relationships between the brain and the trembling muscles are of particular interest. For unilaterally activated tremor, tremor-correlated cortical activity contralateral to the tremor side has been revealed by magnetoencephalography (MEG) and electroencephalography (EEG) for Parkinsonian tremor (Volkmann et al. 1996; Hellwig et al. 1999) and by electroencephalography for essential tremor (Hellwig et al. 2001). In bilaterally activated essential tremor, however, a more complex interrelation structure has been observed by simultaneous electroencephalographic recordings from the scalp and electromyographic (EMG) recordings from the extensor muscles (Hellwig et al. 2003). In addition to contralateral coherences, ipsilateral coherences between the sensorimotor cortex and the muscles have also been detected.

In this study, patients were seated in a comfortable chair with their forearms supported while their hands were held outstretched to activate the tremor. Data were sampled at 1,000 Hz. The EEG data as well as the EMG data were preprocessed by applying a band-pass filter between 30 and 200 Hz to avoid aliasing and movement artifacts; the EMG was rectified afterward. The EEG was additionally high-pass filtered above 0.5 Hz to avoid baseline fluctuations and anti-aliasing filtered. Scalp electrodes over the left and right sensorimotor cortex and the EMG of the left and right wrist extensors are analyzed.
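The preprocessing chain described above can be sketched with zero-phase Butterworth filters; the cutoff frequencies and sampling rate are taken from the text, whereas the filter order (4) is our assumption, since the entry does not specify it:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0  # sampling rate in Hz, as stated in the text

def preprocess_emg(emg):
    """Band-pass 30-200 Hz (zero-phase), then full-wave rectify."""
    sos = butter(4, [30.0, 200.0], btype='bandpass', fs=fs, output='sos')
    return np.abs(sosfiltfilt(sos, emg))

def preprocess_eeg(eeg):
    """High-pass above 0.5 Hz to remove baseline fluctuations,
    plus a low-pass below the Nyquist frequency as anti-aliasing."""
    sos_hp = butter(4, 0.5, btype='highpass', fs=fs, output='sos')
    sos_lp = butter(4, 200.0, btype='lowpass', fs=fs, output='sos')
    return sosfiltfilt(sos_lp, sosfiltfilt(sos_hp, eeg))
```

Zero-phase (forward-backward) filtering is used here so that the filters do not introduce phase shifts, which would otherwise distort directionality estimates such as partial directed coherence.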

It is important to investigate whether the cortex imposes its oscillatory activity on the muscles via the corticospinal tract or whether the muscle activity is merely reflected in the cortex via proprioceptive afferences. To gain deeper insight into the tremor-generating mechanisms, partial directed coherence is applied to data recorded from these patients.

In Fig. 3, the results of the partial directed coherence analysis for the two EMG and the two EEG channels are shown for an exemplary patient. Influences in both directions, from the cortex to the muscles and vice versa, are observed. A significant influence from the right EEG onto the left EMG, from the left EMG onto the right EEG, from the left EEG onto the right EMG, and from the right EMG onto the left EEG is detected. In particular, the partial directed coherence from the right EMG to the left EEG is rather large.
Fig. 3

Partial directed coherence analysis for a patient suffering from essential tremor. A significant influence from the right EEG onto the left EMG (3rd row/1st column), from the left EMG onto the right EEG (1st row/3rd column), from the left EEG onto the right EMG (4th row/2nd column), and from the right EMG onto the left EEG (2nd row/4th column) is detected

The graph summarizing the results of the partial directed coherence analysis for the essential tremor patient is presented in Fig. 4. The influences between the two EEGs are marked by dashed arrows because partial directed coherence is not significant exactly at the tremor frequency but over a range of frequencies close to it. Since causal influences from both EEGs to the corresponding contralateral EMGs are present, participation of the motor cortex in tremor generation is strongly indicated. Moreover, there is also a significant partial directed coherence from the EMG to the contralateral EEG at the tremor frequency, corresponding to a feedback from the muscles to the somatosensory cortex. For both patients, unexpected ipsilateral interrelations are not detected by the partial directed coherence analysis.
Fig. 4

Graph for partial directed coherence analysis of the tremor application of Fig. 3. The arrows indicate a direct and directed interrelation at the tremor frequency. The dashed arrows indicate possible interactions between the EEGs that could not be unambiguously shown


Several approaches to data-based modeling are conceivable. In this entry, we focused on a few multivariate approaches that make it possible to distinguish direct from indirect interactions. Some even provide the direction of the interactions, for instance, Granger-causality-based concepts. In applications to observed nonlinear systems, the type of data, its dimension, noise contamination, and stationarity typically determine which technique can be applied and which conclusions can be drawn. Simulation studies tailored to the problem at hand should typically be performed prior to any analysis.


  1. Baccala LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474
  2. Bandrivskyy A, Bernjak A, McClintock PVE, Stefanovska A (2004) Wavelet phase coherence analysis: application to skin temperature and blood flow. Cardiovasc Eng 4:89–93
  3. Boccaletti S, Kurths J, Osipov G, Valladares DL, Zhou CS (2002) The synchronization of chaotic systems. Phys Rep 366:1–101
  4. Dahlhaus R (2000) Graphical interaction models for multivariate time series. Metrika 51:157–172
  5. Ding M, Chen Y, Bressler SL (2006) Granger causality: basic theory and application to neuroscience. In: Schelter B, Winterhalder M, Timmer J (eds) Handbook of time series analysis. Wiley-VCH, Weinheim, pp 437–460
  6. Eckmann J-P, Oliffson Kamphorst S, Ruelle D (1987) Recurrence plots of dynamical systems. Europhys Lett 4:973–977
  7. Eichler M (2005) A graphical approach for evaluating effective connectivity in neural systems. Phil Trans R Soc B 360:953–967
  8. Eichler M (2007) Granger-causality and path diagrams for multivariate time series. J Econom 137:334–353
  9. Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37:424–438
  10. Hellwig B, Haeussler S, Schelter B, Lauk M, Guschlbauer B, Timmer J, Luecking CH (2001) Tremor correlated cortical activity in essential tremor. Lancet 357:519–523
  11. Hellwig B, Schelter B, Guschlbauer B, Timmer J, Luecking CH (2003) Dynamic synchronisation of central oscillators in essential tremor. Clin Neurophysiol 114:1462–1467
  12. Kantz H, Schreiber T (1997) Nonlinear time series analysis. Cambridge University Press, Cambridge
  13. Kocarev L, Parlitz U (1996) Generalized synchronization, predictability, and equivalence of unidirectionally coupled dynamical systems. Phys Rev Lett 76:1816–1819
  14. Kralemann B, Pikovsky A, Rosenblum M (2011) Reconstructing phase dynamics of oscillator networks. Chaos 21:025104
  15. Le van Quyen M, Martinerie J, Navarro V, Boon P, D'Have M, Adam C, Renault B, Varela F, Baulac M (2001) Anticipation of epileptic seizures from standard EEG recordings. Lancet 357:183–188
  16. Louis ED, Ford B, Wendt KJ, Cameron G (1998) Clinical characteristics of essential tremor: data from a community-based study. Mov Disord 13:803–808
  17. Luetkepohl H (1993) Introduction to multiple time series analysis. Springer, New York
  18. Mardia K, Jupp P (2000) Directional statistics. Wiley, West Sussex
  19. Marwan N, Carmen Romano M, Thiel M, Kurths J (2007) Recurrence plots for the analysis of complex systems. Phys Rep 438:237–329
  20. Mormann F, Lehnertz K, David P, Elger CE (2000) Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Physica D 144:358–369
  21. Nawrath J, Romano MC, Thiel M, Kiss IZ, Wickramasinghe M, Timmer J, Kurths J, Schelter B (2010) Distinguishing direct and indirect interactions in oscillatory networks with multiple time scales. Phys Rev Lett 104:038701
  22. Osipov GV, Hu B, Zhou C, Ivanchenko MV, Kurths J (2003) Three types of transitions to phase synchronization in chaotic oscillators. Phys Rev Lett 91:024101
  23. Packard N, Crutchfield J, Farmer D, Shaw R (1980) Geometry from a time series. Phys Rev Lett 45:712
  24. Palus M, Stefanovska A (2003) Direction of coupling from phases of interacting oscillators: an information theoretic approach. Phys Rev E 67:055201
  25. Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64:821–824
  26. Pikovsky A, Rosenblum M, Kurths J (2000) Phase synchronization in regular and chaotic systems. Int J Bifurc Chaos 10:2291–2305
  27. Pikovsky A, Rosenblum M, Kurths J (2001) Synchronization – a universal concept in nonlinear sciences. Cambridge University Press, Cambridge
  28. Poincare H (1890) Sur les equations de la dynamique et le probleme de trois corps. Acta Mathematica 13:1–270
  29. Roessler OE (1976) An equation for continuous chaos. Phys Lett A 57:397–398
  30. Romano MC, Thiel M, Kurths J, Kiss IZ, Hudson JL (2005) Detection of synchronization for non-phase-coherent and non-stationary data. Europhys Lett 71:466–472
  31. Rosenblum MG, Pikovsky AS, Kurths J (1996) Phase synchronization of chaotic oscillators. Phys Rev Lett 76:1804–1807
  32. Rosenblum MG, Pikovsky AS, Kurths J (1997) From phase to lag synchronization in coupled chaotic oscillators. Phys Rev Lett 78:4193–4196
  33. Rosenblum MG, Pikovsky A, Kurths J, Schaefer C, Tass PA (2001) Phase synchronization: from theory to data analysis. In: Moss F, Gielen S (eds) Handbook of biological physics, vol 4, Neuroinformatics. Elsevier, Amsterdam, pp 279–321
  34. Runge J, Heitzig J, Marwan N, Kurths J (2012) Quantifying causal coupling strength: a lag-specific measure for multivariate time series related to transfer entropy. Phys Rev E 86:061121
  35. Sauer T, Yorke J, Casdagli M (1991) Embedology. J Stat Phys 65:579–616
  36. Schelter B, Winterhalder M, Timmer J (eds) (2006a) Handbook of time series analysis. Wiley-VCH, Berlin
  37. Schelter B, Winterhalder M, Dahlhaus R, Kurths J, Timmer J (2006b) Partial phase synchronization for multivariate synchronizing systems. Phys Rev Lett 96:208103
  38. Schelter B, Winterhalder M, Eichler M, Peifer M, Hellwig B, Guschlbauer B, Luecking CH, Dahlhaus R, Timmer J (2006c) Testing for directed influences among neural signals using partial directed coherence. J Neurosci Methods 152:210–219
  39. Smirnov D, Schelter B, Winterhalder M, Timmer J (2007) Revealing direction of coupling between neuronal oscillators from time series: phase dynamics modeling versus partial directed coherence. Chaos 17:013111
  40. Takens F (1981) Detecting strange attractors in turbulence. In: Rand DA, Young L-S (eds) Dynamical systems and turbulence (Warwick 1980), vol 898, Lecture notes in mathematics. Springer, Berlin, pp 366–381
  41. Tass PA, Rosenblum MG, Weule J, Kurths J, Pikovsky A, Volkmann J, Schnitzler A, Freund HJ (1998) Detection of n:m phase locking from noisy data: application to magnetoencephalography. Phys Rev Lett 81:3291–3295
  42. van der Pol B (1922) On oscillation-hysteresis in a simple triode generator. Phil Mag 43:700–719
  43. Volkmann J, Joliot M, Mogilner A, Ioannides AA, Lado F, Fazzini E, Ribary U, Llinas R (1996) Central motor loop oscillations in Parkinsonian resting tremor revealed by magnetoencephalography. Neurology 46:1359–1370
  44. Wibral M, Wollstadt P, Meyer U, Pampu N, Priesemann V, Vicente R (2012) Revisiting Wiener's principle of causality – interaction-delay reconstruction using transfer entropy. In: Proceedings of the 34th annual international conference of the IEEE EMBS (EMBC 2012), San Diego
  45. Wiesenfeldt M, Parlitz U, Lauterborn W (2001) Mixed state analysis of multivariate time series. Int J Bifurc Chaos 11:2217–2226

Further Reading

  1. Hellwig B, Haeussler S, Lauk M, Koester B, Guschlbauer B, Kristeva-Feige R, Timmer J, Luecking CH (2000) Tremor-correlated cortical activity detected by electroencephalography. Electroencephalogr Clin Neurophysiol 111:806–809

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Institute for Complex Systems and Mathematical Biology, University of Aberdeen, Aberdeen, Scotland, UK
  2. Department of Physics, Institute for Complex Systems and Mathematical Biology, Aberdeen, UK