# Determining Network Structure from Data: Nonlinear Modeling Methods

**DOI:** https://doi.org/10.1007/978-1-4614-7320-6_439-2

## Keywords

Essential Tremor · Granger Causality · Phase Synchronization · Causal Influence · Synchronization Index

## Definition

In this entry we focus on the inference of network structures from data. One possible approach to studying networks is to model the nodes, such as neurons, couple them into networks, run simulations, and observe the resulting network behavior. This approach requires a priori assumptions about the constituent parts; for instance, Hodgkin-Huxley neurons may be coupled and the resulting network behavior investigated. The model behaviors can be compared to the measured neuronal signals through statistical analysis. However, this approach provides only indirect information about the network structure. An alternative approach is to use parametric, semiparametric, and nonparametric analyses of the observed signals to reconstruct the network connections. Such analyses are essential tools for systems in which the network structure is not known, for instance, when anatomical connections are unknown or insufficiently characterized.

A particular challenge when inferring network structure from data is that analysis techniques for investigating interactions are often bivariate, examining only pairwise connections. Such analyses often cannot distinguish between direct and indirect interactions. A good measure of coupling between nodes should be able to make this distinction, which requires a multivariate data analysis approach. For linear systems, multivariate data analysis approaches have been developed over the past decades. For nonlinear systems, however, the development of multivariate analysis techniques is in its infancy: nonlinear approaches typically require computationally intensive algorithms and large data sets, which has limited their application.

In this entry, we provide a survey of approaches for inferring the network structure from nonlinear systems using nonlinear data-based modeling.

## Detailed Description

### Introduction

Mathematicians and physicists often generate models of complex networks starting from first principles. However, for complex biological systems found in neuroscience, the dynamics of the neuronal behavior underlying the measured signals is poorly understood, rendering an approach based on first principles infeasible. An understanding of the behavior can then only be based upon the analysis of measured data of the dynamics, i.e., time series, such as the EEG or other population-level signals, and point processes, such as action potential timings. This approach is called data-based modeling.

Time series and point process analysis have different roots, originating in mathematics, physics, and engineering. The underlying assumptions about the sources of the signals differ, resulting in different approaches to the analysis. In mathematical statistics, the models have been based on linear stochastic systems; in physics, the models have been of nonlinear deterministic systems. While methodological developments evolved independently across disciplines, over the past decade cross-fertilization has resulted in novel approaches to data-based modeling of nonlinear stochastic systems (Schelter et al. 2006b).

Reconstruction of networks depends on the detection of coupling between areas, particularly *causal influences* between two different processes, for instance, between brain areas or between brain areas and the periphery. The goal is to detect changes in coupling that may be caused by the underlying diseases. Detection of changes in coupling may lead to improved diagnosis and treatment strategies especially for neurological diseases.

The approach proposed here considers the brain as a dynamic system from which we measure the activity through the electroencephalogram (EEG), magnetoencephalogram (MEG), or another representation of the underlying neuronal activity. By applying network analysis to data sets recorded from healthy subjects and from patients suffering from neurological diseases, we hope to gain an understanding of the underlying mechanisms generating these dysfunctions (Volkmann et al. 1996; Tass et al. 1998; Hellwig et al. 2001). However, there is a wide variety of applications beyond the neurosciences to which the linear as well as nonlinear data analysis techniques presented here can be applied successfully.

Various linear analysis techniques have been proposed to determine interdependency between dynamic processes and to determine causal influences in multivariate systems (Schelter et al. 2006a). Using the Fourier transform, a signal can be converted to the frequency domain and interdependencies analyzed using the cross-spectrum and coherence. But these tools are not sufficient to adequately describe interdependence within a multivariate system, where correlation between two processes may arise not because they are directly coupled but because they receive common inputs. To enable a differentiation between direct and indirect influences in multivariate systems, multivariate linear approaches have been developed (Dahlhaus 2000; Baccala and Sameshima 2001; Ding et al. 2006).

Furthermore, multivariate network analysis can uncover directed interactions, which enables deeper insight into the basic mechanisms underlying such networks. These tools can determine not only whether nodes are connected but also the direction of the connection. In cases in which communication is present in both directions, the two directions may even use distinct frequency bands. To detect directed connections, appropriate analysis techniques are needed. Granger causality (Granger 1969) is utilized to determine causal influences when the coupling is linear. This probabilistic approach to determining causality is based on the principle that a cause precedes its effect in time, formulated in terms of predictability. Granger causality analysis proceeds by fitting autoregressive models to the signals and determining whether the error in predicting the next time point of one signal can be reduced by knowledge of the other. A graphical approach for modeling Granger-causal relationships in multivariate processes has been discussed (Eichler 2007). More generally, networks may be determined by evaluating Granger causality between the nodes; unlike linear cross-correlation, this approach tries to address spurious causalities caused by confounding from unobserved processes (Eichler 2005).
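The prediction-based idea can be sketched in a few lines. The bivariate system below is purely illustrative (the coefficients and the model order p = 2 are arbitrary choices): x drives y, so including the past of x should reduce the prediction error for y.

```python
# Minimal sketch of bivariate Granger causality: does the past of x improve
# the prediction of y? All signal definitions here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y

def residual_variance(target, regressors):
    """Least-squares fit of target on regressors; return residual variance."""
    coef, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return (target - regressors @ coef).var()

p = 2
Y = y[p:]
own_past = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
x_past = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
joint_past = np.hstack([own_past, x_past])

var_restricted = residual_variance(Y, own_past)
var_full = residual_variance(Y, joint_past)
# Granger index: log ratio of prediction-error variances; values clearly
# above zero mean the past of x carries information about the future of y.
gc_x_to_y = np.log(var_restricted / var_full)
print(gc_x_to_y)
```

The same calculation with the roles of x and y exchanged would yield a value close to zero, since y does not feed back into x in this toy system.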

Nonlinear systems can show behaviors that are impossible in linear systems (Pikovsky et al. 2001); for example, nonlinear systems can synchronize. In the seventeenth century, Huygens observed synchronization between two pendulum clocks mounted on the same wall. These clocks are coupled self-sustained oscillators, and the process of synchronization is an adaptation of certain characteristics of the two processes. The oscillations of the clocks were always in antiphase, and this synchronization was restored after perturbation (Pikovsky et al. 2000); for general systems, different phase relations are conceivable. A weaker form of synchronization has been observed between two coupled chaotic oscillators: such oscillators can synchronize their phases while their amplitudes stay almost uncorrelated (Pikovsky et al. 2001; Boccaletti et al. 2002; Wiesenfeldt et al. 2001). Several forms of synchronization have been described, ranging from phase synchronization via lag synchronization to almost complete synchronization (Boccaletti et al. 2002). Generalized synchronization is the most general case, in which one signal is related to the other through an arbitrary function.

While a battery of various analysis techniques is available for linear stochastic systems, for nonlinear systems techniques are still in their infancy. This entry is dedicated to the inference of the network structure based on the observation of nonlinear processes utilizing nonlinear data-based modeling approaches. The entry is subdivided into Phase-Based Approaches, Recurrence-Based Approaches, Information Theoretic Approaches, and Linear Approaches. We should note that it is impossible to provide a complete survey of all approaches.

We begin this entry by introducing the general concept underlying most of the approaches to provide a quick introduction into the topic.

### General Concept

One challenge when inferring the network structure from data lies in distinguishing direct and indirect interactions. Given a network with direct and indirect interactions, a bivariate analysis can be applied to every pairwise combination of nodes to detect all connections; the resulting network could be fully connected. Since indirect connections are often weaker than direct ones, one may hope that, at some finite statistical power, only the direct connections are detected. Such an approach, however, relies on the assumptions that the indirect connections are the weakest in the network and that the statistical power is actually too low to detect them. In general, both assumptions are not valid and bear the risk that erroneous, *false-positive* conclusions are drawn.

Multivariate approaches attempt to correct for these problems of bivariate approaches. The idea behind the multivariate approach is that if all contributing processes have been observed, it should be possible to infer whether or not a connection is indirect. This is done by partializing out the information of the third processes. Practically, there are several ways to do this; here, we just provide an outline. To test whether two systems are directly or indirectly connected, assume for simplicity that there are only the two systems in question and one other system that could potentially influence them. Because all systems have been observed, it is possible to determine the amount of information transferred from the third system onto the two systems of interest. If this information from the third process suffices to fully explain the information transfer between the first two systems, then we can conclude that the connection is indirect. If the third process cannot explain all the information transfer, then we conclude that the two systems are directly connected. Whether the connection between the two systems of interest is causal can be determined by using only past points of each process to predict the future behavior of the other processes.

In the following, we present some concepts of how the partialization is performed in practice. A complete coverage of all possible approaches is beyond the scope of this entry.
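A toy calculation illustrates the partialization idea in its simplest, linear form (the signals below are hypothetical): a common driver z induces a strong bivariate correlation between x and y, while the partial correlation, obtained from the inverse covariance matrix, is close to zero, correctly flagging the x-y link as indirect.

```python
# Partialization in a nutshell: bivariate vs. partial correlation for a
# common-driver configuration z -> x, z -> y (illustrative signals).
import numpy as np

rng = np.random.default_rng(1)
n = 100000
z = rng.normal(size=n)
x = 0.8 * z + 0.6 * rng.normal(size=n)   # driven by z only
y = 0.8 * z + 0.6 * rng.normal(size=n)   # driven by z only

C = np.cov(np.vstack([x, y, z]))
P = np.linalg.inv(C)                     # precision matrix
bivariate_xy = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
partial_xy = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])
print(bivariate_xy, partial_xy)          # large vs. near zero
```

The nonlinear techniques discussed below generalize exactly this step: some quantity measured between all pairs is conditioned on the remaining processes.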

### Phase-Based Approaches

Synchronization analysis is a common approach to detect interactions between nonlinear self-sustained oscillators (Pikovsky et al. 2001). Following the observations and pioneering work of Huygens, synchrony has been observed in many different systems, including systems with a limit cycle or a chaotic attractor. Several types of synchronization have been observed for these systems, ranging from phase synchronization, as the weakest form of synchronization, via lag synchronization, to generalized or complete synchrony (Rosenblum et al. 1996; Rosenblum et al. 1997; Kocarev and Parlitz 1996; Pecora and Carroll 1990).

Phase synchronization analysis is a powerful tool because it detects even weak coupling between oscillators. For example, in some chaotic oscillators, very weak coupling can synchronize the phases but not the amplitudes (Rosenblum et al. 1996). To quantify the process of synchronization, different measures have been proposed (Tass et al. 1998; Mormann et al. 2000; Rosenblum et al. 2001). The first approach presented here is a measure based on circular statistics, the so-called mean phase coherence (Mormann et al. 2000). We introduce it by first reviewing phase synchronization in weakly coupled self-sustained oscillators. For a more detailed introduction to synchronization, including phase synchronization, we refer to the literature, e.g., Pikovsky et al. (2001).

#### Self-Sustained Oscillators

A self-sustained oscillator can be described by a differential equation

\( {\dot{\mathbf{X}}}_i(t)={\mathbf{f}}_i\left({\mathbf{X}}_i(t),{\boldsymbol{\alpha}}_i(t),{\mathbf{U}}_i(t)\right) \)

where **X**(*t*) is a multidimensional variable to ensure an oscillatory behavior. The external influence **U**(*t*) as well as the parameters **α**(*t*) are vectors. External driving is neglected in the following, and the parameters **α**_{ i } are assumed to be constant in time.

Coupling two such oscillators yields

\( {\dot{\mathbf{X}}}_1(t)={\mathbf{f}}_1\left({\mathbf{X}}_1(t)\right)+{\varepsilon}_{1,2}\,{\mathbf{h}}_1\left({\mathbf{X}}_1(t),{\mathbf{X}}_2(t)\right) \)

\( {\dot{\mathbf{X}}}_2(t)={\mathbf{f}}_2\left({\mathbf{X}}_2(t)\right)+{\varepsilon}_{2,1}\,{\mathbf{h}}_2\left({\mathbf{X}}_2(t),{\mathbf{X}}_1(t)\right) \)

where the coefficient ε_{ i,j } quantifies the coupling from oscillator *j* onto oscillator *i*. If ε_{ i,j } is nonzero, the system is considered coupled. **h**_{1}(.) and **h**_{2}(.) are the coupling functions and can be arbitrary. Usually, diffusive coupling is assumed, i.e., **h**_{1}(**X**_{1}(*t*), **X**_{2}(*t*)) = (**X**_{2}(*t*) − **X**_{1}(*t*)), and analogously for **h**_{2}.

#### Phase Synchronization

The phase of a limit cycle oscillator is a monotonically increasing function with Φ(*t*)|_{ t = pT } = *p*2π = *pωT*, where *p* denotes the number of completed cycles, *T* is the time needed for one complete cycle, and *ω* is the eigenfrequency of the oscillator. More generally, the phase of any oscillation can be obtained from the differential equation

\( {\dot{\Phi}}_i(t)={\omega}_i,\quad i=1,\dots, N \)

where the *ω*_{ i } are the frequencies of the uncoupled oscillators, with *i* denoting the *i*-th oscillator. For two coupled oscillators, the phase difference for integers *n* and *m* can be written as follows:

\( {\Phi}_{1,2}^{n,m}(t)=n\,{\Phi}_1(t)-m\,{\Phi}_2(t) \)

This equation represents the generalized phase difference (Pikovsky et al. 2001).

If *ε*_{ 2,1 } = *ε*_{ 1,2 } = *ε*, then with Φ_{1,2}^{ n,m } = *n*Φ_{1} − *m*Φ_{2} and Δ*ω* = *nω*_{1} − *mω*_{2}, the differential equation for the phases can be rewritten as

\( {\dot{\Phi}}_{1,2}^{n,m}(t)=\Delta \omega +\varepsilon\, H\left({\Phi}_{1,2}^{n,m}(t)\right) \)

with a new periodic function *H*(.).

If this equation has a stable fixed point, the phase difference is constant in time. Thus, both phases maintain a fixed relationship and the system is considered to be *n*:*m* phase synchronized.

In the presence of noise, there will be variability between the phases, but the phase synchronization will still be preserved.

#### The Mean Phase Coherence

If the weakly coupled self-sustained oscillators are phase synchronized, the above equation can be reformulated to yield a single number that quantifies the phase synchrony. If there is a peak in the distribution of the cyclic phase difference Φ_{1,2}^{ n,m } mod 2π, it indicates that the two phases have a coherent motion; if the distribution is flat, the two phases evolve independently. Based on circular statistics, the sharpness of the phase difference distribution can be quantified by the mean phase coherence

\( {R}_{1,2}^{n,m}=\left|\frac{1}{T}{\displaystyle \sum_{t=1}^T \exp \left(\mathrm{i}\,{\Phi}_{1,2}^{n,m}(t)\right)}\right| \)

This quantity is normalized between 0 and 1 and will be one for perfectly synchronized phases and zero for independent signals (Tass et al. 1998; Mardia and Jupp 2000; Mormann et al. 2000). Here, i is the imaginary unit; through Euler's formula, each phase difference is written as a complex phasor with phase Φ(*t*). The phases themselves can be obtained, for instance, from the analytic signal based on the Hilbert transform. Alternative approaches to obtain the phases are conceivable based on, e.g., wavelet transformations (Le van Quyen et al. 2001; Bandrivskyy et al. 2004). Note that this transformation is applicable for signals in which a frequency is well defined; different approaches are needed for broadband signals (see below) or point processes (Smirnov et al. 2007).

#### Multivariate Phase Synchronization

For an *N*-dimensional process, the following matrix of pairwise interactions can be generated:

\( \mathbf{R}={\left({R}_{kl}^{n,m}\right)}_{k,l=1,\dots, N} \)

the *synchronization matrix* containing all pairwise phase synchronization measures. The inverse **PR** = **R**^{−1} of this matrix leads to the definition of the *n*:*m* partial phase synchronization index

\( {R}_{kl\mid \mathbf{Y}}=\frac{\left|{\mathbf{PR}}_{kl}\right|}{\sqrt{{\mathbf{PR}}_{kk}\,{\mathbf{PR}}_{ll}}} \)

between oscillators *k* and *l*. It is conditioned on the remaining processes, which are summarized as **Y**. It can be shown analytically (Dahlhaus 2000; Schelter et al. 2006b) that this matrix inversion partializes out the information of the third processes, as introduced in the "General Concept" section. Indirect interactions are characterized by a vanishing partial phase synchronization: if the bivariate phase synchronization index R_{ kl } is considerably different from zero while the corresponding multivariate partial phase synchronization index R_{ kl | Y } is approximately zero, then the connection is most likely due to indirect coupling between the processes *k* and *l* (Schelter et al. 2006b). A rigorous statistical test using natural surrogates or twin surrogates (Nawrath et al. 2010) can be used to determine significance.
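The matrix-inversion step is only a few lines of code. The synchronization matrix below is a hypothetical result of a common-driver network (oscillator 1 driving 2 and 3), chosen so that the indirect 2-3 link is exposed by the partial index.

```python
# Partial phase synchronization index: invert the matrix of bivariate
# indices and normalize its off-diagonal entries. The matrix R below is a
# hypothetical measurement, not real data.
import numpy as np

R = np.array([[1.00, 0.80, 0.80],
              [0.80, 1.00, 0.64],
              [0.80, 0.64, 1.00]])

PR = np.linalg.inv(R)

def partial_index(PR, k, l):
    """Partial phase synchronization index R_{kl|Y}."""
    return np.abs(PR[k, l]) / np.sqrt(PR[k, k] * PR[l, l])

print(partial_index(PR, 0, 1))  # direct link 1-2: stays clearly nonzero
print(partial_index(PR, 1, 2))  # indirect link 2-3: drops to ~0
```

In practice the entries of R would be the estimated bivariate mean phase coherences, and significance would be assessed with surrogate data as described above.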

#### Example

The approach is illustrated with a system of three coupled, noisy Rössler oscillators (Roessler 1976), with *i*,*j* = 1,…,3 and parameters set to *a* = 0.15, *b* = 0.2, *c* = 10, *ω*_{1} = 1.03, *ω*_{2} = 1.01, and *ω*_{3} = 0.99, yielding chaotic behavior in the deterministic case. The noise terms *η*_{ j } are Gaussian distributed with mean zero and standard deviation *σ*_{ j } = 1.5. The bidirectional couplings between oscillators 1 and 3 as well as between 1 and 2 are varied between 0 and 0.3; oscillators 2 and 3 are not directly coupled.

The bivariate synchronization index *R*_{ 12 }, as well as *R*_{ 13 }, increases with increasing coupling strength, indicating phase synchronization (Fig. 1a, upper triangle). Once there is a sufficiently strong coupling between oscillators 1 and 2 as well as between 1 and 3, indirect coupling is seen between 2 and 3 with a nonvanishing bivariate synchronization index *R*_{ 23 } (Fig. 1a, upper triangle). This high but spurious phase synchronization is caused by the common influence from oscillator 1. In the same figure (Fig. 1a, below the diagonal), the results of the partial phase synchronization analysis are shown. While *R*_{ 12 | 3 } and *R*_{ 13 | 2 } are essentially unchanged compared to the bivariate synchronization indices, *R*_{ 23 | 1 } stays almost always below 0.1, identifying the bivariate result as spurious synchronization and indicating the absence of a direct coupling between oscillators 2 and 3. Hence, the true underlying (multivariate) network structure is correctly revealed by the analysis (Fig. 1b).

#### Phase Dynamics Modeling

Another approach to reconstructing networks is to estimate the coupling functions *H*_{1}(Φ_{1}, Φ_{2}) and *H*_{2}(Φ_{2}, Φ_{1}) from observations of the dynamics. Such reconstructions can, for instance, be based on approximating the functions *H*_{1} and *H*_{2} with trigonometric functions. With this approach, a reliable detection of interactions (Smirnov et al. 2007 and references therein) and thereby the reconstruction of the network topology have been shown to be possible (Kralemann et al. 2011).
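A minimal sketch of this idea, under simplifying assumptions (known phases, a single sine coupling term, and frequencies and coupling strength chosen for illustration): simulate two coupled phase oscillators and recover the coupling function of the first by least squares on a small trigonometric basis.

```python
# Phase dynamics modeling: regress phase increments on a trigonometric
# basis of the phase difference. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 20000
w1, w2, eps = 1.0, 1.5, 0.2
phi1 = np.zeros(n)
phi2 = np.zeros(n)
for t in range(n - 1):
    phi1[t + 1] = phi1[t] + dt * (w1 + eps * np.sin(phi2[t] - phi1[t])) \
        + 0.05 * np.sqrt(dt) * rng.normal()
    phi2[t + 1] = phi2[t] + dt * w2 + 0.05 * np.sqrt(dt) * rng.normal()

# Fit dphi1/dt ~ a0 + a1 sin(phi2 - phi1) + a2 cos(phi2 - phi1)
dphi = np.diff(phi1) / dt
d = (phi2 - phi1)[:-1]
B = np.column_stack([np.ones(d.size), np.sin(d), np.cos(d)])
coef, *_ = np.linalg.lstsq(B, dphi, rcond=None)
print(coef)  # close to [w1, eps, 0]
```

A nonzero estimated coupling coefficient (here the sine term) indicates a directed influence of oscillator 2 on oscillator 1; repeating the fit for oscillator 2 reveals the absence of the reverse coupling.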

### Recurrence-Based Approaches

If the processes are not phase-coherent, which is typically the case for many observed systems, the approach based on the direct calculation of the phases is not feasible, because neither the phase nor the onset of phase synchronization can be defined unambiguously. Bivariate analysis of noncoherent systems has been approached by testing synchronization using an alternative notion of the phase, i.e., one that builds on the general idea of curvature (Osipov et al. 2003). However, such a definition of the phase restricts the analysis to systems where the phase trajectory corresponds to a curve with a positive curvature on some projection plane. An alternative approach is based on recurrence analysis.

#### Recurrence Analysis

A recurrence plot (Eckmann et al. 1987; Marwan et al. 2007) is defined by the matrix

\( {\mathbf{R}}_{i,j}\left(\varepsilon \right)=\Theta \left(\varepsilon -\left\Vert \mathbf{X}(i)-\mathbf{X}(j)\right\Vert \right),\quad i,j=1,\dots, n \)

where **X**(*i*) and **X**(*j*) denote points of a trajectory of length *n* in phase space, Θ(.) is the Heaviside function, ‖·‖ is an appropriate norm, and *ε* is a predefined threshold. If only a scalar time series *x*(*i*) has been observed, the state of the system, i.e., the vector **X**(*i*), can typically be reconstructed by delay embedding (Packard et al. 1980; Takens 1981; Sauer et al. 1991), where each dimension represents the data at some time lag τ in the past:

\( \mathbf{X}(i)=\left(x(i),x\left(i-\tau \right),\dots, x\left(i-\left(m-1\right)\tau \right)\right) \)

The probability that a system recurs to the ε-neighborhood of a former point of the trajectory after *τ* time steps is given by the diagonal-wise calculated recurrence rate *RR*_{ τ }(*ε*). Comparing the recurrence-rate curves of two systems *k* and *l* over a range of lags yields the synchronization index *CPR*_{ kl }, the cross-correlation coefficient of the two curves.

In the case when the two systems have locked phase dynamics, the probability of recurrence is simultaneously high for both systems and the *CPR* _{ kl } will differ from zero significantly. This synchronization measure has been demonstrated to be effective for detecting coupling in a general class of non-phase-coherent and nonstationary systems and even for time series corrupted by strong noise (Romano et al. 2005).
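A stripped-down sketch of the CPR construction is given below. For brevity it uses one-dimensional signals without delay embedding, and the test rhythms, threshold, and lag range are illustrative choices.

```python
# CPR sketch: compute the tau-recurrence rate of each signal and correlate
# the resulting curves. Signals and parameters are illustrative.
import numpy as np

def recurrence_rate(x, eps, max_lag):
    """RR_tau: fraction of points recurring within eps after tau steps."""
    return np.array([np.mean(np.abs(x[:-tau] - x[tau:]) < eps)
                     for tau in range(1, max_lag + 1)])

t = np.arange(0, 100, 0.05)
x = np.sin(2 * np.pi * 0.5 * t)            # 0.5 Hz rhythm
y = np.sin(2 * np.pi * 0.5 * t + 1.0)      # same rhythm, shifted phase
z = np.sin(2 * np.pi * 0.73 * t)           # unrelated rhythm

rr_x = recurrence_rate(x, 0.1, 200)
# CPR: cross-correlation coefficient of the tau-recurrence-rate curves
cpr = np.corrcoef(rr_x, recurrence_rate(y, 0.1, 200))[0, 1]
cpr_indep = np.corrcoef(rr_x, recurrence_rate(z, 0.1, 200))[0, 1]
print(cpr, cpr_indep)  # high for the locked pair, lower otherwise
```

Because the recurrence rate depends only on the timing of returns, not on a well-defined phase, the same construction carries over to non-phase-coherent trajectories once a proper delay embedding is used.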

#### Multivariate Recurrence Analysis

The bivariate *CPR* index can be extended to *N* interdependent and non-phase-coherent processes. To treat the multivariate case, a matrix of the bivariate *CPR* synchronization indices is generated:

\( \mathbf{CPR}={\left({CPR}_{kl}\right)}_{k,l=1,\dots, N} \)

Using the inverse **CPR**^{−1}, the generalized partial synchronization index can be calculated:

\( {CPR}_{kl\mid \mathbf{Y}}=\frac{\left|{\mathbf{CPR}}_{kl}^{-1}\right|}{\sqrt{{\mathbf{CPR}}_{kk}^{-1}\,{\mathbf{CPR}}_{ll}^{-1}}} \)

This measure quantifies phase synchronization between two oscillators *k* and *l*, conditioned on third processes **Y** of a multivariate possibly non-phase-coherent and noisy oscillatory process (Nawrath et al. 2009).

### Information Theoretic Approaches

The mutual information

\( MI={\displaystyle \sum_{x,y}p\left(x,y\right)\, \log \frac{p\left(x,y\right)}{p(x)\,p(y)}} \)

quantifies the amount of information shared by two time series *X* and *Y*. Here, *p*(*x*) represents the probability that the signal *X* will have the amplitude *x*, and *p*(*x*, *y*) is the joint probability that the time series *X* will have the value *x* while simultaneously the time series *Y* will have the value *y*. Transfer entropy determines the direction of information flow between two systems by testing how the information from the past of the two systems predicts the future of one of the signals:

\( T{E}_{X\to Y}={\displaystyle \sum p\left({y}_{t+1},{y}_t^{(k)},{x}_t^{(l)}\right)\, \log \frac{p\left({y}_{t+1}\mid {y}_t^{(k)},{x}_t^{(l)}\right)}{p\left({y}_{t+1}\mid {y}_t^{(k)}\right)}} \)

This approach utilizes the fundamental concepts of information theory (Kantz and Schreiber 1997). The *p*(*y*_{ t+1 } | *y*_{ t }^{ (k) }, *x*_{ t }^{ (l) }) are conditional probability density functions giving the probability of *y*_{ t+1 } given the previous *k* values of *y* and the previous *l* values of *x*.

The mutual information and the transfer entropy as presented are bivariate; however, multivariate extensions are straightforward, as the (conditional) probability density functions can include further processes. The challenge is that estimating these higher-dimensional densities quickly becomes computationally intensive. Solutions are possible and can, for instance, be found in Palus and Stefanovska (2003), Runge et al. (2012), or Wibral et al. (2012) and references therein.
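For coarse-grained signals, the densities reduce to histogram counts and transfer entropy can be computed directly. The sketch below uses k = l = 1 and a binarized toy pair in which x drives y with one step of delay; all signal definitions are illustrative.

```python
# Histogram-based transfer entropy for binary series, k = l = 1.
# The coupled pair below is illustrative: x drives y with delay 1.
import numpy as np

rng = np.random.default_rng(4)
n = 200000
x = rng.integers(0, 2, size=n)
y = np.empty(n, dtype=int)
y[0] = 0
flip = rng.random(n) < 0.1                     # 10% transmission errors
y[1:] = np.where(flip[1:], rng.integers(0, 2, size=n - 1), x[:-1])

def transfer_entropy(src, dst):
    """TE(src -> dst) in nats, estimated from joint histogram counts."""
    trip = np.stack([dst[1:], dst[:-1], src[:-1]], axis=1)
    p_xyz, _ = np.histogramdd(trip, bins=(2, 2, 2))
    p_xyz /= p_xyz.sum()
    p_yz = p_xyz.sum(axis=0, keepdims=True)      # p(dst_t, src_t)
    p_xy = p_xyz.sum(axis=2, keepdims=True)      # p(dst_{t+1}, dst_t)
    p_y = p_xyz.sum(axis=(0, 2), keepdims=True)  # p(dst_t)
    ratio = p_xyz * p_y / (p_xy * p_yz)
    mask = p_xyz > 0
    return np.sum(p_xyz[mask] * np.log(ratio[mask]))

print(transfer_entropy(x, y))  # clearly positive: x predicts y
print(transfer_entropy(y, x))  # near zero: no information flow back
```

The asymmetry of the two estimates is what makes transfer entropy a directed measure; for continuous data, binning would be replaced by kernel or nearest-neighbor density estimates.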

### Linear Approaches

Although linear approaches are, strictly speaking, not part of the nonlinear data-based modeling approaches, we include them here: they are often very powerful in revealing the true underlying network structure even when, technically speaking, they are not applicable. Due to space limitations, we can only discuss one of the linear approaches, although many more exist in the literature. We refer the reader, for instance, to Schelter et al. (2006a) and references therein.

Consider an *N*-dimensional vector autoregressive process of order *p* (VAR[*p*] process)

\( \mathbf{X}(t)={\displaystyle \sum_{r=1}^p{\mathbf{a}}_r\,\mathbf{X}\left(t-r\right)}+\boldsymbol{\varepsilon}(t) \)

where *ε*(*t*) is a multivariate Gaussian white noise process with covariance matrix *Σ*. Based on the Fourier transform of the coefficient matrices

\( {\overline{A}}_{ij}\left(\omega \right)={\delta}_{ij}-{\displaystyle \sum_{r=1}^p{\left({\mathbf{a}}_r\right)}_{ij}\,{e}^{-\mathrm{i}\omega r}} \)

the partial directed coherence is defined as

\( \left|{\pi}_{i\leftarrow j}\left(\omega \right)\right|=\frac{\left|{\overline{A}}_{ij}\left(\omega \right)\right|}{\sqrt{{\displaystyle \sum_k{\left|{\overline{A}}_{kj}\left(\omega \right)\right|}^2}}} \)

This measure was introduced by Baccala and Sameshima (2001) as a frequency-domain measure of Granger causality.

Partial directed coherence |π_{ i ← j }(*ω*)| provides a measure for the directed, linear influences from *X* _{ j }(*t*) onto *X* _{ i }(*t*) at frequency *ω*. It is estimated by fitting an *N*-dimensional VAR[*p*] model to the data and directly using the above equations with the parameter estimates substituted for the true parameters.
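The estimation procedure can be sketched compactly: fit the VAR coefficients by least squares and normalize the Fourier-transformed coefficient matrix column-wise. The three-channel system below (channel 0 driving channel 1, channel 2 independent) and the evaluation frequency are illustrative.

```python
# Partial directed coherence from a least-squares VAR[p] fit.
# The simulated three-channel system is illustrative: 0 -> 1 only.
import numpy as np

rng = np.random.default_rng(5)
n, N, p = 20000, 3, 2
X = np.zeros((n, N))
for t in range(1, n):
    X[t, 0] = 0.5 * X[t - 1, 0] + rng.normal()
    X[t, 1] = 0.4 * X[t - 1, 1] + 0.5 * X[t - 1, 0] + rng.normal()
    X[t, 2] = 0.6 * X[t - 1, 2] + rng.normal()

# Stack lagged copies as regressors and solve for the coefficients
Z = np.hstack([X[p - k: n - k] for k in range(1, p + 1)])  # (n-p, N*p)
A, *_ = np.linalg.lstsq(Z, X[p:], rcond=None)              # (N*p, N)
a = A.T.reshape(N, p, N).transpose(1, 0, 2)                # a[r]: lag r+1 matrix

def pdc(a, omega):
    """|pi_{i<-j}(omega)|: column-normalized Fourier-transformed coefficients."""
    Abar = np.eye(N, dtype=complex)
    for r in range(p):
        Abar -= a[r] * np.exp(-1j * omega * (r + 1))
    return np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))

P = pdc(a, 0.3)
print(P[1, 0])  # 0 -> 1: clearly nonzero
print(P[0, 1])  # 1 -> 0: near zero
```

Scanning omega over a frequency grid yields the full partial directed coherence spectra, to which the significance level discussed below is then applied.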

An *α*-significance level for the partial directed coherence can be derived (Schelter et al. 2006c), where *χ*²_{ 1,1−α } denotes the 1−*α* quantile of the *χ*²-distribution with one degree of freedom. The values \( {C}_{ij}\left(\omega \right)={\varSigma}_{ii}\left[{\displaystyle \sum_{l,m=1}^p{\mathbf{H}}_{jj}}\left(l,m\right)\left( \cos \left( l\omega \right) \cos \left( m\omega \right)+ \sin \left( l\omega \right) \sin \left( m\omega \right)\right)\right] \)

can be calculated based on the inverse **H** = **V** ^{−1} of the covariance matrix **V** of the VAR[*p*] process **X**(*t*), which is composed of the entries

**V** _{ ij }(*l*, *m*) = cov(*X* _{ i }(*t* − *l*), *X* _{ j }(*t* − *m*)) for *i*,*j* = 1,…,*N* and *l*,*m* = 1,…,*p* (Luetkepohl 1993).

We emphasize that the significance level depends on the order of the vector autoregressive process; higher model orders lead to higher significance levels, which in turn means that the ability to detect weak couplings decreases for higher model orders.

Partial directed coherence by construction is a multivariate analysis technique. It has been developed for linear stochastic processes. However, in neurophysiology, nonlinear stochastic processes are generating the time series. In most of these cases, the dependence structure is reflected in the linear second-order structure; for this reason the partial directed coherence discloses the network structure also for nonlinear processes.

The performance of partial directed coherence for nonlinear signals is illustrated with a network of four coupled stochastic van der Pol oscillators, where the oscillators have coefficients *ω*_{1} = 1.5, *ω*_{2} = 1.48, *ω*_{3} = 1.53, *ω*_{4} = 1.44, σ = 1.5, and Gaussian distributed white noise *η*_{ i }. This system is simulated for *n* = 50,000 data points for each process to generate signals to which the tools for reconstructing the network topology are applied. Note that although the four oscillators are diffusively coupled, and their interactions therefore linear, the individual oscillators are nonlinear. The nonlinearity parameter *μ* is fixed to *μ* = 5, leading to a highly nonlinear behavior of the van der Pol oscillators. The unidirectional and bidirectional couplings between these four nonidentical oscillators are set to ε_{12} = ε_{21} = 0.2, ε_{24} = ε_{42} = 0.2, ε_{31} = 0.2, and ε_{34} = 0.2.

The partial directed coherence analysis is performed with model order *p* = 200. This high model order is required to reproduce the spectra with an accuracy comparable to nonparametric spectral estimates. The corresponding 5 % significance levels are indicated by gray lines. Partial directed coherence correctly detects the causal influences in the van der Pol system. Note that the significance level depends on the investigated frequency: at the peaks in the spectra of the van der Pol oscillators, the significance level is slightly higher than at the remaining frequencies. Thus, only those partial directed coherences are statistically significant that correspond to direct causal influences between the oscillators.

### Application

So far, the multivariate analysis techniques presented here have been applied to simulated time series. To illustrate their performance in physiological applications, examples of patients suffering from essential tremor are presented.

The pathophysiological basis of essential tremor, a common neurological disease with a prevalence of 0.4–4 % (Louis et al. 1998), is not precisely known. Essential tremor manifests itself mainly in the upper limbs when the hands are in a postural, outstretched position. Usually the trembling frequency of the hands is 4–10 Hz. To elucidate the tremor-generating mechanisms in essential tremor, relationships between the brain and the trembling muscles are of particular interest. For unilaterally activated tremor, tremor-correlated cortical activity contralateral to the tremor side has been revealed by magnetoencephalography (MEG) and electroencephalography (EEG) for Parkinsonian tremor (Volkmann et al. 1996; Hellwig et al. 1999) and by electroencephalography for essential tremor (Hellwig et al. 2001). In bilaterally activated essential tremor, however, a more complex interrelation structure has been observed in simultaneous electroencephalographic recordings from the scalp and electromyographic (EMG) recordings from the extensor muscles (Hellwig et al. 2003). In addition to contralateral coherences, ipsilateral coherences between the sensorimotor cortex and the muscles have also been detected.

In this study, patients were seated in a comfortable chair with their forearms supported while their hands were outstretched to activate the tremor. Data were sampled at 1,000 Hz. The EMG data were preprocessed by applying a band-pass filter between 30 and 200 Hz to avoid aliasing and movement artifacts and were rectified afterward. The EEG was high-pass filtered above 0.5 Hz to avoid baseline fluctuations and anti-aliasing filtered. Scalp electrodes over the left and right sensorimotor cortex and the EMG of the left and right wrist extensors are analyzed.

It is important to investigate whether the cortex imposes its oscillatory activity on the muscles via the corticospinal tract or whether the muscle activity is merely reflected in the cortex via proprioceptive afferences. To gain deeper insight into the tremor-generating mechanisms, partial directed coherence is applied to the data recorded from these patients.

### Summary

Several approaches to data-based modeling are conceivable. In this entry, we focused on a few multivariate approaches that make it possible to distinguish direct from indirect interactions. Some even provide the direction of the interactions, for instance, the Granger-causality-based concepts. In applications to observed nonlinear systems, the type of data, its dimension, noise contamination, and stationarity typically determine which technique can be applied and what conclusions can be drawn. Typically, simulation studies tailored to the problem at hand should be performed prior to any analysis.

## References

- Baccala LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybern 84:463–474PubMedCrossRefGoogle Scholar
- Bandrivskyy A, Bernjak A, McClintock PVE, Stefanovska A (2004) Wavelet phase coherence analysis: application to skin temperature and blood flow. Cardiovasc Eng 4:89–93CrossRefGoogle Scholar
- Boccaletti S, Kurths J, Osipov G, Valladares DL, Zhou CS (2002) The synchronization of chaotic systems. Phys Rep 366:1–101CrossRefGoogle Scholar
- Dahlhaus R (2000) Graphical interaction models for multivariate time series. Metrika 51:157–172CrossRefGoogle Scholar
- Ding M, Chen Y, Bressler SL (2006) Granger causality: basic theory and application to neuroscience. In: Schelter B, Winderhalder M, Timmer J (eds) Handbook of time series analysis. Wiley-VCH, Weinheim, pp 437–460CrossRefGoogle Scholar
- Eckmann J-P, Oliffson Kamphorst S, Ruelle D (1987) Recurrence plots of dynamical systems. Europhys Lett 4:973–977
- Eichler M (2005) A graphical approach for evaluating effective connectivity in neural systems. Phil Trans R Soc B 360:953–967
- Eichler M (2007) Granger-causality and path diagrams for multivariate time series. J Econom 137:334–353
- Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37:424–438
- Hellwig B, Haeussler S, Schelter B, Lauk M, Guschlbauer B, Timmer J, Luecking CH (2001) Tremor correlated cortical activity in essential tremor. Lancet 357:519–523
- Hellwig B, Schelter B, Guschlbauer B, Timmer J, Luecking CH (2003) Dynamic synchronisation of central oscillators in essential tremor. Clin Neurophysiol 114:1462–1467
- Kantz H, Schreiber T (1997) Nonlinear time series analysis. Cambridge University Press, Cambridge
- Kocarev L, Parlitz U (1996) Generalized synchronization, predictability, and equivalence of unidirectionally coupled dynamical systems. Phys Rev Lett 76:1816–1819
- Kralemann B, Pikovsky A, Rosenblum M (2011) Reconstructing phase dynamics of oscillator networks. Chaos 21:025104
- Le van Quyen M, Martinerie J, Navarro V, Boon P, D’Have M, Adam C, Renault B, Varela F, Baulac M (2001) Anticipation of epileptic seizures from standard EEG recordings. Lancet 357:183–188
- Louis ED, Ford B, Wendt KJ, Cameron G (1998) Clinical characteristics of essential tremor: data from a community-based study. Mov Disord 13:803–808
- Luetkepohl H (1993) Introduction to multiple time series analysis. Springer, New York
- Mardia K, Jupp P (2000) Directional statistics. Wiley, West Sussex
- Marwan N, Carmen Romano M, Thiel M, Kurths J (2007) Recurrence plots for the analysis of complex systems. Phys Rep 438:237–329
- Mormann F, Lehnertz K, David P, Elger CE (2000) Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Physica D 144:358–369
- Nawrath J, Romano MC, Thiel M, Kiss IZ, Wickramasinghe M, Timmer J, Kurths J, Schelter B (2010) Distinguishing direct and indirect interactions in oscillatory networks with multiple time scales. Phys Rev Lett 104:038701
- Osipov GV, Hu B, Zhou C, Ivanchenko MV, Kurths J (2003) Three types of transitions to phase synchronization in chaotic oscillators. Phys Rev Lett 91:024101
- Packard N, Crutchfield J, Farmer D, Shaw R (1980) Geometry from a time series. Phys Rev Lett 45:712
- Palus M, Stefanovska A (2003) Direction of coupling from phases of interacting oscillators: an information theoretic approach. Phys Rev E 67:055201
- Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64:821–824
- Pikovsky A, Rosenblum M, Kurths J (2000) Phase synchronization in regular and chaotic systems. Int J Bifurc Chaos 10:2291–2305
- Pikovsky A, Rosenblum M, Kurths J (2001) Synchronization – a universal concept in nonlinear sciences. Cambridge University Press, Cambridge
- Poincaré H (1890) Sur le problème des trois corps et les équations de la dynamique. Acta Mathematica 13:1–270
- Roessler OE (1976) An equation for continuous chaos. Phys Lett A 57:397–398
- Romano MC, Thiel M, Kurths J, Kiss IZ, Hudson JL (2005) Detection of synchronization for non-phase-coherent and non-stationary data. Europhys Lett 71:466–472
- Rosenblum MG, Pikovsky AS, Kurths J (1996) Phase synchronization of chaotic oscillators. Phys Rev Lett 76:1804–1807
- Rosenblum MG, Pikovsky AS, Kurths J (1997) From phase to lag synchronization in coupled chaotic oscillators. Phys Rev Lett 78:4193–4196
- Rosenblum MG, Pikovsky A, Kurths J, Schaefer C, Tass PA (2001) Phase synchronization: from theory to data analysis. In: Moss F, Gielen S (eds) Handbook of biological physics, vol 4, Neuroinformatics. Elsevier, Amsterdam, pp 279–321
- Runge J, Heitzig J, Marwan N, Kurths J (2012) Quantifying causal coupling strength: a lag-specific measure for multivariate time series related to transfer entropy. Phys Rev E 86:061121
- Sauer T, Yorke J, Casdagli M (1991) Embedology. J Stat Phys 65:579–616
- Schelter B, Winterhalder M, Timmer J (eds) (2006a) Handbook of time series analysis. Wiley-VCH, Berlin
- Schelter B, Winterhalder M, Dahlhaus R, Kurths J, Timmer J (2006b) Partial phase synchronization for multivariate synchronizing systems. Phys Rev Lett 96:208103
- Schelter B, Winterhalder M, Eichler M, Peifer M, Hellwig B, Guschlbauer B, Luecking CH, Dahlhaus R, Timmer J (2006c) Testing for directed influences among neural signals using partial directed coherence. J Neurosci Methods 152:210–219
- Smirnov D, Schelter B, Winterhalder M, Timmer J (2007) Revealing direction of coupling between neuronal oscillators from time series: phase dynamics modeling versus partial directed coherence. Chaos 17:013111
- Takens F (1981) Detecting strange attractors in turbulence. In: Rand DA, Young L-S (eds) Dynamical systems and turbulence (Warwick 1980), vol 898, Lecture notes in mathematics. Springer, Berlin, pp 366–381
- Tass PA, Rosenblum MG, Weule J, Kurths J, Pikovsky A, Volkmann J, Schnitzler A, Freund HJ (1998) Detection of n : m phase locking from noisy data: application to magnetoencephalography. Phys Rev Lett 81:3291–3295
- van der Pol B (1922) On oscillation-hysteresis in a simple triode generator. Phil Mag 43:700–719
- Volkmann J, Joliot M, Mogilner A, Ioannides AA, Lado F, Fazzini E, Ribary U, Llinas R (1996) Central motor loop oscillations in Parkinsonian resting tremor revealed by magnetoencephalography. Neurology 46:1359–1370
- Wibral M, Wollstadt P, Meyer U, Pampu N, Priesemann V, Vicente R (2012) Revisiting Wiener’s principle of causality – interaction-delay reconstruction using transfer entropy. In: Proceedings of the 34th annual international conference of the IEEE EMBS (EMBC 2012), San Diego
- Wiesenfeldt M, Parlitz U, Lauterborn W (2001) Mixed state analysis of multivariate time series. Int J Bifurc Chaos 11:2217–2226

## Further Reading

- Hellwig B, Haeussler S, Lauk M, Koester B, Guschlbauer B, Kristeva-Feige R, Timmer J, Luecking CH (2000) Tremor-correlated cortical activity detected by electroencephalography. Electroencephalogr Clin Neurophysiol 111:806–809