
Measures and Characterisations

  • Márton Karsai
  • Hang-Hyun Jo
  • Kimmo Kaski
Part of the SpringerBriefs in Complexity book series

Abstract

In this Chapter we present the theoretical description and characterisation of bursty human dynamics. Starting from the description of discrete time series, we go through the characteristic measures, such as the inter-event time distribution, burstiness parameter, memory coefficient, bursty train size distribution, and autocorrelation function, which have been borrowed or introduced over the last decade to describe bursty human systems from the individual to the network level. With these quantities, we show how to detect temporal inhomogeneities and long-range memory effects in the event sequences of human dynamics. At the same time we also introduce methods for system-level characterisation, mainly in the framework of temporal networks, which have been intensively studied in recent years to describe temporal human social behaviour. Finally, as human dynamics intrinsically shows cyclic patterns, such as daily and weekly cycles, methods for deciphering the effects of such cycles are also described.

In order to investigate the dynamics of human social behaviour quantitatively, we first introduce it as a time series and show how it can be characterised by means of various techniques of time series analysis. According to Box et al. [37], a time series is a set of observations that are made sequentially in time. The timing of an observation, denoted by t, can be either continuous or discrete. Since most datasets of human dynamics have recently been recorded digitally, we will here focus on the case of discrete timings. In this sense, the time series can be called an event sequence, where each event indicates an observation. In this series the ith event takes place at time \(t_i\) with the observation result \(z_i\), which can denote a number, a symbol, or even a set of numbers, depending on what has been measured. The sequence \(\{(t_i, z_i)\}\) can be simply denoted by \(z_t\). Some events may extend over a time interval, i.e., have a duration. For example, a phone call between two individuals may last from a few minutes to hours [105]. Since in many cases the time scale of event durations is much smaller than the time scales of our interest, event durations will be ignored in this monograph unless stated otherwise.

In most cases a time series refers to observations made at regular timings. For a fixed time interval \(t_\mathrm{int}\), the timings are set as \(t_i=t_0+ t_\mathrm{int}i\) for \(i=0, 1, 2,\cdots \). In many cases, \(t_0\) and \(t_\mathrm{int}\) are fixed at the outset, thus they can be ignored in time series analysis. An example of a time series with regular observations is the daily price of a stock in the stock market, constituting a financial time series [185]. Such time series are often analysed by using traditional techniques like the autocorrelation function, with the aim of revealing dependencies between observed values, which often show inhomogeneities and large fluctuations.

One also finds many cases in which the timings of observations are inhomogeneous, as in the case of emails sent by a user [21]. The fact that the occurrence of events is not regular in time leads to temporally inhomogeneous time series, potentially together with the variation of the observed values \(z_t\). In these cases we can talk about two kinds of inhomogeneities in observed time series. On the one hand, fluctuations can be associated both with temporal inhomogeneities and with the variation of observations. On the other hand, inhomogeneities can be associated only with the timings of events, not with the observation values. This is the case for several recent datasets, e.g., those related to communication or individual transactions. In such datasets events typically carry no content due to privacy reasons, thus only their timings are observable. In the following Sections we will mainly focus on the latter type of time series.

We remark that a time series with regular timings but with irregular observed values can be translated into a time series with irregular timings. This can be done, e.g., by considering only the observations with \(z_t \ge z_\mathrm{th}\), where \(z_\mathrm{th}\) denotes some threshold value. The resulting time series then contains only observations with extreme values, like crashes in financial markets. In the opposite direction, a time series with irregular timings can also be translated into one with regular timings, e.g., by binning the observations over a sufficiently large time window \(t_w\). More precisely, one can obtain the time series with regular timings as follows:
$$\begin{aligned} \tilde{z}_k\equiv \sum _{kt_w\le t<(k+1)t_w} z_t \end{aligned}$$
(2.1)
for all possible integers k. This constitutes a coarse-graining process for the time series.
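
As an illustration of this coarse-graining, the following minimal Python sketch bins an irregularly timed event sequence into regular windows of size \(t_w\); the names (coarse_grain, event_times) are ours and purely illustrative.

```python
# A minimal sketch of the coarse-graining in Eq. (2.1): observed values z_t
# are summed over consecutive windows of length t_w. Names are illustrative.
import numpy as np

def coarse_grain(event_times, values, t_w):
    """Sum the observed values over regular bins of width t_w."""
    event_times = np.asarray(event_times, dtype=float)
    values = np.asarray(values, dtype=float)
    k = (event_times // t_w).astype(int)       # bin index k for each event
    z_tilde = np.zeros(k.max() + 1)
    np.add.at(z_tilde, k, values)              # accumulate z_t into bin k
    return z_tilde

# Irregularly timed events with unit observation values:
t = [0.3, 0.4, 5.2, 5.3, 5.9, 14.1]
print(coarse_grain(t, np.ones(len(t)), t_w=5.0))   # -> [2. 3. 1.]
```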

2.1 Point Processes as Time Series with Irregular Timings

A time series with irregular timings can be interpreted as the realisation of a point process on the time axis. To introduce this interpretation, let us first disregard the information contained in the observation results \(z_t\), as it is not generally accessible, and consider only the timings of events. On the one hand, the event sequence with n events can be represented by an ordered list of event timings, i.e., \(ev(t_i)=\{t_0,t_1,\cdots , t_{n-1}\}\), where \(t_i\) denotes the timing of the ith event. On the other hand, the event sequence can be depicted as a binary signal x(t) that takes the value 1 at times \(t=t_i\), and 0 otherwise. For discrete timings, one can write the signal as
$$\begin{aligned} x(t)=\sum _{i=0}^{n-1}\delta _{t, t_i}, \end{aligned}$$
(2.2)
where \(\delta \) denotes the Kronecker delta.

2.1.1 The Poisson Process

The temporal Poisson process is a stochastic process, which is commonly used to model random processes such as the arrival of customers at a store or of packets at a router. It evolves via completely independent events, thus it can be interpreted as a type of continuous-time Markov process. In a Poisson process, the probability that n events occur within a bounded interval follows a Poisson distribution
$$\begin{aligned} P(n)=\frac{\lambda ^{n}e^{-\lambda }}{n!}, \end{aligned}$$
(2.3)
where \(\lambda \) denotes the average number of events per interval, which is equal to the variance of the distribution in this case. Since these stochastic processes consist of completely independent events, they have served as reference models when studying bursty systems. As we will see later, bursty temporal sequences emerge from fundamentally different dynamics, with strong temporal heterogeneities and temporal correlations. Any deviation from the corresponding Poisson model can indicate patterns induced by correlations or other factors like memory effects.

Throughout the monograph we are going to refer to two types of Poisson processes. One type, called the homogeneous Poisson process, is characterised by a constant event rate \(\lambda \), while the other type, called the non-homogeneous Poisson process, is defined such that the event rate varies over time, denoted by \(\lambda (t)\). For more precise definitions and discussion of the properties of Poisson processes, we refer the reader to the extensive literature addressing this process, e.g., Ref. [84]. We remark that Poisson processes and their variants have been studied in terms of shot noise in electric conductors and related systems [29, 42, 179].
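
Both types can be simulated in a few lines. The following Python sketch is a minimal illustration (the names lam, rate_fn, and t_max are ours), generating the homogeneous case from exponential inter-event times and the non-homogeneous case by the standard thinning method.

```python
# A minimal sketch of both Poisson processes discussed above. The homogeneous
# case exploits the fact that inter-event times are i.i.d. exponential; the
# non-homogeneous case uses thinning, assuming rate_fn(t) <= lam_max for all t.
import numpy as np

rng = np.random.default_rng(42)

def homogeneous_poisson(lam, t_max):
    """Event timings of a homogeneous Poisson process with rate lam on [0, t_max)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # draw an exponential inter-event time
        if t >= t_max:
            return np.array(times)
        times.append(t)

def nonhomogeneous_poisson(rate_fn, lam_max, t_max):
    """Thinning: simulate at the bounding rate lam_max and keep each candidate
    event at time t with probability rate_fn(t) / lam_max (rate_fn vectorised)."""
    candidates = homogeneous_poisson(lam_max, t_max)
    keep = rng.random(len(candidates)) < rate_fn(candidates) / lam_max
    return candidates[keep]

events = homogeneous_poisson(lam=1.0, t_max=1000.0)
print(len(events))                             # close to 1000 on average
```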

2.1.2 Characterisation of Temporal Heterogeneities

The temporal irregularities of an event sequence can be characterised in terms of various quantities. For this, a schematic diagram and a realistic example of such event sequences are depicted in Figs. 2.1 and 2.2a, respectively, where the example has been generated using a model for bursty dynamics [130].
Fig. 2.1

Schematic diagram of an event sequence, where each vertical line indicates the timing of an event. The inter-event time \(\tau \) is the time interval between two consecutive events. The residual time \(\tau _r\) is the time interval from a random moment (e.g., the timing annotated by the vertical arrow) to the next event. In most empirical datasets, the distributions of \(\tau \) are heavy-tailed

2.1.2.1 The Inter-event Time Distribution

In order to formally introduce these measures let us first consider an event sequence \(ev(t_i)\) and define the inter-event time as
$$\begin{aligned} \tau _i\equiv t_i-t_{i-1}, \end{aligned}$$
(2.4)
which is the time interval between two consecutive events at times \(t_{i-1}\) and \(t_i\) for \(i=1,\cdots , n-1\). Then we obtain a sequence of inter-event times, i.e., \(iet(\tau _i)=\{\tau _1,\cdots ,\tau _{n-1}\}\), where \(n\ge 2\) is assumed. By ignoring the order of \(\tau _i\)s, we can compute the probability density function of inter-event times, i.e., the inter-event time distribution \(P(\tau )\). For completely regular time series, all inter-event times are the same and equal to the mean inter-event time, denoted by \(\langle \tau \rangle \), thus the inter-event time distribution reads as follows:
$$\begin{aligned} P(\tau )=\delta (\tau -\langle \tau \rangle ), \end{aligned}$$
(2.5)
where \(\delta (\cdot )\) denotes the Dirac delta function. Here the standard deviation of inter-event times, denoted by \(\sigma \), is zero.
For the completely random and homogeneous Poisson process, it is easy to derive [84] that the inter-event times are exponentially distributed as follows:
$$\begin{aligned} P(\tau )=\frac{1}{\langle \tau \rangle } e^{-\tau /\langle \tau \rangle }, \end{aligned}$$
(2.6)
where \(\sigma =\langle \tau \rangle \). Note that the event rate introduced in Eq. (2.3) is \(\lambda =1/\langle \tau \rangle \).
Finally, in many empirical processes in nature and society, inter-event time distributions have commonly been observed to be broad, with heavy tails ranging over several orders of magnitude. In such bursty time series the fluctuations characterised by \(\sigma \) are much larger than \(\langle \tau \rangle \), indicating that \(P(\tau )\) is rather different from the exponential distribution that would derive from Poisson dynamics. Bursty systems evolve through events that are heterogeneously distributed in time. This leads to a broad \(P(\tau )\), which can be fitted with power-law, log-normal, or stretched exponential distributions, to name a few candidates. Most commonly, empirical analyses show that \(P(\tau )\) can be described by a power law with an exponential cutoff as
$$\begin{aligned} P(\tau )\simeq C\tau ^{-\alpha }e^{-\tau /\tau _c}, \end{aligned}$$
(2.7)
where C denotes a normalisation constant, \(\alpha \) is the power-law exponent, and \(\tau _c\) sets the position of the exponential cutoff. See Fig. 2.2b for an example of a power-law \(P(\tau )\). The power-law scaling of \(P(\tau )\) indicates the lack of any characteristic time scale, but the presence of strong temporal fluctuations, characterised by the power-law exponent \(\alpha \). Power-law distributions are also associated with the concepts of scale-invariance and self-similarity, as demonstrated in Ref. [212]. In this sense, the value of \(\alpha \) is deemed to have an important meaning, especially in terms of universality classes in statistical physics [232]. Interestingly, as will be discussed in Chap.  3, a number of recent empirical studies have reported power-law inter-event time distributions with various exponent values.
Fig. 2.2

a An example of a realistic event sequence generated by a model with a preferential memory loss mechanism for correlated bursts [130], using the parameter values \(\mu =0.1\), \(\nu =2\), and \(\varepsilon =\varepsilon _L=10^{-6}\). The bursty behaviour of the event sequence can be characterised by b the inter-event time distribution \(P(\tau )\), c the bursty train size distribution \(P_{\varDelta t}(E)\) for time window \(\varDelta t\), and d the autocorrelation function \(A(t_d)\) with time delay \(t_d\). In addition, the burstiness parameter and memory coefficient of the event sequence were estimated as \(B\approx 0.483\) and \(M\approx 0.038\), respectively

Nevertheless, we note that although recent studies have disclosed several bursty systems with broad inter-event time distributions, it is not trivial to identify the functional form that best fits the data points and to estimate its parameters, like the value of the power-law exponent. For the related statistical and technical issues, see Ref. [50] and references therein. In addition, the effect of the finite length of the observation period on the evaluation of inter-event time distributions has recently been discussed in Ref. [158].
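
In practice, even estimating \(P(\tau )\) requires some care, since it spans several orders of magnitude. The following Python sketch, with illustrative names, uses logarithmic binning; it is a simple recipe, not the estimation methodology of Ref. [50].

```python
# A minimal sketch of estimating P(tau) from event timings with logarithmic
# binning, appropriate for distributions spanning many orders of magnitude.
import numpy as np

def iet_distribution(event_times, n_bins=30):
    """Log-binned probability density of inter-event times."""
    taus = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    taus = taus[taus > 0]                            # drop simultaneous events
    bins = np.logspace(np.log10(taus.min()), np.log10(taus.max()), n_bins)
    density, edges = np.histogram(taus, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centres
    return centers, density
```

For maximum-likelihood fitting of power laws and comparison of candidate forms, one can use, e.g., the powerlaw Python package, which follows the kind of methodology discussed in Ref. [50].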

2.1.2.2 The Burstiness Parameter

The heterogeneity of the inter-event times can be quantified by a single measure introduced by Goh and Barabási [79]. The burstiness parameter B is defined as a function of the coefficient of variation (CV) of inter-event times, \(r\equiv \sigma /\langle \tau \rangle \), to measure temporal heterogeneity as follows:
$$\begin{aligned} B\equiv \frac{r-1}{r+1}=\frac{\sigma -\langle \tau \rangle }{\sigma +\langle \tau \rangle }. \end{aligned}$$
(2.8)
Here B takes the value of \(-1\) for regular time series with \(\sigma =0\), and it is equal to 0 for random, Poissonian time series where \(\sigma =\langle \tau \rangle \). When a time series has more heterogeneous inter-event times than a Poisson process, the burstiness parameter is positive (\(B>0\)), reaching the value of 1 only in the extremely bursty limit \(\sigma \rightarrow \infty \). This measure has found a wide range of applications because of its simplicity, e.g., in analysing earthquake records, heartbeats of human subjects, and communication patterns of individuals in social networks, as well as for testing models of bursty dynamics [74, 79, 123, 130, 154, 177, 292, 303, 305].
However, it was recently shown that the range of B is strongly affected by the number of events n, especially for bursty temporal patterns [153]. For a regular time series, the CV of inter-event times, r, has the value of 0 irrespective of n, as all the inter-event times are the same. For a random time series, one gets \(r=\sqrt{(n-1)/(n+1)}\) by imposing a periodic boundary condition on the time series. This case basically corresponds to the Poisson process. Finally, for an extremely bursty time series, one has \(r=\sqrt{n-1}\), corresponding to the case when all events occur asymptotically at the same time. This implies a strong finite-size effect on the burstiness parameter for time series with a moderate number of events. We also remark that \(B=1\) is realised only when \(n\rightarrow \infty \). Let us assume that one compares the degrees of burstiness of two event sequences with different numbers of events. If the measured values of B are the same for both event sequences, does it really mean that those event sequences are equally bursty? This is not a trivial issue. Thus, in order to correct for these strong finite-size effects, an alternative burstiness measure has been introduced in Ref. [153]:
$$\begin{aligned} B_n\equiv \frac{\sqrt{n+1} r-\sqrt{n-1}}{(\sqrt{n+1}-2)r +\sqrt{n-1}}, \end{aligned}$$
(2.9)
which was devised to have the value of 1 for \(r=\sqrt{n-1}\), 0 for \(r=\sqrt{(n-1)/(n+1)}\), and \(-1\) for \(r=0\), respectively. The authors claimed that using this measure, one can distinguish the finite-size effect from the intrinsic burstiness characterising the time series.
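
Both measures are straightforward to compute from an event sequence, as in the following minimal Python sketch (variable names are ours); a Poissonian sequence of \(n=200\) events should give B slightly below 0 and \(B_n\) close to 0.

```python
# A minimal sketch of the burstiness parameter B of Eq. (2.8) and its
# finite-size corrected variant B_n of Eq. (2.9); n is the number of events.
import numpy as np

def burstiness(event_times):
    taus = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    n = len(event_times)
    r = taus.std() / taus.mean()                   # coefficient of variation
    B = (r - 1.0) / (r + 1.0)                      # Eq. (2.8)
    B_n = ((np.sqrt(n + 1) * r - np.sqrt(n - 1))   # Eq. (2.9)
           / ((np.sqrt(n + 1) - 2) * r + np.sqrt(n - 1)))
    return B, B_n

rng = np.random.default_rng(1)
poisson_events = np.cumsum(rng.exponential(1.0, size=200))
print(burstiness(poisson_events))
```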

2.1.2.3 The Memory Coefficient

So far, we have ignored any possible correlation between inter-event times for the sake of a simple description. As a first approximation to quantify dependencies between consecutive inter-event times, a joint distribution \(P(\tau _i,\tau _{i+1},\cdots ,\tau _{i+k})\) of an arbitrary number of consecutive inter-event times can be studied directly, in a non-trivial fashion, as introduced in Ref. [130]. For a simpler description of such dependencies, Goh and Barabási [79] introduced the memory coefficient M to measure two-point correlations between consecutive inter-event times as follows:
$$\begin{aligned} M\equiv \frac{1}{n-2}\sum _{i=1}^{n-2}\frac{(\tau _i-\langle \tau \rangle _1)(\tau _{i+1}-\langle \tau \rangle _2)}{\sigma _1\sigma _2}, \end{aligned}$$
(2.10)
with \(\langle \tau \rangle _1\) (respectively \(\langle \tau \rangle _2\)) and \(\sigma _1\) (respectively \(\sigma _2\)) being the average and the standard deviation of inter-event times \(\{\tau _i | i=1,\cdots , n-2\}\) (respectively \(\{\tau _{i+1} | i=1,\cdots , n-2\}\)). Beyond only considering consecutive inter-event times, this measure can be extended to capture correlations between inter-event times separated by exactly \(m-1\) intermediate inter-event times (\(m\ge 1\)). As a general form, the memory coefficient can be written as follows:
$$\begin{aligned} M_m\equiv \frac{1}{n-m-1}\sum _{i=1}^{n-m-1}\frac{(\tau _i-\langle \tau \rangle _1)(\tau _{i+m}-\langle \tau \rangle _2)}{\sigma _1\sigma _2} \end{aligned}$$
(2.11)
with the corresponding definitions of \(\langle \tau \rangle _1\), \(\langle \tau \rangle _2\), \(\sigma _1\), and \(\sigma _2\). Then, the set of \(M_m\) for all possible m may fully characterise the memory effects between inter-event times.
Note that an alternative measure, called the local variation, was introduced originally in neuroscience [254]. The local variation is defined as
$$\begin{aligned} \mathrm{LV} \equiv \frac{1}{n-2}\sum _{i=1}^{n-2}\frac{3(\tau _i-\tau _{i+1})^2}{(\tau _i+\tau _{i+1})^2}, \end{aligned}$$
(2.12)
which takes the values of 0, 1, and 3 for the regular, random, and extremely bursty time series, respectively. This measure has also been used to analyse datasets describing bursty human patterns [12].
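
Both \(M_m\) and LV reduce to short expressions over consecutive inter-event times, as in the following minimal Python sketch (names are ours); for uncorrelated exponential inter-event times one expects \(M_m\approx 0\) and \(\mathrm{LV}\approx 1\).

```python
# A minimal sketch of the memory coefficient M_m of Eq. (2.11) (Eq. (2.10)
# is the m = 1 case) and the local variation LV of Eq. (2.12).
import numpy as np

def memory_coefficient(taus, m=1):
    taus = np.asarray(taus, dtype=float)
    x, y = taus[:-m], taus[m:]                     # pairs (tau_i, tau_{i+m})
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

def local_variation(taus):
    taus = np.asarray(taus, dtype=float)
    x, y = taus[:-1], taus[1:]
    return np.mean(3.0 * (x - y) ** 2 / (x + y) ** 2)

rng = np.random.default_rng(2)
taus = rng.exponential(1.0, size=10_000)           # uncorrelated, Poissonian
print(memory_coefficient(taus), local_variation(taus))   # ~0 and ~1
```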
We also introduce an entropy-based measure for the correlations between consecutive inter-event times, which applies only to power-law inter-event time distributions [15]. If the inter-event time distribution is a power law, \(P(\tau )\propto \tau ^{-\alpha }\) for \(\tau \ge \tau _\mathrm{min}\), one can assign to each inter-event time \(\tau _i\) a number \(r_i\) as follows:
$$\begin{aligned} r_i=1-\left( \frac{\tau _i}{\tau _\mathrm{min}}\right) ^{1-\alpha }, \end{aligned}$$
(2.13)
which will be uniformly distributed on [0, 1). Then the correlation between consecutive inter-event times is measured in terms of the mutual information using the joint probability density function \(P(r_i, r_{i+1})\):
$$\begin{aligned} I(r_i;r_{i+1})\equiv \sum _{r_i}\sum _{r_{i+1}}P(r_i, r_{i+1}) \log \left[ \frac{P(r_i, r_{i+1})}{P(r_i)P(r_{i+1})}\right] . \end{aligned}$$
(2.14)
If \(\tau _i\) and \(\tau _{i+1}\) are fully uncorrelated, so are \(r_i\) and \(r_{i+1}\), leading to the zero value of the mutual information defined above.

2.1.2.4 The Autocorrelation Function

The conventional way of detecting correlations in time series is to measure the autocorrelation function. For this, we use the representation of event sequences as binary signals x(t), as defined in Eq. (2.2). In addition, for a proper introduction we need to define the delay time \(t_d\), which sets a time lag between two observations of the signal x(t). Then the autocorrelation function with delay time \(t_d\) is defined as follows:
$$\begin{aligned} A(t_d)\equiv \frac{ \langle x(t)x(t+t_d)\rangle _t- \langle x(t)\rangle ^2_t}{ \langle x(t)^2\rangle _t- \langle x(t)\rangle ^2_t}, \end{aligned}$$
(2.15)
where \(\langle \cdot \rangle _t\) denotes the time average over the observation period. For more on the autocorrelation function, see Ref. [37]. In time series with temporal correlations, \(A(t_d)\) typically decays as a power law:
$$\begin{aligned} A(t_d)\sim t_d^{-\gamma } \end{aligned}$$
(2.16)
with decay exponent \(\gamma \). One can see an example of a power-law decaying \(A(t_d)\) in Fig. 2.2d. In addition, note that one can relate \(A(t_d)\) to the power spectrum or spectral density of the signal x(t) as follows:
$$\begin{aligned} P(\omega )=\left| \int x(t)e^{i\omega t}dt\right| ^2 \propto \int A(t_d)e^{-i\omega t_d}dt_d, \end{aligned}$$
(2.17)
which appears as the Fourier transform of the autocorrelation function. We are mostly interested in the power-law decaying power spectrum
$$\begin{aligned} P(\omega )\sim \omega ^{-\alpha _\omega } \end{aligned}$$
(2.18)
with \(0.5<\alpha _\omega <1.5\), in which case the time series is called 1/f noise. 1/f noise has been ubiquitously observed in various complex systems [18], hence extensively studied over the last few decades.
The scaling relation between \(\alpha \) and \(\gamma \) has been studied both analytically and numerically. Let us first mention the relation between \(\alpha _\omega \) and \(\gamma \). If \(A(t_d) \sim t_d^{-\gamma }\) for \(0<\gamma <1\), then from Eq. (2.17) one finds the scaling relation:
$$\begin{aligned} \alpha _\omega =1-\gamma . \end{aligned}$$
(2.19)
When the inter-event times are i.i.d. random variables with \(P(\tau )\sim \tau ^{-\alpha }\), implying no interdependency between inter-event times, the power-law exponent \(\alpha _\omega \) is obtained as a function of \(\alpha \) as follows [8, 180]:
$$\begin{aligned} \alpha _\omega =\left\{ \begin{array}{ll} \alpha -1 &{} \text {for} \, 1<\alpha \le 2,\\ 3-\alpha &{} \text {for} \, 2<\alpha \le 3,\\ 0 &{} \text {for} \, \alpha >3. \end{array}\right. \end{aligned}$$
(2.20)
For this result, the following inter-event time distribution was used:
$$\begin{aligned} P(\tau )=\left\{ \begin{array}{ll} \frac{\alpha -1}{a^{1-\alpha }-b^{1-\alpha }}\tau ^{-\alpha } &{} \text {for} \, 0<a<\tau <b,\\ 0 &{} \text {otherwise}. \end{array}\right. \end{aligned}$$
(2.21)
Combining Eqs. (2.19) and (2.20), we have
$$\begin{aligned} \begin{array}{ll} \alpha +\gamma =2 &{} \text {for} \, 1<\alpha \le 2,\\ \alpha -\gamma =2 &{} \text {for} \, 2<\alpha \le 3,\\ \end{array} \end{aligned}$$
(2.22)
which have also been derived in Ref. [281]. The above power-law exponents can be related via the Hurst exponent H, i.e., \(\gamma =2-2H\) [135] or \(\alpha _\omega =2H-1\) [8, 248]. This indicates that a power-law decaying autocorrelation function could be explained solely by the inhomogeneous inter-event times, not by the interdependency between them. In fact, the observed autocorrelation functions measure not only the inhomogeneities in inter-event times themselves but also correlations between consecutive inter-event times of arbitrary length. Thus these effects should be distinguished from each other, if possible, for a better understanding of bursty behaviour. For this, another measure, called the bursty train size distribution, has recently been introduced, to be discussed below.
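
For a discretised binary signal, Eq. (2.15) can be estimated directly, as in the following minimal Python sketch; this is a simple estimator with illustrative names, ignoring boundary corrections for large delays.

```python
# A minimal sketch of the autocorrelation function of Eq. (2.15) for a
# binary event signal x on a discrete time grid, cf. Eq. (2.2).
import numpy as np

def autocorrelation(x, max_delay):
    x = np.asarray(x, dtype=float)
    mean, var = x.mean(), x.var()
    A = np.empty(max_delay + 1)
    for t_d in range(max_delay + 1):
        overlap = x[: len(x) - t_d] * x[t_d:]      # x(t) x(t + t_d)
        A[t_d] = (overlap.mean() - mean ** 2) / var
    return A

# Binary signal from integer event timings:
events = np.array([2, 3, 4, 10, 11, 40, 41, 42, 90])
x = np.zeros(100)
x[events] = 1.0
print(autocorrelation(x, max_delay=5))
```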

2.1.2.5 The Bursty Train Size Distribution

The above-mentioned ambiguity of the autocorrelation function called for another way to detect correlations between consecutive inter-event times. A method based on detecting correlated bursty trains was introduced in Ref. [144]. A bursty train is a sequence of events, where each event follows the previous one within a time window \(\varDelta t\). Here \(\varDelta t\) defines the maximum time between consecutive events that are assumed to be causally correlated. In this way, an event sequence can be decoupled into a set of causal event trains, in which each pair of consecutive events in a given train is closer than \(\varDelta t\), while trains are separated from each other by an inter-event time \(\tau >\varDelta t\). To obtain the size of each bursty train, denoted by E, we count the number of events it contains, as depicted in Fig. 2.3. Note that this notion assigns a bursty train size \(E=1\) to standalone events, which occur independently of any previous or following events. The relevant measure of temporal correlation is the bursty train size distribution \(P_{\varDelta t}(E)\) for a fixed \(\varDelta t\). If events are independent, \(P_{\varDelta t}(E)\) must appear as follows:
$$\begin{aligned} P_{\varDelta t}(E)= & {} \left[ \int _0^{\varDelta t}P(\tau )d\tau \right] ^{E-1}\left[ 1- \int _0^{\varDelta t} P(\tau )d\tau \right] \end{aligned}$$
(2.23)
$$\begin{aligned}\approx & {} \frac{1}{E_c(\varDelta t)}e^{-E/E_c(\varDelta t)}, \end{aligned}$$
(2.24)
where \(E_c(\varDelta t)\equiv -1/\ln F(\varDelta t)\) with the cumulative distribution of inter-event times \(F(\varDelta t)\equiv \int _0^{\varDelta t}P(\tau )d\tau \). Since \(F(\varDelta t)\) is not a function of E, the functional form of \(P(\tau )\) is irrelevant to the functional form of \(P_{\varDelta t}(E)\), which is exponential for any sequence of independent events. Thus any deviation from an exponential form of \(P_{\varDelta t}(E)\) indicates correlations between inter-event times. Interestingly, several empirical cases have been found to show power-law distributed train sizes,
$$\begin{aligned} P_{\varDelta t}(E)\sim E^{-\beta }, \end{aligned}$$
(2.25)
with the power-law exponent \(\beta \) for a wide range of \(\varDelta t\) [119, 144, 145, 152]. For a demonstration of such observations, see Fig. 2.4a–c, adapted from Ref. [144]. This phenomenon, called correlated bursts, has been shown to characterise several systems in nature and human dynamics [144].
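
Train detection itself is simple: a new train starts wherever an inter-event time exceeds \(\varDelta t\). The following minimal Python sketch (names are ours) returns the train sizes E, from which \(P_{\varDelta t}(E)\) can be histogrammed.

```python
# A minimal sketch of detecting bursty trains: consecutive events closer
# than delta_t belong to the same train, and E is the train size.
import numpy as np

def train_sizes(event_times, delta_t):
    times = np.sort(np.asarray(event_times, dtype=float))
    breaks = np.diff(times) > delta_t              # a new train starts here
    train_ids = np.concatenate(([0], np.cumsum(breaks)))
    return np.bincount(train_ids)                  # E for each train

events = [1.0, 2.0, 3.0, 10.0, 11.0, 30.0]
print(train_sizes(events, delta_t=2.0))            # -> [3 2 1]
```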
Fig. 2.3

Schematic diagram of an event sequence, where each vertical line indicates the timing of an event. For a given time window \(\varDelta t\), a bursty train is determined by a set of events separated by \(\tau \le \varDelta t\), while events in different trains are separated by \(\tau >\varDelta t\). The number of events in each bursty train, i.e., the bursty train size, is denoted by E. In most empirical datasets, the distributions of E are heavy-tailed

Fig. 2.4

The characteristic functions of human communication event sequences. The bursty train size distribution \(P_{\Delta t}(E)\) with various time windows \(\Delta t\) (main panels), the inter-event time distribution \(P(\tau )\) (left bottom panels), and the autocorrelation function \(A(t_d)\) (right bottom panels) are calculated for different communication datasets. (a) Mobile phone call dataset: the scale-invariant behaviour was characterised by power-law functions with exponent values \(\alpha \simeq 0.7\), \(\beta \simeq 4.1\), and \(\gamma \simeq 0.5\). (b) Almost the same exponents were estimated for short message sequences, taking values \(\alpha \simeq 0.7\), \(\beta \simeq 3.9\) and \(\gamma \simeq 0.6\). (c) Email event sequence with estimated exponents \(\alpha \simeq 1.0\), \(\beta \simeq 2.5\) and \(\gamma =0.75\). A gap in the tail of \(A(t_d)\) in panel (c) appears due to logarithmic binning and slightly negative correlation values. Empty symbols show the corresponding results calculated on independent sequences. Lanes labeled with s, m, h and d denote seconds, minutes, hours and days, respectively.

(Source: This figure is adapted from Ref. [144] and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License)

Finally, we mention the possible effects of interdependency between inter-event times on the scaling relations between the power-law exponents of the inter-event time distribution and the autocorrelation function, as presented in Eq. (2.22). For example, one can compare the autocorrelation function calculated for an empirical event sequence with that for the shuffled event sequence, where correlations between inter-event times are destroyed, as shown in the lower right panels of Fig. 2.4a–c. By doing so, the effects of interdependency between inter-event times can be tested. Such effects on the scaling relation should be studied more rigorously in the future, as they are far from being fully understood. So far only a few studies have tackled this issue, e.g., see Refs. [130, 144, 248, 281].

2.1.2.6 Memory Kernels

We also introduce the memory kernel as another measure of bursty temporal patterns [12, 57, 191]. The memory kernel \(\phi (t)\) relates past events, whether endogenous or exogenous, to future events. This measure, which represents the effect of past events, has been empirically found to take the power-law form
$$\begin{aligned} \phi (t)\sim t^{-(1+\theta )}, \end{aligned}$$
(2.26)
where t is the elapsed time since the past event and \(\theta \) denotes the power-law exponent characterising the degree of memory effects. However, memory kernels are in general also assumed to follow other functional forms, e.g., hyperbolic, exponential [191], or power-law [130]. They are commonly applied in modelling bursty systems using self-exciting point processes [196]: for a given set of past events that occurred before time t, the event rate at time t reads as follows:
$$\begin{aligned} \lambda (t)=V(t)+\sum _{i, t_i\le t} \phi (t-t_i), \end{aligned}$$
(2.27)
where V(t) is the exogenous source, and \(t_i\) denotes the timing of the ith event. We will discuss this in more detail in Sect.  4.1.2.2.
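
As a concrete illustration of Eq. (2.27), the following Python sketch simulates such a self-exciting process by Ogata's thinning method, under our own simplifying assumptions of a constant exogenous rate V and an exponential kernel \(\phi (t) = a\, e^{-t/\tau _k}\).

```python
# A minimal sketch of simulating the self-exciting process of Eq. (2.27)
# by Ogata's thinning, with a constant exogenous rate V and an exponential
# memory kernel phi(t) = a * exp(-t / tau_k) (illustrative choices).
import numpy as np

rng = np.random.default_rng(3)

def simulate_self_exciting(V=0.1, a=0.5, tau_k=1.0, t_max=1000.0):
    events, t = [], 0.0
    while True:
        past = np.array(events)
        # The rate at the current time bounds the rate until the next event,
        # since the exponential kernel only decays between events.
        lam_bar = V + a * np.sum(np.exp(-(t - past) / tau_k))
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            return np.array(events)
        lam_t = V + a * np.sum(np.exp(-(t - past) / tau_k))
        if rng.random() < lam_t / lam_bar:         # accept with prob lam(t)/lam_bar
            events.append(t)

events = simulate_self_exciting()
print(len(events))   # mean rate ~ V / (1 - a * tau_k) when a * tau_k < 1
```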

2.1.2.7 Other Characteristic Measures

In addition to the conventional measures of bursty behaviour introduced above, we here mention some less recognised ones. There indeed exist a number of traditional measures and techniques in nonlinear time series analysis [136, 185]. Among them we introduce detrended fluctuation analysis (DFA), originally devised for analysing DNA sequences [228]. For a given time series x(t) for \(0\le t<T\), with average value \(\langle x\rangle \), the cumulative time series is constructed by
$$\begin{aligned} y(t)\equiv \int _0^t (x(t')-\langle x\rangle )dt'. \end{aligned}$$
(2.28)
The total time period T is divided into segments of size w. For each segment, the cumulative time series is fitted by a polynomial \(y_w (t)\). Using the fitted polynomials for all segments, the residual fluctuation over the entire time series is calculated as follows:
$$\begin{aligned} F(w)\equiv \sqrt{\int _0^T [y(t)-y_w (t)]^2 dt}, \end{aligned}$$
(2.29)
which typically scales with the segment size w as \(w^H\). Here the scaling exponent H is called the Hurst exponent [41].
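
A minimal Python sketch of this procedure follows (linear detrending, illustrative names); for uncorrelated white noise the estimated slope should be close to \(H\approx 0.5\).

```python
# A minimal sketch of DFA: integrate the centred signal (Eq. (2.28)),
# detrend each segment of size w by a polynomial fit, and measure how the
# residual fluctuation F(w) (Eq. (2.29)) grows with w; the log-log slope is H.
import numpy as np

def dfa(x, window_sizes, order=1):
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # profile y(t)
    F = []
    for w in window_sizes:
        residuals = []
        for s in range(len(y) // w):
            seg = y[s * w : (s + 1) * w]
            t = np.arange(w)
            fit = np.polyval(np.polyfit(t, seg, order), t)   # local trend
            residuals.append(np.mean((seg - fit) ** 2))
        F.append(np.sqrt(np.mean(residuals)))
    return np.array(F)

rng = np.random.default_rng(4)
x = rng.normal(size=10_000)                  # white noise: expect H ~ 0.5
ws = np.array([16, 32, 64, 128, 256])
H = np.polyfit(np.log(ws), np.log(dfa(x, ws)), 1)[0]
print(round(H, 2))
```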

2.2 Inter-event Time, Residual Time, and Waiting Time

Regarding the terminology of burstiness, there is a common confusion between the definitions of inter-event time, waiting time, and residual time. Here we would like to clarify their definitions and their relations to each other.

For a given event sequence, the inter-event time \(\tau \) is defined as the time between two consecutive events. However, the observations of an event sequence always cover a finite period of time, which has to be considered in the terminology. So let us assume an observer who begins to observe the time series of events at a random moment in time and waits for the first observed event to take place. The time interval between the beginning of the observation period and the next event has been called the residual time \(\tau _r\), also often called the residual waiting time or relay time [133]. A similar definition of the residual time is found in queuing theory, in a situation when a customer arrives at a random time and waits for the server to become available [54, 56]. The residual time then is the time interval between the time of arrival and the time of being served, thus it corresponds to the remaining or residual time to the next event after a random arrival. The residual time distribution can be derived from the inter-event time distribution as
$$\begin{aligned} P(\tau _r)=\frac{1}{\langle \tau \rangle }\int _{\tau _r}^\infty P(\tau )d\tau , \end{aligned}$$
(2.30)
and the average residual time can be calculated as
$$\begin{aligned} \langle \tau _r \rangle = \int _0^\infty \tau _r P(\tau _r)d\tau _r = \frac{\langle \tau ^2\rangle }{2\langle \tau \rangle }. \end{aligned}$$
(2.31)
This result explains a phenomenon called the waiting-time paradox, which has important consequences for dynamical processes evolving on bursty temporal systems, as we will discuss in detail later in Sect.  5.1.1.1. As we mentioned earlier, a common reference dynamics for quantifying the heterogeneity of a bursty sequence is provided by a Poisson process. Thus we may consider a normalised average residual time after dividing \(\langle \tau _r \rangle \) by the corresponding residual time of a Poisson process \(\langle \tau _{r}^{P} \rangle \), which is simply \(\langle \tau \rangle \). This can then be written as
$$\begin{aligned} \frac{\langle \tau _r \rangle }{\langle \tau _{r}^{P} \rangle }=\frac{\langle \tau ^2 \rangle }{2\langle \tau \rangle ^2}=\frac{1}{2}\left[ \left( \frac{\sigma }{\langle \tau \rangle }\right) ^2+1\right] =\frac{B^2+1}{(B-1)^2}, \end{aligned}$$
(2.32)
where \(\sigma \) is the standard deviation of \(P(\tau )\) and B is the burstiness parameter as defined in Eq. (2.8). Consequently this ratio can equally well be seen as a measure of burstiness.
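
Equation (2.31), and hence the waiting-time paradox, is easy to verify numerically: sample random observation moments on a synthetic heavy-tailed event sequence and compare the mean observed residual time with \(\langle \tau ^2\rangle /(2\langle \tau \rangle )\). A minimal Python sketch with illustrative choices:

```python
# A minimal numerical check of Eq. (2.31): an observer arriving at a random
# moment sees a mean residual time <tau^2> / (2 <tau>), which grows with the
# fluctuations of the inter-event times (the waiting-time paradox).
import numpy as np

rng = np.random.default_rng(5)
taus = rng.pareto(2.5, size=200_000) + 1.0        # heavy-tailed inter-event times
t = np.cumsum(taus)                                # event timings

arrivals = rng.uniform(0, t[-1], size=100_000)     # random observation moments
idx = np.searchsorted(t, arrivals)                 # index of the next event
residuals = t[idx] - arrivals

print(residuals.mean())                            # empirical <tau_r>
print((taus ** 2).mean() / (2 * taus.mean()))      # Eq. (2.31), should agree
```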

Contrary to the above definitions, waiting times are not necessarily derived from series of consecutive events, but rather characterise the lifespan of single tasks. Tasks wait to be executed for a period depending on their priorities as well as on newly arriving tasks. In this way the waiting time \(\tau _w\), also often called the response time or processing time, is defined as the time interval a newly arrived task needs to wait before it is executed. For example, in an editorial process, each submitted manuscript gives rise to one waiting time until the decision is made [91, 127, 205], and the waiting time distribution is obtained from a number of submitted manuscripts. However, a heavy tail of the waiting time distribution, \(P(\tau _w)\), implies the heterogeneity of the editorial system, but not necessarily bursty dynamics of the process itself. On the other hand, the waiting time can be deduced from an event sequence, e.g., of directed interactions, like the time between receiving and responding to an email or letter. In these cases, a close relation between \(P(\tau )\) and \(P(\tau _w)\) seems to appear. Indeed, it has been argued that for a process with a heterogeneous waiting time distribution, the inter-event time distribution is also heterogeneous and vice versa, and both can be characterised by the same exponent [21, 70, 176, 286]. Waiting times will be duly addressed later in Sect.  4.1.1, where they appear as the central quantity in the definition of priority queuing models [2, 21].

2.3 Collective Bursty Phenomena

So far we have been discussing measures characterising bursty behaviour at the level of single individuals. However, individuals form egocentric networks and are connected to a larger social system, which could itself show bursty dynamics and be characterised at the system level. Since individual dynamics is observed to be bursty, it may affect the system-level dynamics and the emergence of collective phenomena, while the contrary is also true: if the collective dynamics is bursty, it must affect the temporal patterns of each individual. The structure of social systems has commonly been interpreted in terms of social networks [35, 293], where nodes are identified as individuals and links represent their interactions. Thanks to the recent access to a huge amount of digital datasets related to human dynamics and social interaction, a number of empirical findings have accumulated on the structure and dynamics of social networks. Researchers have analysed various social networks of face-to-face interactions [63, 72, 306], emails [65, 161], mobile phone communication [30, 219], online forums [66, 108], Social Networking Services (SNS) like Facebook [279] and Twitter [168], and even massive multiplayer online games [270, 272]. These studies of social networks show that there are commonly observed features or stylised facts characterising their structures [115, 151, 206]; see also the summary in Table I in Ref. [125]. For example, one finds broadly distributed network quantities like node degree and link weight [4, 221], homophily [194, 210], community or modular structure [71, 83], multilayer nature [32, 156], and geographical and demographic correlations [131, 220, 223], to mention a few. All these characteristics play important roles in the dynamics of social interactions.

At the same time, such datasets allow the observation of the mechanisms and correlations driving the interaction dynamics of people. This is the subject of the recent field of temporal networks [101, 105, 106, 190], which identifies social networks as temporal objects, where interactions are time-varying and encode the static structure only after aggregation over an extensive period. Temporal networks are commonly interpreted as a sequence of events, defined as triplets (i, j, t), indicating that a node i interacts with a node j at time t. The analysis of event sequences of a large number of individuals can disclose the mesoscopic structure of bursty interaction patterns and enables us to characterise burstiness at the system level as well.

2.3.1 Bursty Patterns in Egocentric Networks

The interaction dynamics of a focal individual, or ego, can be extracted from the global temporal network by collecting all events in which the ego i participates:
$$\begin{aligned} x_i(t)\equiv \sum _{j\in \varLambda _i} x_{ij}(t), \end{aligned}$$
(2.33)
where \(\varLambda _i\) denotes the neighbour set of the ego i. In other words, the event sequence \(x_i(t)\) is built up from the interaction sequences on single links, \(x_{ij}(t)\), which together define the dynamics of the egocentric network. Our first question is how the bursty interactions of an ego are distributed among its different neighbours.
We have already discussed that, observing an individual, her bursty activities may evolve in trains where several events follow each other within a short time window \(\varDelta t\). This is especially true for communication dynamics, where interactions like mobile calls, SMSs or emails sent or received by an ego exhibit such patterns. However, the question remains whether such bursty communication trains are the consequence of some collective interaction patterns in the larger egocentric network, e.g., to organise an event or to process information, or whether, on the contrary, they evolve on single links, induced by discussions between only two people. One can easily figure this out by decoupling the entangled egocentric dynamics into single links and observing how the bursty train size distribution P(E) changes in the process. If the first hypothesis is true, as long trains of an ego are distributed among many links, the trains should fall apart after decoupling and their size distribution should change radically. On the other hand, if the second hypothesis is true, their size distribution should not change considerably. Using mobile phone call and SMS sequences, it has been shown in Ref. [145] that after decoupling, P(E) measured on single links is almost identical to that observed in individual activity sequences. In support of this observation it has been found that \({\sim }80\%\) of trains evolve on single links, almost independently of the train size. Consequently, this suggests that long correlated bursty trains are a property of links rather than of nodes and are commonly induced by dyadic interactions. This study further discusses the difference between call and SMS sequences and finds that call (respectively SMS) trains are more imbalanced (respectively balanced) than one would expect from the overall communication balance of the social tie.
Fig. 2.5

Schematic example of the event sequence of an individual A with her various contexts (neighbours) B, C, and D. The collective inter-event time \(\tau ^{(i)}\) is defined as the time interval between consecutive events of any context of the ego \(i=A\). The contextual inter-event time \(\tau ^{(ij)}\) is defined between events of the same context, e.g., \(j=B\)

One can adopt the same picture to understand the contribution of bursty patterns on links to the overall inter-event time distribution of an ego. This question was addressed by Jo et al., who proposed an alternative explanation for bursty links, related to the contextual dependence of behaviour. In their interpretation, the context of an event [121, 128] is the circumstance in which the event occurs, and can be a person, a social situation with some convention, or a place. In the case of social interactions, for an ego i the context can be associated with a neighbour j in the egocentric network. Then the question is how much contextual bursts, which evolve in the interaction sequences of single links \(x_{ij}(t)\), determine the collective bursts observable in the overall interaction sequence \(x_i(t)\) of the ego i. This question can be addressed at the level of inter-event times. As depicted in Fig. 2.5, let us denote collective inter-event times in \(x_i(t)\) by \(\tau ^{(i)}\), and contextual inter-event times in \(x_{ij}(t)\) by \(\tau ^{(ij)}\). It is straightforward to see that a contextual inter-event time typically comprises multiple collective inter-event times as follows:
$$\begin{aligned} \tau ^{(ij)} =\sum _{k=1}^n \tau ^{(i)}_k, \end{aligned}$$
(2.34)
where \(n-1\) is the number of events with contexts other than j between two consecutive events with j. For example, one finds \(n=3\) in Fig. 2.5 between the first and second observed interactions with context B. The relation between \(P(\tau ^{(ij)})\) and \(P(\tau ^{(i)})\) for uncorrelated inter-event times has been studied analytically and numerically in Ref. [128], where both \(P(\tau ^{(ij)})\) and \(P(\tau ^{(i)})\) are assumed to have power-law forms with exponents \(\alpha '\) and \(\alpha \), respectively. For deriving the scaling relation between \(\alpha '\) and \(\alpha \), another power-law distribution is assumed for n in Eq. (2.34), i.e., the number of collective inter-event times per contextual inter-event time, as \(P(n)\sim n^{-\eta }\). The distribution of n is related to how the ego distributes her limited resources, like time, among her neighbours. Then one can write the relation between the distribution functions as follows:
$$\begin{aligned} P(\tau ^{(ij)})= & {} \sum _{n=1}^\infty P(n) F_n(\tau ^{(ij)}),\end{aligned}$$
(2.35)
$$\begin{aligned} F_n(\tau ^{(ij)})\equiv & {} \prod _{k=1}^n \int _{\tau _0}^\infty d\tau _k^{(i)} P(\tau _k^{(i)}) \delta \left( \tau ^{(ij)}-\sum _{k=1}^n \tau _k^{(i)}\right) , \end{aligned}$$
(2.36)
where \(F_n\) denotes the probability of making one \(\tau ^{(ij)}\) as a sum of n \(\tau ^{(i)}\)s, and \(\tau _0\) is the lower bound of inter-event times \(\tau ^{(i)}\). By solving this equation, the scaling relation between \(\alpha '\), \(\alpha \), and \(\eta \) is obtained [128]:
$$\begin{aligned} \alpha '=\min \{(\alpha -1)(\eta -1)+1,\alpha ,\eta \}. \end{aligned}$$
(2.37)
This result provides a condition under which the statistical properties of the ego’s own temporal pattern could be described similarly to those of the ego’s relationships.
Note that this terminology can be generalised to event sequences not only on links but for an arbitrary set of neighbours associated with the same context \(\varLambda \). In this case the contextual event sequence can be written as
$$\begin{aligned} x_\varLambda (t)\equiv \sum _{i, j\in \varLambda } x_{ij}(t), \end{aligned}$$
(2.38)
where the summation runs over individuals i and j who both belong to the same context or group \(\varLambda \). Then one can study the relation between statistical properties at different levels of contextual grouping. For example, an empirical analysis using an online forum dataset was recently performed in Ref. [224] to relate individual bursty patterns to forum-level bursty patterns.
In another work Song et al. [257] proposed scaling relations between power-law exponents characterising structural and temporal properties of temporal social networks. In terms of structure they concentrate on the distribution of node degrees and link weights observed over a finite time period. Here the node degree indicates the number of neighbours of a node, while the link weight is defined as the number of interaction events between two neighbouring nodes. Both of these distributions can be approximated as power laws with exponents \(\varepsilon _k\) and \(\varepsilon _w\). To characterise the dynamics of the network they consider the individual activity \(a_i\), defined as the total number of interactions of an ego i within a given period, and inter-event time distributions, measured not in real time but in event time, and not for egos but for social ties. In this case the inter-event time is defined as the number of events between two consecutive interactions of the ego with one specific neighbour (similar to n in Eq. (2.34)). Distributions of these dynamical quantities can also be approximated by power laws, with exponents \(1+\alpha _{a}\) for activity and \(1+\alpha _{\tau }\) for inter-event times. They first show that the degree of a node i, denoted by \(k_i\), observed for a period \([ t_1, t_2 ]\), increases as
$$\begin{aligned} k_i(t_1,t_2)\sim a_i(t_1,t_2)^{\kappa _i}. \end{aligned}$$
(2.39)
They argue that the power-law exponent \(\kappa _i\) measured for an ego i, which they call the sociability, satisfies the condition
$$\begin{aligned} \kappa _i+\alpha _{\tau , i}=1, \end{aligned}$$
(2.40)
where \(\alpha _{\tau , i}\) denotes the inter-event time exponent observed in the interaction sequence of the ego i. They further argue that the degree and weight distribution exponents can be determined by the dynamical parameters as
$$\begin{aligned} \varepsilon _k=1+\min \left\{ \frac{\alpha _a}{1-\overline{\alpha }_{\tau }}, \frac{u}{\overline{\alpha }_{\tau } \ln \overline{a}} \right\} , \varepsilon _w=2-\overline{\alpha }_{\tau }, \end{aligned}$$
(2.41)
where \(\overline{\alpha }_{\tau }\) and \(\overline{a}\) denote average values, while u is a parameter capturing the variability of the sociability \(\kappa \). The authors support these scaling relations by introducing scaling functions that collapse the corresponding distributions obtained from various human interaction datasets.

2.3.2 Bursty Temporal Motifs

Moving beyond the egocentric point of view, bursty temporal interaction patterns can appear not only centered around a single ego but also between a larger number of people. Such patterns are formed by causally correlated sequences of interactions, which appear within a short time window between two or more people. These temporal motifs are arguably induced by group conversations, information processing, or organisation of a common event, and can be associated with burstiness at the mesoscopic level of networks. The emergence of such group-level bursty events is rather rare, and it strongly depends on the observed communication channel and the type of induced events. However, it has been shown that some of them appear with a significantly larger frequency as compared to random reference models.
Fig. 2.6

a A directed temporal network between four nodes a, b, c, and d with four events, \(e_1\), \(e_2\), \(e_3\), \(e_4\), respectively at \(t=15\), 18, 24, and 33. Assuming \(\varDelta t = 10\), \(e_2\) and \(e_4\) are adjacent but not \(\varDelta t\)-adjacent. b–d All 2-event valid temporal subgraphs. e An invalid subgraph, because it skips the event \(e_2\) that for node a takes place between \(e_1\) and \(e_3\)

Temporal motifs are defined in temporal networks. For a schematic example, see Fig. 2.6a. Here interactions between nodes occur at different timings, and they are interpreted as events assigned with time stamps. For a more detailed definition and characterisation of temporal networks we refer the reader to Refs. [105, 190]. Temporal motifs consist of \(\varDelta t\)-adjacent events in the temporal network, i.e., events that share at least one common node and happen within a time window \(\varDelta t\) of each other. Two events that are not directly \(\varDelta t\)-adjacent may still be \(\varDelta t\)-connected if there is a sequence of events connecting them, successive in time and pairwise \(\varDelta t\)-adjacent. A connected temporal subgraph is then a set of events where each pair of events is \(\varDelta t\)-connected, as depicted in Fig. 2.6b–e. To define temporal motifs we further restrict our definition to valid temporal subgraphs, where for each node in the subgraph the events involving that node must be consecutive, e.g., as in Fig. 2.6b–d. Note that for the final definition of temporal motifs we consider only maximal valid temporal subgraphs, which contain all events that are \(\varDelta t\)-connected to the included events. For a more precise definition, see Refs. [163, 164]. Also note that an alternative definition of temporal motifs has been proposed recently, where motifs are defined by events which all appear within a fixed time window [225].

One way to detect temporal motifs is to interpret them as static directed colored graphs and find all isomorphic structures with equivalent event ordering in a temporal network [163]. The significance of the detected motifs can be inferred by comparing the observed frequencies to those calculated in some reference models, where temporal and causal correlations are removed. Such analysis has shown [163] that the most frequent motifs in mobile phone communication sequences correspond to dyadic bursty interaction trains on single links. On the other hand, the least frequent motifs are formed by non-causal events, suggesting a strong dependence between causal correlations and bursty phenomena.

2.3.3 System Level Characterisation

Finally, we discuss methods to characterise bursty phenomena at the level of the whole social network. Temporal inhomogeneity at the system level can be measured in terms of temporal network sparsity [230]. This measure counts the number of microscopic configurations associated with the macroscopic state of a temporal network, a concept known as multiplicity in statistical physics. More specifically, in a temporal network for a given time window, events can be distributed over the links of the corresponding static structure. Here we denote a link between nodes i and j as ij, and the set of all links as L. Thus, for a time window one can measure the fraction of events on a given link ij, denoted by \(p_{ij}\), and compute the Shannon entropy over the links \(ij\in L\) as:
$$\begin{aligned} H_{L} = -\sum _{ij\in L} p_{ij}\ln p_{ij}, \end{aligned}$$
(2.42)
which quantifies how heterogeneously events are distributed among different links. After computing an average entropy \(\langle H_L \rangle \) over several time windows, one can estimate the effective number of links as
$$\begin{aligned} L^\mathrm{eff} \equiv \exp (\langle H_L \rangle ), \end{aligned}$$
(2.43)
which gives the number of links in a given time window under the assumption that the event rate per link is constant. By measuring the effective number of links both in the empirical temporal network and in a random reference model where events are uniformly distributed in time, one can introduce the notion of temporal network sparsity:
$$\begin{aligned} \zeta _\mathrm{temp} \equiv \frac{L^\mathrm{eff}}{L^\mathrm{eff}_\mathrm{ref}}. \end{aligned}$$
(2.44)
This measure compares the overall distribution of events within a given time window to the case of homogeneously distributed events. The smaller the value of \(\zeta _\mathrm{temp}\), the more severe the heterogeneities characterising the event sequence, and the more “temporally sparse” the network is. This measure turns out to have some explanatory power for spreading dynamics on various temporal networks [230].
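
The entropy and the effective number of links per time window are directly computable from an event list, as in the following minimal Python sketch (the names and toy events are ours); averaging \(L^\mathrm{eff}\) over windows and dividing by the value for the uniform reference then gives \(\zeta _\mathrm{temp}\).

```python
# A minimal sketch of Eqs. (2.42)-(2.43): the Shannon entropy of the event
# fractions per link within one time window and the effective number of links.
import numpy as np
from collections import Counter

def effective_links(events):
    """events: iterable of (i, j, t) triplets falling in one time window."""
    counts = Counter(tuple(sorted((i, j))) for i, j, _ in events)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()                               # fraction of events per link
    H_L = -np.sum(p * np.log(p))               # Eq. (2.42)
    return np.exp(H_L)                         # Eq. (2.43)

# Events piled onto one link give a smaller L_eff than evenly spread events:
bursty = [(0, 1, t) for t in range(8)] + [(1, 2, 0), (2, 3, 1)]
even = [(0, 1, 0), (1, 2, 1), (2, 3, 2), (0, 1, 3), (1, 2, 4), (2, 3, 5)]
print(effective_links(bursty), effective_links(even))   # ~1.9 vs 3.0
```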
Fig. 2.7

a An example of the deseasoning method applied to the mobile call series of a user, with \(T_{\circlearrowleft }=1\) week. The top shows the first two weeks of the call series, colored in red (the first week) and blue (the second week). Events from all weeks are collected in a one-week period to obtain the event rate \(\rho (t)\) for \(0\le t< T_{\circlearrowleft }\). After deseasoning, the events in each week are put back to their original slots. b The original inter-event time distribution for individuals with 200 calls is compared to the distributions of deseasoned inter-event times for various values of \(T_{\circlearrowleft }\).

(Source: This figure is adapted from Ref. [123]; ©IOP Publishing & Deutsche Physikalische Gesellschaft, licensed under CC BY-NC-SA)

2.4 Cyclic Patterns in Human Dynamics

It is evident that humans follow intrinsic periodic patterns of circadian, weekly, and even longer cycles [5, 122, 123, 184]. Such cycles clearly contribute to the inhomogeneities of temporal patterns, and they often result in an exponential cutoff in the inter-event time distribution. Identifying and filtering out such cyclic patterns from a time series can reveal bursty behaviour with origins other than those cycles. In order to characterise such cyclic patterns, let us consider a time series, i.e., the number of events at time t, denoted by x(t), for the entire period \(0\le t< T\). One may be interested in a specific cycle, like the daily or weekly one, with period denoted by \(T_{\circlearrowleft }\). Then, for a given period \(T_{\circlearrowleft }\), the event rate for \(0\le t <T_{\circlearrowleft }\) can be defined as
$$\begin{aligned} \rho (t)\equiv \frac{T_{\circlearrowleft }}{X}\sum _{k=0}^{T/T_{\circlearrowleft }} x(t+kT_{\circlearrowleft }), X\equiv \int _0^{T} x(t)dt. \end{aligned}$$
(2.45)
Such cycles turn out to also be apparent in the inter-event time distributions \(P(\tau )\). For example, one finds peaks of \(P(\tau )\) corresponding to multiples of one day in mobile phone calls and blog posts [122, 155]. Note that such periodicities could be characterised by means of the power spectrum analysis of Eq. (2.17); however, here we take a different approach.
Once such cycles are identified in terms of the event rate \(\rho (t)\), we can filter them out by deseasoning the time series [123]. First, we extend the domain of \(\rho (t)\) indefinitely by \(\rho (t+kT_{\circlearrowleft })=\rho (t)\) for arbitrary integer k. Then, using the identity \(\rho (t)dt=\rho ^*(t^*)dt^*\) with the deseasoned event rate \(\rho ^*(t^*)=1\), we get the deseasoned time \(t^*(t)\) as
$$\begin{aligned} t^*(t)\equiv \int _0^t \rho (t')dt'. \end{aligned}$$
(2.46)
For a schematic example of the deseasoning method, see Fig. 2.7a. In plain words, time is dilated (respectively contracted) at moments of high (respectively low) event rate. Then the deseasoned event sequence \(\{t^*(t_i)\}\) is compared to the original event sequence \(\{t_i\}\) to see how strong a signature of burstiness or memory effects remains in the deseasoned sequence. This reveals whether the empirically observed temporal heterogeneities can (or cannot) be explained by the intrinsic cyclic patterns, characterised in terms of the event rate. For example, if one obtains the deseasoned inter-event time \(\tau _i^*\) corresponding to the original inter-event time \(\tau _i=t_i-t_{i-1}\) as
$$\begin{aligned} \tau _i^* \equiv t^*(t_i)-t^*(t_{i-1})=\int _{t_{i-1}}^{t_i} \rho (t')dt', \end{aligned}$$
(2.47)
then the deseasoned inter-event time distribution \(P(\tau ^*)\) can be compared to the original inter-event time distribution \(P(\tau )\). This method was applied to mobile phone call series [123], as partly depicted in Fig. 2.7b, where the inter-event time distributions for the original and deseasoned event sequences show almost the same shape for various values of \(T_{\circlearrowleft }\). This indicates that human bursty dynamics could have origins other than circadian and weekly cycles. In order to quantitatively study the effects of deseasoning, the burstiness parameter B has been measured for both original and deseasoned mobile phone call series, finding overall decreased yet positive values of B, implying that bursts remain after deseasoning. In addition, the memory coefficients \(M_m\), bursty train size distributions \(P_{\varDelta t}(E)\), and autocorrelation function \(A(t_d)\) can also be measured using the deseasoned event sequence \(\{t^*(t_i)\}\) for comparison with the original ones.
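
A minimal Python sketch of this deseasoning procedure on a discretised cycle follows; the slot-based rate estimate and all names (deseason, period, n_slots) are our own illustrative choices.

```python
# A minimal sketch of Eqs. (2.45)-(2.46): estimate the event rate over one
# cycle on a grid of slots, then map each event timing t to the deseasoned
# time t*(t) by integrating the periodically extended rate.
import numpy as np

def deseason(event_times, period, n_slots=168):
    times = np.sort(np.asarray(event_times, dtype=float))
    dt = period / n_slots
    # Event rate rho over one period (Eq. (2.45)), normalised to mean 1,
    # so that the integral of rho over a full period equals the period.
    counts, _ = np.histogram(times % period, bins=n_slots, range=(0, period))
    rho = counts / counts.mean()
    # Cumulative integral of rho at the slot boundaries (Eq. (2.46)).
    cum = np.concatenate(([0.0], np.cumsum(rho) * dt))
    k, rem = np.divmod(times, period)              # whole cycles + remainder
    slot = (rem / dt).astype(int).clip(0, n_slots - 1)
    return k * period + cum[slot] + rho[slot] * (rem - slot * dt)

# Deseasoned inter-event times, cf. Eq. (2.47):
# tau_star = np.diff(deseason(times, period=7.0))   # e.g. a weekly cycle
```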
It is straightforward to extend this method to time series aggregated over different levels of activity groups, including the whole population. For a set of individuals \(\varLambda \), the number of events at time t is denoted by
$$\begin{aligned} x_{\varLambda }(t)\equiv \sum _{i\in \varLambda } x_i(t), \end{aligned}$$
(2.48)
where \(x_i(t)\) is the number of events of an individual i at time t. Then, for a given period of \(T_{\circlearrowleft }\), the event rate with \(0\le t <T_{\circlearrowleft }\) is defined as
$$\begin{aligned} \rho _{\varLambda }(t)\equiv \frac{T_{\circlearrowleft }}{X_{\varLambda }}\sum _{k=0}^{T/T_{\circlearrowleft }} x_{\varLambda }(t+kT_{\circlearrowleft }),\ X_{\varLambda }\equiv \int _0^{T} x_{\varLambda }(t)dt. \end{aligned}$$
(2.49)
Using this event rate for the actual set of individuals \(\varLambda \), one can get the deseasoned time \(t^*_{\varLambda }(t)\) as follows:
$$\begin{aligned} t^*_{\varLambda }(t)\equiv \int _0^t \rho _{\varLambda }(t')dt'. \end{aligned}$$
(2.50)
We remark that the fully deseasoned time series, i.e., for \(T_{\circlearrowleft }=T\), corresponds to the time series represented in the ordinal time-frame, where the real timings of events are replaced by the orders of events. Now if \(T_{\circlearrowleft }=T\), we have the event rate for a node i as \(\rho _i(t) =\frac{T}{X_i}x_i(t)\), with \(X_i\) denoting the total number of events of the node i. We denote the timing of the kth event between i and j by \(t^{(ij)}_k\) and get the deseasoned inter-event time corresponding to \(\tau ^{(ij)}_k=t^{(ij)}_k-t^{(ij)}_{k-1}\) as
$$\begin{aligned} {\tau ^*}^{(ij)}_k \equiv \frac{T}{X_i} \int _{t^{(ij)}_{k-1}}^{t^{(ij)}_k} x_i(t')dt' = \frac{T}{X_i}n^{(ij)}_k. \end{aligned}$$
(2.51)
Here \(n^{(ij)}_k\) is the contextual ordinal inter-event time, i.e., the number of events with contexts other than j between two consecutive events with context j. Thus, the fully deseasoned real time-frame is simply translated into the ordinal time-frame. The characterisation of bursts in terms of the ordinal time-frame has also been studied in other contexts, e.g., in terms of the activity clock [77], the relative clock [311], and “proper time” [69, 70]. In these works, elapsed time is counted in terms of the number of events instead of real time.

2.4.1 Remark on Non-stationarity

So far, the time series has been assumed to be stationary, either explicitly or implicitly. As stationarity by definition indicates symmetry under time translation, all non-Poissonian processes could be considered non-stationary; hence the various time series analysis methods mentioned above could not be applied to bursty temporal patterns. However, the definition of stationarity can be relaxed by allowing non-stationary behaviour only at some specific time scales: for example, human individuals can show a daily cycle in their temporal patterns, while keeping their daily routines for several months or longer. Then, their temporal patterns can be considered stationary only at time scales longer than one day and shorter than several months. This relaxed definition of stationarity could yet be misleading, given that most bursty phenomena show a scale-free, hierarchical nature in terms of time scales; still, we can apply various time series analysis methods as long as the time series looks stationary at least at some specific time scales. In this sense, the deseasoning method or detrended fluctuation analysis and its variants can be useful for removing non-stationary temporal patterns from the original time series, hence allowing us to investigate the bursty nature of the time series without being concerned with the non-stationarity issue. This is an important issue but has been largely ignored in many works, except for some recent studies mostly in relation to dynamical processes on networks [102, 107].

Copyright information

© The Author(s) 2018

Authors and Affiliations

  1. Laboratoire de l’Informatique du Parallélisme, École Normale Supérieure de Lyon, Univ Lyon, Inria, Lyon, France
  2. Junior Research Group, Asia Pacific Center for Theoretical Physics, Pohang, Korea (Republic of)
  3. Department of Computer Science, Aalto University, Espoo, Finland
