Bursty Human Dynamics, pp. 7–29

# Measures and Characterisations


## Abstract

In this Chapter we present the theoretical description and characterisation of bursty human dynamics. Starting from the description of discrete time series, we go through all the characteristic measures, like the inter-event time distribution, burstiness parameter, memory coefficient, bursty train size distribution, and autocorrelation function, which were borrowed or introduced over the last decade to describe bursty human systems from the individual to the network level. With these quantities we show how to detect temporal inhomogeneities and long-range memory effects in the event sequences of human dynamics. At the same time we also introduce methods for system-level characterisation, mainly in the frame of temporal networks, which have been intensively studied in recent years to describe temporal human social behaviour. Finally, as human dynamics intrinsically shows cyclic patterns, like daily and weekly ones, methods for deciphering the effects of such cycles are also described.

In order to investigate the dynamics of human social behaviour quantitatively, we first introduce it as a time series and show how it is characterised by means of various techniques of time series analysis. According to Box et al. [37], a time series is a set of observations that are made sequentially in time. The timing of an observation, denoted by *t*, can be either continuous or discrete. Since most datasets of human dynamics have recently been recorded digitally, we will here focus on the case of discrete timings. In this sense, the time series can be called an event sequence, where each event indicates an observation. In this series the *i*th event takes place at time \(t_i\) with the result of the observation \(z_i\), which can denote a number, a symbol, or even a set of numbers, depending on what has been measured. The sequence \(\{(t_i, z_i)\}\) can be simply denoted by \(z_t\). Some events could occur in a time interval or with a duration; for example, a phone call between two individuals may last from a few minutes to hours [105]. As the time scale of event durations is in many cases much smaller than that of our interest, the event duration will be ignored in this monograph unless stated otherwise.

In most cases a time series refers to observations made at regular timings. For a fixed time interval \(t_\mathrm{int}\), the timings are set as \(t_i=t_0+t_\mathrm{int}i\) for \(i=0,1,2,\cdots \). In many cases \(t_0\) and \(t_\mathrm{int}\) are fixed at the outset, thus they can be ignored for time series analysis. An example of a time series with regular observations is the daily price of a stock in the stock market, constituting a financial time series [185]. Such time series are often analysed by traditional techniques like the autocorrelation function, with the aim of revealing dependencies between observed values, which often show inhomogeneities and large fluctuations.

One also finds many cases in which the timings of observations are inhomogeneous, as in the case of emails sent by a user [21]. The fact that the occurrence of events is not regular in time leads to temporally inhomogeneous time series, potentially together with variation in the observed values \(z_t\). In such cases we can talk about two kinds of inhomogeneities in observed time series. On the one hand, fluctuations can be associated both with temporal inhomogeneities and with the variation of observations. On the other hand, inhomogeneities can be associated only with the timings of events, not with the observed values. This is the case for several recent datasets, e.g., those related to communication or individual transactions. In such datasets events are typically not assigned content due to privacy reasons, thus only their timings are observable. In the following Sections we will mainly focus on the latter type of time series.

An event sequence can also be aggregated by counting the number of events in consecutive time windows indexed by *k*. This constitutes a coarse-graining process for the time series.

## 2.1 Point Processes as Time Series with Irregular Timings

An event sequence of *n* events can be represented by an ordered list of event timings, i.e., \(ev(t_i)=\{t_0,t_1,\cdots ,t_{n-1}\}\), where \(t_i\) denotes the timing of the *i*th event. On the other hand, the event sequence can be depicted as a binary signal *x*(*t*) that takes a value of 1 at time \(t=t_i\), or 0 otherwise. For discrete timings, one can write the signal as

\(x(t)=\sum _{i=0}^{n-1}\delta _{t,t_i},\)

with \(\delta _{t,t_i}\) denoting the Kronecker delta.

### 2.1.1 The Poisson Process

In a Poisson process events occur independently of one another, such that the probability that *n* events occur within a bounded interval follows a Poisson distribution:

\(P(n)=\frac{(\lambda T)^n}{n!}e^{-\lambda T},\)

where \(\lambda \) denotes the event rate and *T* the length of the interval. The inter-event times of a Poisson process follow an exponential distribution, \(P(\tau )=\lambda e^{-\lambda \tau }\).

Throughout the monograph we are going to refer to two types of Poisson processes. One type, called the *homogeneous Poisson process*, is characterised by a constant event rate \(\lambda \), while the other type, called the *non-homogeneous Poisson process*, is defined such that the event rate varies over time, denoted by \(\lambda (t)\). For more precise definitions and a discussion of the properties of Poisson processes we refer the reader to the extensive literature addressing this process, e.g., Ref. [84]. We remark that Poisson processes and their variants have also been studied in terms of shot noise in electric conductors and related systems [29, 42, 179].
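
Both types of Poisson process are easily simulated; the short sketch below (function names are ours) draws a homogeneous process from exponential inter-event times and a non-homogeneous one by thinning, assuming a bounded rate function:

```python
import random

def homogeneous_poisson(rate, t_max, rng):
    """Homogeneous Poisson process on [0, t_max): successive event
    timings are obtained from exponential inter-event times."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= t_max:
            return events
        events.append(t)

def inhomogeneous_poisson(rate_fn, rate_max, t_max, rng):
    """Non-homogeneous Poisson process with time-varying rate
    rate_fn(t) <= rate_max, generated by thinning a homogeneous one."""
    return [t for t in homogeneous_poisson(rate_max, t_max, rng)
            if rng.random() < rate_fn(t) / rate_max]

rng = random.Random(42)
ev = homogeneous_poisson(rate=1.0, t_max=1000.0, rng=rng)
# The empirical event rate should be close to the nominal rate of 1.
print(len(ev) / 1000.0)
```

Seeding the generator makes the sample reproducible; the thinning step accepts a candidate event with probability proportional to the instantaneous rate.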

### 2.1.2 Characterisation of Temporal Heterogeneities

#### 2.1.2.1 The Inter-event Time Distribution

In many bursty systems the inter-event time distribution is found to take the form \(P(\tau )\sim C\tau ^{-\alpha }e^{-\tau /\tau _c}\), where *C* denotes a normalisation constant, \(\alpha \) is the power-law exponent, and \(\tau _c\) sets the position of the exponential cutoff; see Fig. 2.2b for an example of a power-law \(P(\tau )\). The power-law scaling of \(P(\tau )\) indicates the lack of any characteristic time scale, but the presence of strong temporal fluctuations, characterised by the power-law exponent \(\alpha \). Power-law distributions are also associated with the concepts of scale-invariance and self-similarity, as demonstrated in Ref. [212]. In this sense, the value of \(\alpha \) is deemed to have an important meaning, especially in terms of universality classes in statistical physics [232]. Interestingly, as will be discussed in Chap. 3, a number of recent empirical studies have reported power-law inter-event time distributions with various exponent values.

Nevertheless, we note that although recent studies have disclosed several bursty systems with broad inter-event time distributions, it is not trivial to identify the functional form that best fits the data points, nor to estimate its parameters, like the value of the power-law exponent. For the related statistical and technical issues, see Ref. [50] and references therein. In addition, the effect of the finite observation period on the evaluation of inter-event time distributions has recently been discussed in Ref. [158].
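
As a practical illustration, extracting inter-event times and building a logarithmically binned estimate of \(P(\tau )\), a common first step before any fitting, can be done as follows (a Python sketch; function names are ours):

```python
import bisect
import math

def inter_event_times(timings):
    """Inter-event times of an ordered event sequence."""
    return [t2 - t1 for t1, t2 in zip(timings, timings[1:])]

def log_binned_pdf(samples, n_bins=20):
    """Estimate a probability density on logarithmic bins, suitable for
    inspecting heavy tails; returns (bin centre, density) pairs."""
    lo, hi = min(samples), max(samples)
    edges = [lo * (hi / lo) ** (k / n_bins) for k in range(n_bins + 1)]
    counts = [0] * n_bins
    for s in samples:
        k = min(bisect.bisect_right(edges, s) - 1, n_bins - 1)
        counts[k] += 1
    total = len(samples)
    return [(math.sqrt(edges[k] * edges[k + 1]),          # geometric bin centre
             counts[k] / (total * (edges[k + 1] - edges[k])))
            for k in range(n_bins) if counts[k] > 0]
```

A straight line of the resulting pairs on a log-log plot would suggest (but not prove) power-law scaling; maximum-likelihood methods as in Ref. [50] are needed for a sound exponent estimate.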

#### 2.1.2.2 The Burstiness Parameter

The burstiness parameter *B* is defined as a function of the coefficient of variation (CV) of inter-event times, \(r\equiv \sigma /\langle \tau \rangle \), to measure temporal heterogeneity as follows:

\(B\equiv \frac{r-1}{r+1}=\frac{\sigma -\langle \tau \rangle }{\sigma +\langle \tau \rangle }.\)

*B* takes the value of \(-1\) for regular time series with \(\sigma =0\), and it equals 0 for random, Poissonian time series where \(\sigma =\langle \tau \rangle \). When the time series has more heterogeneous inter-event times than a Poisson process, the burstiness parameter is positive (\(B>0\)), taking the value of 1 only in the extremely bursty limit \(\sigma \rightarrow \infty \). This measure has found a wide range of applications because of its simplicity, e.g., in analysing earthquake records, heartbeats of human subjects, and communication patterns of individuals in social networks, as well as for testing models of bursty dynamics [74, 79, 123, 130, 154, 177, 292, 303, 305].

The value of *B* is strongly affected by the number of events *n*, especially for bursty temporal patterns [153]. For a regular time series, the CV of inter-event times, *r*, is 0 irrespective of *n*, as all the inter-event times are the same. For a random time series, one gets \(r=\sqrt{(n-1)/(n+1)}\) by imposing the periodic boundary condition on the time series; this case basically corresponds to the Poisson process. Finally, for an extremely bursty time series one has \(r=\sqrt{n-1}\), corresponding to the case when all events occur asymptotically at the same time. This implies a strong finite-size effect on the burstiness parameter for time series with a moderate number of events. We also remark that \(B=1\) is realised only when \(n\rightarrow \infty \). Let us assume that one compares the degrees of burstiness of two event sequences with different numbers of events. If the measured values of *B* are the same for both event sequences, does it really mean that those event sequences are equally bursty? This is not a trivial issue. Thus, in order to fix these strong finite-size effects, an alternative measure has been introduced for the burstiness parameter in Ref. [153]:

\(B_n\equiv \frac{\sqrt{n+1}\,r-\sqrt{n-1}}{(\sqrt{n+1}-2)\,r+\sqrt{n-1}},\)

which recovers the values \(-1\), 0, and 1 for the regular, random, and extremely bursty cases, respectively, for any *n*.
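
Both parameters are computed directly from the inter-event times; the sketch below assumes the finite-size corrected form \(B_n\) as recalled above (function names are ours):

```python
import math

def burstiness(iets):
    """Burstiness parameter B = (sigma - <tau>)/(sigma + <tau>)
    of a list of inter-event times."""
    n = len(iets)
    mean = sum(iets) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in iets) / n)
    return (sigma - mean) / (sigma + mean)

def burstiness_finite(iets):
    """Finite-size corrected burstiness B_n of Ref. [153];
    n is the number of events, i.e. len(iets) + 1."""
    n = len(iets) + 1
    mean = sum(iets) / len(iets)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in iets) / len(iets))
    r = sigma / mean
    a, b = math.sqrt(n + 1), math.sqrt(n - 1)
    return (a * r - b) / ((a - 2) * r + b)

# A perfectly regular sequence is maximally non-bursty.
print(burstiness([1.0] * 99))   # -> -1.0
```

For short sequences the two measures can differ noticeably, which is exactly the finite-size effect discussed above.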

#### 2.1.2.3 The Memory Coefficient

In Ref. [79] a memory coefficient *M* was also introduced to measure two-point correlations between consecutive inter-event times as follows:

\(M\equiv \frac{1}{n_\tau -1}\sum _{i=1}^{n_\tau -1}\frac{(\tau _i-\langle \tau \rangle _1)(\tau _{i+1}-\langle \tau \rangle _2)}{\sigma _1\sigma _2},\)

where \(n_\tau \) is the number of inter-event times, and \(\langle \tau \rangle _1\) (\(\sigma _1\)) and \(\langle \tau \rangle _2\) (\(\sigma _2\)) denote the mean (standard deviation) of the first and last \(n_\tau -1\) inter-event times, respectively. The definition can be generalised to a memory coefficient \(M_m\) between inter-event times separated by a lag *m*; the set of \(M_m\) for all *m* may fully characterise the memory effects between inter-event times.
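
The memory coefficient is simply the Pearson correlation between consecutive inter-event times, e.g.:

```python
import math

def memory_coefficient(iets):
    """Pearson correlation M between consecutive inter-event times:
    positive M means a long (short) inter-event time tends to be
    followed by a long (short) one."""
    a, b = iets[:-1], iets[1:]
    n = len(a)
    m1, m2 = sum(a) / n, sum(b) / n
    s1 = math.sqrt(sum((x - m1) ** 2 for x in a) / n)
    s2 = math.sqrt(sum((y - m2) ** 2 for y in b) / n)
    return sum((x - m1) * (y - m2) for x, y in zip(a, b)) / (n * s1 * s2)
```

An alternating sequence of short and long inter-event times gives \(M=-1\), while a slowly drifting sequence gives a positive value.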

#### 2.1.2.4 The Autocorrelation Function

For the definition of the autocorrelation function we consider the binary signal *x*(*t*) as defined in Eq. (2.2). In addition, for a proper introduction we need to define the delay time \(t_d\), which sets a time lag between two observations of the signal *x*(*t*). Then the autocorrelation function with delay time \(t_d\) is defined as follows:

\(A(t_d)=\frac{\langle x(t)x(t+t_d)\rangle _t-\langle x(t)\rangle _t^2}{\langle x(t)^2\rangle _t-\langle x(t)\rangle _t^2},\)

where \(\langle \cdot \rangle _t\) denotes the time average over the observation period.

Bursty time series typically show a power-law decaying autocorrelation function, \(A(t_d)\sim t_d^{-\gamma }\), with decay exponent \(\gamma \). Such a decay is related, via the Wiener–Khinchin theorem, to a power-law power spectral density \(P(\omega )\sim \omega ^{-\alpha _\omega }\) of the signal *x*(*t*), known as 1/*f* noise. 1/*f* noise has been ubiquitously observed in various complex systems [18], hence extensively studied for the last few decades.

These exponents are related to each other via the Hurst exponent *H*, i.e., \(\gamma =2-2H\) [135] or \(\alpha _\omega =2H-1\) [8, 248]. This indicates that the power-law decaying autocorrelation function could be explained solely by the inhomogeneous inter-event times, not by the interdependency between inter-event times. In fact, the observed autocorrelation functions measure not only the inhomogeneities in inter-event times themselves but also correlations between consecutive inter-event times of arbitrary length. Thus, these effects need to be distinguished from each other, if possible, for a better understanding of bursty behaviour. For this, another measure has recently been introduced, called the bursty train size distribution, to be discussed below.
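
For an event sequence with integer timings, the autocorrelation function of the binary signal can be estimated directly from its definition; a minimal sketch:

```python
def autocorrelation(timings, t_max, delays):
    """Autocorrelation A(t_d) of the binary event signal x(t)
    (1 at event timings, 0 otherwise) for integer delays t_d."""
    x = [0] * t_max
    for t in timings:
        x[t] = 1
    mean = sum(x) / t_max
    var = mean - mean ** 2          # x is binary, so <x^2> = <x>
    out = []
    for td in delays:
        cov = sum(x[t] * x[t + td] for t in range(t_max - td)) / (t_max - td)
        out.append((cov - mean ** 2) / var)
    return out

# A perfectly periodic sequence is fully correlated at its own period.
print(autocorrelation(list(range(0, 100, 5)), 100, [5])[0])  # -> 1.0
```

For long delays the estimator becomes noisy as fewer pairs contribute, so in practice \(A(t_d)\) is often averaged over logarithmically spaced delays.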

#### 2.1.2.5 The Bursty Train Size Distribution

Detecting bursty trains, i.e., clusters of events separated by inter-event times smaller than or equal to a time window \(\varDelta t\) but separated from other clusters by inter-event times larger than \(\varDelta t\), we can count the number of events *E* they contain, as depicted in Fig. 2.3. Note that this notion assigns a bursty train size \(E=1\) to standalone events, which occur independently from any of the previous or following events. The relevant measure of temporal correlations is the bursty train size distribution \(P_{\varDelta t}(E)\) for a fixed \(\varDelta t\). If events are independent, \(P_{\varDelta t}(E)\) must appear as follows:

\(P_{\varDelta t}(E)=F(\varDelta t)^{E-1}\left[ 1-F(\varDelta t)\right] ,\)

where \(F(\varDelta t)=\int _0^{\varDelta t}P(\tau )d\tau \) is the probability that an inter-event time is at most \(\varDelta t\).

Since this form decays exponentially in *E*, the functional form of \(P(\tau )\) is irrelevant to the functional form of \(P_{\varDelta t}(E)\), which appears as an exponential distribution for any independent event sequence. Thus any deviation of \(P_{\varDelta t}(E)\) from an exponential form indicates correlations between inter-event times. Interestingly, several empirical cases have been found to show power-law distributed train sizes, \(P_{\varDelta t}(E)\sim E^{-\beta }\). This phenomenon, termed *correlated bursts*, has been shown to characterise several systems in nature and human dynamics [144].
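
Bursty trains for a given \(\varDelta t\) can be extracted with a single pass over the ordered timings; a sketch (function names are ours):

```python
from collections import Counter

def train_sizes(timings, dt):
    """Sizes E of bursty trains: consecutive events whose inter-event
    time is at most dt belong to the same train."""
    sizes, current = [], 1
    for t1, t2 in zip(timings, timings[1:]):
        if t2 - t1 <= dt:
            current += 1
        else:
            sizes.append(current)
            current = 1
    sizes.append(current)           # close the last train
    return sizes

def train_size_distribution(timings, dt):
    """Empirical bursty train size distribution P_dt(E)."""
    sizes = train_sizes(timings, dt)
    total = len(sizes)
    return {e: c / total for e, c in Counter(sizes).items()}

print(train_sizes([0, 1, 2, 10, 11, 30], dt=2))  # -> [3, 2, 1]
```

Comparing the resulting distribution against the geometric form above, for several values of \(\varDelta t\), is the standard test for correlated bursts.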

Finally, we mention the possible effects of interdependency between inter-event times on the scaling relations between the power-law exponents of inter-event times and of the autocorrelation function, as presented in Eq. (2.22). For example, one can compare the autocorrelation function calculated for an empirical event sequence with that for the shuffled event sequence, in which correlations between inter-event times are destroyed, as shown in the lower right panels of Fig. 2.4a–c. By doing so, the effects of interdependency between inter-event times can be tested. Such effects on the scaling relation should be studied more rigorously in the future, as they are far from being fully understood; so far only a few studies have tackled this issue, e.g., Refs. [130, 144, 248, 281].

#### 2.1.2.6 Memory Kernels

Long-range memory effects can also be described by a memory kernel, often assumed to take a power-law form \(\phi (t)\sim t^{-\theta }\), where *t* is the elapsed time since a past event and \(\theta \) denotes the power-law exponent characterising the degree of memory effects. However, in general, memory kernels are also assumed to follow different functional forms, e.g., hyperbolic, exponential [191], or power-law [130]. They are commonly applied in modelling bursty systems using self-exciting point processes [196]: for a given set of past events that occurred before the time *t*, the event rate at time *t* reads as follows:

\(\lambda (t)=V(t)+\sum _{i:t_i<t}\phi (t-t_i),\)

where *V*(*t*) is the exogenous source, and \(t_i\) denotes the timing of the *i*th event. We will discuss this in more detail in Sect. 4.1.2.2.
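
A self-exciting process of this kind can be simulated with Ogata's thinning algorithm; the sketch below assumes a constant exogenous rate and an exponential memory kernel \(\phi (t)=\alpha \beta e^{-\beta t}\) for simplicity (a power-law kernel can be substituted):

```python
import math
import random

def hawkes_thinning(mu, alpha, beta, t_max, rng):
    """Simulate a self-exciting (Hawkes) point process with exogenous
    rate mu and exponential kernel phi(t) = alpha*beta*exp(-beta*t),
    via Ogata's thinning; stationary only for branching ratio alpha < 1."""
    events, t = [], 0.0
    while t < t_max:
        # With an exponential kernel the intensity only decays between
        # events, so the current intensity bounds all future values.
        lam_bar = mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                           for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            break
        lam = mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                       for ti in events)
        if rng.random() < lam / lam_bar:   # accept with prob lam/lam_bar
            events.append(t)
    return events
```

The stationary event rate is \(\mu /(1-\alpha )\), so with \(\mu =0.5\) and \(\alpha =0.5\) one expects roughly one event per unit time, arriving in visible bursts.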

#### 2.1.2.7 Other Characteristic Measures

Temporal heterogeneities can also be quantified by detrended fluctuation analysis. For a time series *x*(*t*) with \(0\le t<T\) and average value \(\langle x\rangle \), the cumulative time series is constructed by

\(y(t)=\sum _{t'=0}^{t}\left[ x(t')-\langle x\rangle \right] .\)

Then the entire period *T* is divided into segments of size *w*. For each segment, the cumulative time series is fitted by a polynomial \(y_w(t)\). Using the fitted polynomials for all segments, the mean-squared residual for the entire range of the time series is calculated as follows:

\(F(w)=\sqrt{\frac{1}{T}\sum _{t=0}^{T-1}\left[ y(t)-y_w(t)\right] ^2},\)

which typically scales with the segment size *w* as \(w^H\). Here the scaling exponent *H* is called the Hurst exponent [41].
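
A minimal DFA sketch with linear detrending (often called DFA1) illustrates the procedure; for uncorrelated noise the estimated Hurst exponent should come out close to 0.5:

```python
import math
import random

def dfa(x, window_sizes):
    """Detrended fluctuation analysis with linear detrending (DFA1):
    returns the fluctuation F(w) for each window size w, where a
    scaling F(w) ~ w^H defines the Hurst exponent H."""
    mean = sum(x) / len(x)
    y, acc = [], 0.0
    for v in x:                       # cumulative (profile) series
        acc += v - mean
        y.append(acc)
    out = []
    for w in window_sizes:
        n_seg = len(y) // w
        sq = 0.0
        for s in range(n_seg):
            seg = y[s * w:(s + 1) * w]
            tm = (w - 1) / 2.0        # mean of 0..w-1
            sm = sum(seg) / w
            denom = sum((t - tm) ** 2 for t in range(w))
            slope = sum((t - tm) * (v - sm)
                        for t, v in enumerate(seg)) / denom
            # least-squares linear fit within the segment
            sq += sum((v - (sm + slope * (t - tm))) ** 2
                      for t, v in enumerate(seg))
        out.append(math.sqrt(sq / (n_seg * w)))
    return out

# For uncorrelated noise the profile is a random walk, so F(w) ~ w^0.5.
rng = random.Random(5)
noise = [rng.random() for _ in range(4096)]
f8, f64 = dfa(noise, [8, 64])
```

Fitting \(\log F(w)\) against \(\log w\) over a range of window sizes gives a more robust estimate of *H* than the two-point slope used here.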

## 2.2 Inter-event Time, Residual Time, and Waiting Time

As for the terminology of burstiness, there is a common confusion between the definitions of inter-event time, waiting time, and residual time. Here we would like to clarify their definitions and relations to each other.

The *inter-event time* \(\tau \) is defined as the time between two consecutive events. However, the observation of an event sequence always covers a finite period of time, which has to be considered in the terminology. So let us assume an observer who begins to observe the time series of events at a random moment in time and waits for the next, first observed event to take place. The time interval between the beginning of the observation period and this next event has been called the *residual time* \(\tau _r\), also often called the *residual waiting time* or *relay time* [133]. A similar definition of the residual time is found in queuing theory in a situation when a customer arrives at a random time and waits for the server to become available [54, 56]. The residual time then is the time interval between the time of arrival and the time of being served, thus it corresponds to the remaining or residual time to the next event after a random arrival. The residual time distribution can be derived from the inter-event time distribution as

\(P(\tau _r)=\frac{1}{\langle \tau \rangle }\int _{\tau _r}^{\infty }P(\tau )d\tau .\)
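
The behaviour of the residual time can be checked numerically by dropping random observers into an event sequence and averaging the time to the next event; a sketch (function names are ours):

```python
import bisect
import random

def mean_residual_time(timings, n_obs, rng):
    """Average time from a random observer arrival to the next event,
    estimating the mean residual time <tau_r>."""
    t0, t1 = timings[0], timings[-1]
    total = 0.0
    for _ in range(n_obs):
        t = rng.uniform(t0, t1)
        j = bisect.bisect_left(timings, t)   # first event at or after t
        total += timings[j] - t
    return total / n_obs

# Bursty sequence: the mean inter-event time is 1, yet a random observer
# waits on average <tau^2>/(2<tau>) = 4.145, far above the naive 0.5.
iets = ([0.1] * 9 + [9.1]) * 50
timings, t = [0.0], 0.0
for dt in iets:
    t += dt
    timings.append(t)
rng = random.Random(3)
print(mean_residual_time(timings, 20000, rng))
```

The inflation arises because a random arrival is far more likely to land inside a long inter-event interval than a short one.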

The average residual time then reads \(\langle \tau _r\rangle =\langle \tau ^2\rangle /(2\langle \tau \rangle )\), which for broad inter-event time distributions can be much larger than the naively expected \(\langle \tau \rangle /2\). This is known as the *waiting-time paradox*, which has important consequences for dynamical processes evolving on bursty temporal systems, to be discussed in detail in Sect. 5.1.1.1. As we mentioned earlier, a common reference dynamics to quantify the heterogeneity of a bursty sequence is provided by a Poisson process. Thus we may consider a normalised average residual time after dividing \(\langle \tau _r\rangle \) by the corresponding residual time of a Poisson process \(\langle \tau _r^P\rangle \), which is simply \(\langle \tau \rangle \). This can then be written as

\(\frac{\langle \tau _r\rangle }{\langle \tau _r^P\rangle }=\frac{1+r^2}{2}, \qquad r=\frac{1+B}{1-B},\)

where *B* is the burstiness parameter as defined in Eq. (2.8). Consequently this ratio can equally well be seen as a measure of burstiness.

Contrary to the above definitions, *waiting times* are not necessarily derived from series of consecutive events; they can rather characterise the lifespan of single tasks. Tasks wait to be executed for a period depending on their priorities as well as on other, newly arriving tasks. In this way the *waiting time* \(\tau _w\), also often called the *response time* or *processing time*, is defined as the time interval a newly arrived task needs to wait before it is executed. For example, in an editorial process, each submitted manuscript gives rise to one waiting time until the decision is made [91, 127, 205], and the waiting time distribution is obtained from a number of submitted manuscripts. However, a heavy tail of the waiting time distribution \(P(\tau _w)\) implies heterogeneity of the editorial system, but not necessarily bursty dynamics of the process itself. On the other hand, the waiting time can be deduced from an event sequence, e.g., of directed interactions, like the time between receiving and responding to an email or letter. In these cases, a close relation between \(P(\tau )\) and \(P(\tau _w)\) seems to appear. Actually, it has been argued that in the case of a process with a heterogeneous waiting time distribution, the inter-event time distribution is also heterogeneous and vice versa, and can be characterised by the same exponent [21, 70, 176, 286]. Waiting times will be duly addressed in Sect. 4.1.1, where they appear as the central quantity in the definition of priority queuing models [2, 21].

## 2.3 Collective Bursty Phenomena

So far we have been discussing measures that characterise bursty behaviour at the level of single individuals. However, individuals form egocentric networks and are connected to a larger social system, which may itself show bursty dynamics and be characterised at the system level. Since individual dynamics is observed to be bursty, it may affect the system-level dynamics and the emergence of any collective phenomena, while the contrary is also true: if the collective dynamics is bursty, it must affect the temporal patterns of each individual. The structure of social systems has been commonly interpreted in terms of social networks [35, 293], where nodes are identified as individuals and links assign their interactions. Thanks to the recent access to a huge amount of digital datasets related to human dynamics and social interaction, a number of empirical findings have been accumulated on the structure and dynamics of social networks. Researchers have analysed various social networks of face-to-face interactions [63, 72, 306], emails [65, 161], mobile phone communication [30, 219], online forums [66, 108], Social Networking Services (SNS) like Facebook [279] and Twitter [168], and even massive multiplayer online games [270, 272]. These studies show that there are commonly observed features or *stylised facts* characterising the structure of social networks [115, 151, 206]; see also the summary in Table I in Ref. [125]. For example, one finds broadly distributed network quantities like node degree and link weight [4, 221], homophily [194, 210], community or modular structure [71, 83], multilayer nature [32, 156], and geographical and demographic correlations [131, 220, 223], to mention a few. All these characteristics play important roles in the dynamics of social interactions.

At the same time, such datasets allow the observation of the mechanisms and correlations driving the interaction dynamics of people. This is the subject of the recent field of *temporal networks* [101, 105, 106, 190], which identifies social networks as temporal objects, where interactions are time-varying and the static structure emerges only after aggregation over an extensive period. Temporal networks are commonly interpreted as a sequence of events, defined as triplets (*i*, *j*, *t*) indicating that a node *i* interacts with a node *j* at time *t*. The analysis of event sequences of a large number of individuals can disclose the mesoscopic structure of bursty interaction patterns and enables us to characterise burstiness at the system level as well.

### 2.3.1 Bursty Patterns in Egocentric Networks

The overall event sequence \(x_i(t)\) of an ego *i* can be defined by merging the sequences of all interactions in which *i* participates. In other words, the event sequence \(x_i(t)\) builds up from the interaction sequences on single links, \(x_{ij}(t)\), which together define the dynamics of the egocentric network. Our first question is how the bursty interactions of an ego are distributed among the different neighbours.

Two hypotheses can be made: long bursty trains of an ego either emerge from correlated interactions distributed among several neighbours, or they evolve on single links, sustained by dyadic interactions. To decide between them, one can decouple the event sequence of an ego into the interaction sequences of its links and observe how the train size distribution *P*(*E*) changes before and after this process. If the first hypothesis is true, as long trains of an ego are distributed among many links, after decoupling the trains should fall apart and their size distribution should change radically. On the other hand, if the second hypothesis is true, their size distribution should not change considerably. Using mobile phone call and SMS sequences, it has been shown in Ref. [145] that after decoupling, the distributions *P*(*E*) measured on single links are almost identical to the ones observed in individual activity sequences. In support of this observation it has been found that \({\sim }80\%\) of trains evolve on single links, almost independently of the train size. Consequently, this suggests that long correlated bursty trains are a property of links rather than of nodes, commonly induced by dyadic interactions. This study further discusses the difference between call and SMS sequences and finds that call (respectively SMS) trains are more imbalanced (respectively balanced) than one would expect from the overall communication balance of the social tie.

For each event of an ego *i*, the context of social interactions can be associated to a neighbour *j* in the egocentric network. Then the question is how much *contextual bursts*, which evolve in the interaction sequences of single links \(x_{ij}(t)\), determine the *collective bursts* observable in the overall interaction sequence \(x_i(t)\) of the ego *i*. This question can be addressed on the level of inter-event times. As depicted in Fig. 2.5, let us denote collective inter-event times in \(x_i(t)\) as \(\tau ^{(i)}\), and contextual inter-event times in \(x_{ij}(t)\) as \(\tau ^{(ij)}\). It is straightforward to see that a contextual inter-event time typically comprises multiple collective inter-event times as follows:

\(\tau ^{(ij)}=\sum _{k=1}^{n}\tau _k^{(i)},\)

where *n* denotes the number of collective inter-event times without the context *j* between two consecutive events with *j*; for example, one finds \(n=3\) in Fig. 2.5 between the first and second observed interactions with context *B*. The relation between \(P(\tau ^{(ij)})\) and \(P(\tau ^{(i)})\) for uncorrelated inter-event times has been studied analytically and numerically in Ref. [128], where both \(P(\tau ^{(ij)})\) and \(P(\tau ^{(i)})\) are assumed to have power-law forms with exponents \(\alpha '\) and \(\alpha \), respectively. For deriving the scaling relation between \(\alpha '\) and \(\alpha \), another power-law distribution is assumed for *n* in Eq. (2.34), i.e., the number of collective inter-event times within one contextual inter-event time, as \(P(n)\sim n^{-\eta }\). The distribution of *n* is related to how the ego distributes her limited resources, like time, among her neighbours. One can then write \(P(\tau ^{(ij)})\) as the probability that *n* \(\tau ^{(i)}\)s sum to \(\tau ^{(ij)}\), with \(\tau _0\) denoting the lower bound of the inter-event times \(\tau ^{(i)}\). By solving this relation, the scaling relation between \(\alpha '\), \(\alpha \), and \(\eta \) is obtained [128].

This framework can be generalised by associating events to a broader context, e.g., by considering interactions between individuals *i* and *j* who both belong to the same context or group \(\varLambda \). Then one can study the relation between statistical properties at different levels of contextual grouping. For example, an empirical analysis using an online forum dataset was recently performed in Ref. [224] to relate individual bursty patterns to forum-level bursty patterns.

One can also characterise bursty dynamics through the distributions of the activity, i.e., the number of events of a node *i* within a given period, and of inter-event times, measured not in real time but in event time, and not for egos but for social ties. In this case the inter-event time is defined as the number of events between two consecutive interactions of the ego and one specific neighbour (similar to *n* in Eq. (2.34)). Distributions of these dynamical quantities can also be approximated by power laws, with exponents assigned as \(1+\alpha _a\) for the activity and \(1+\alpha _\tau \) for the inter-event times. It is first shown that the degree of a node *i*, denoted by \(k_i\) and observed for a period \([t_1,t_2]\), increases with the length of the observation period in a way controlled by a node-dependent parameter \(\kappa _i\), called the sociability of node *i*. It is further argued that the degree and weight distribution exponents can be determined by the dynamical parameters, with *u* being a parameter capturing the variability of the sociability \(\kappa \). The authors support these scaling relations by introducing scaling functions to collapse the corresponding distributions obtained from various human interaction datasets.

### 2.3.2 Bursty Temporal Motifs

Recurrent patterns of correlated events, so-called *temporal motifs*, are arguably induced by group conversations, information processing, or organisation like a common event, and can be associated to burstiness at the mesoscopic level of networks. The emergence of such group-level bursty events is rather rare and strongly depends on the observed communication channel and the type of induced events. However, it has been shown that some of them appear with a significantly larger frequency as compared to random reference models.

Temporal motifs are defined in temporal networks. For a schematic example, see Fig. 2.6a. Here interactions between nodes occur at different timings and are interpreted as events assigned with time stamps. For a more detailed definition and characterisation of temporal networks we refer the reader to Refs. [105, 190]. Temporal motifs consist of \(\varDelta t\) *-adjacent events* in the temporal network, which share at least one common node and happen within a time window \(\varDelta t\). Two events that are not directly \(\varDelta t\)-adjacent might be \(\varDelta t\) *-connected* if there is a sequence of events connecting the two, which are successive in time and pairwise \(\varDelta t\)-adjacent. A connected temporal subgraph is then a set of events in which each pair of events is \(\varDelta t\)-connected, as depicted in Fig. 2.6b–e. To define temporal motifs we further restrict our definition to *valid temporal subgraphs*, in which, for each node in the subgraph, the events involving that node are consecutive, e.g., as in Fig. 2.6b–d. Note that for the final definition of temporal motifs we consider only *maximal valid temporal subgraphs*, which contain all events that are \(\varDelta t\)-connected to the included events. For a more precise definition, see Refs. [163, 164]. Also note that an alternative definition of temporal motifs has been proposed recently, where motifs are defined by events that all appear within a fixed time window [225].

One way to detect temporal motifs is by interpreting them as static directed coloured graphs and finding all isomorphic structures with equivalent event ordering in a temporal network [163]. The significance of the detected motifs can be inferred by comparing the observed frequencies to those calculated in reference models where temporal and causal correlations have been removed. Such analysis has shown [163] that the most frequent motifs in mobile phone communication sequences correspond to dyadic bursty interaction trains on single links. On the other hand, the least frequent motifs are formed by non-causal events, suggesting a strong dependence between causal correlations and bursty phenomena.

### 2.3.3 System Level Characterisation

A system-level characterisation of temporal heterogeneities is provided by the *temporal network sparsity* [230]. This measure counts the number of microscopic configurations associated with the macroscopic state of a temporal network, a concept of multiplicity well known in statistical physics. More specifically, in a temporal network for a given time window, events can be distributed over the links of the corresponding static structure. Here we denote a link between nodes *i* and *j* as *ij*, and the set of all links as *L*. Thus, for a time window one can measure the fraction of events on a given link *ij*, denoted by \(p_{ij}\), and compute the Shannon entropy over the links \(ij\in L\) as:

\(H=-\sum _{ij\in L}p_{ij}\ln p_{ij}.\)

From this entropy one can define the *effective number of links* as \(e^{H}\), which equals the actual number of links when events are evenly distributed and decreases as events concentrate on a few links; comparing it to the number of links in the aggregated network defines the *temporal network sparsity*.
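
The entropy of the event distribution over links and the corresponding effective number of links \(e^H\) can be computed directly from (*i*, *j*, *t*) triplets; a sketch treating links as undirected (function names are ours):

```python
import math
from collections import Counter

def link_entropy(events):
    """Shannon entropy H = -sum p_ij ln p_ij of the fraction of events
    per link, from (i, j, t) event triplets; links taken as undirected."""
    counts = Counter(tuple(sorted((i, j))) for i, j, _ in events)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def effective_links(events):
    """Effective number of links e^H: equals the actual number of links
    when events are spread evenly, and 1 when they all sit on one link."""
    return math.exp(link_entropy(events))
```

Computed over sliding time windows, this quantity tracks how concentrated the system-level activity is at each moment.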

## 2.4 Cyclic Patterns in Human Dynamics

To characterise cyclic patterns, consider the event sequence of the whole system at time *t*, denoted by *x*(*t*), for the entire period \(0\le t<T\). One may be interested in a specific cycle, like the daily or weekly one, with period denoted by \(T_{\circlearrowleft }\). Then, for a given period \(T_{\circlearrowleft }\), the event rate with \(0\le t<T_{\circlearrowleft }\) can be defined as

\(\rho (t)=\frac{T_{\circlearrowleft }}{X}\sum _{k=0}^{T/T_{\circlearrowleft }-1}x(kT_{\circlearrowleft }+t),\)

where *X* denotes the total number of events and the sum runs over the cycles indexed by *k*. Then using the identity \(\rho (t)dt=\rho ^*(t^*)dt^*\) with the deseasoned event rate \(\rho ^*(t^*)=1\), we can get the deseasoned time \(t^*(t)\) as

\(t^*(t)=\int _0^t\rho (t')dt'.\)

Using this deseasoning method, the burstiness parameter *B* has been measured for both the original and the deseasoned mobile phone call series, finding overall decreased yet positive values of *B*, implying that the bursts remain after deseasoning. In addition, the memory coefficients \(M_m\), the bursty train size distributions \(P_{\varDelta t}(E)\), and the autocorrelation function \(A(t_d)\) can also be measured using the deseasoned event sequence \(\{t^*(t_i)\}\) for comparison to the original ones.
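
The deseasoning transformation can be sketched as follows; here the cyclic event rate \(\rho (t)\) is estimated on discrete phase bins (the binning is our simplification of the continuous rate used above):

```python
def deseason(timings, t_cycle, n_bins):
    """Deseason event timings: estimate the cyclic event rate rho(t)
    on n_bins phase bins over a cycle of length t_cycle (normalised to
    mean 1), then map each t to t*(t) = integral of rho from 0 to t."""
    width = t_cycle / n_bins
    counts = [0] * n_bins
    for t in timings:
        counts[min(int((t % t_cycle) / width), n_bins - 1)] += 1
    total = sum(counts)
    rho = [c * n_bins / total for c in counts]  # mean rate 1 by construction
    cum = [0.0]                                 # cumulative integral at bin edges
    for r in rho:
        cum.append(cum[-1] + r * width)         # cum[-1] ends at t_cycle

    def t_star(t):
        k, phase = divmod(t, t_cycle)           # completed cycles + phase
        b = min(int(phase / width), n_bins - 1)
        return k * cum[-1] + cum[b] + rho[b] * (phase - b * width)

    return [t_star(t) for t in timings]
```

If events are already uniform over the cycle, \(\rho (t)=1\) and the mapping reduces to the identity; peaked daily profiles stretch the busy hours and compress the quiet ones.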

The same approach applies at the level of nodes and links. Let the event sequence of a node *i* at time *t* be denoted by \(x_i(t)\). Then, for a given period \(T_{\circlearrowleft }\), the event rate with \(0\le t<T_{\circlearrowleft }\) is defined for each node *i* as \(\rho _i(t)=\frac{T}{X_i}x_i(t)\), with \(X_i\) denoting the total number of events of the node *i*. We assign the timing of the *k*th event between *i* and *j* by \(t^{(ij)}_k\) and get the deseasoned inter-event time corresponding to \(\tau ^{(ij)}_k=t^{(ij)}_k-t^{(ij)}_{k-1}\) as

\(\tau ^{*(ij)}_k=t^*(t^{(ij)}_k)-t^*(t^{(ij)}_{k-1}),\)

so that the deseasoned inter-event time effectively counts the number of events without the context *j* between two consecutive events with the context *j*. Thus, the fully deseasoned real time-frame is simply translated into the ordinal time-frame. The characterisation of bursts in terms of the ordinal time-frame has also been studied in other contexts, e.g., in terms of the activity clock [77], the relative clock [311], and "proper time" [69, 70]. In these works, the elapsed time is counted in terms of the number of events instead of the real time.

### 2.4.1 Remark on Non-stationarity

So far, the time series has been assumed to be stationary, either explicitly or implicitly. As stationarity by definition indicates symmetry under time translation, all non-Poissonian processes could be considered non-stationary, hence the various time series analysis methods mentioned above could not be applied to bursty temporal patterns. However, the definition of stationarity can be relaxed by allowing non-stationary behaviour only at some specific time scales: for example, human individuals can show a daily cycle in their temporal patterns, while they might keep their daily routines for several months or longer. Then their temporal patterns can be considered stationary only at time scales that are longer than one day and shorter than several months. This relaxed definition of stationarity could still be misleading, given the fact that most bursty phenomena show a scale-free, hierarchical nature in terms of time scales; nevertheless, we can apply various time series analysis methods as long as the time series looks stationary at least at some specific time scales. In this sense, the deseasoning method or detrended fluctuation analysis and its variants can be useful for removing non-stationary temporal patterns from the original time series, hence allowing us to investigate the bursty nature of time series without being concerned with the non-stationarity issue. This is an important issue but has been largely ignored in many works, except for some recent studies, mostly in relation to dynamical processes on networks [102, 107].