1 Introduction and motivation

One way to study economic history is to construct and analyze historical time series data; see for example van Zanden and van Leeuwen (2012), amongst many others. A particularly interesting period to study concerns the times of the Atlantic slave trade. One frequently examined aspect is the contribution of the slave trade to the size of an economy. Recent important studies are Eltis and Engerman (2000), Fatah-Black and van Rossum (2015) and Eltis et al. (2016). Another recent study is Brandon and Bosma (2019), who show that 5 to 10% of Gross Domestic Product (GDP) in Holland around 1770 was based on the slave trade; see Table 1.

Table 1 The variables

An important feature to study concerns the trends in the data. Did the contribution of the slave trade to GDP grow at a steady pace, as along a deterministic trend? Or did that contribution jump to plateaus due to structural breaks, perhaps caused by technological developments? If growth followed a deterministic trend, then shocks to the data were not persistent. If the growth pattern followed a sequence of structural breaks, then those shocks were persistent. Hence, it is of interest to study the persistence properties of the historical data.

Ideally, the constructed historical data are equally spaced, for example per year or per ten years, as then basic time series tools can be used to study the properties of the data. In the present paper the focus is on the analysis of unequally spaced data, which can also occur in historical research, as will become evident below.

2 Introductory remarks

An important property of time series data is the so-called persistence of shocks. Such persistence is perhaps best illustrated by considering the following simple time series model for a variable \(y_{t}\), which is observed for a sequence of \(T\) years, \(t = 1,2, \ldots ,T\), that is,

$$y_{t} = \alpha y_{t - 1} + \varepsilon_{t}$$

This model is called a first-order autoregression, with acronym AR(1). The \(\varepsilon_{t}\) is a series of shocks (or news) that drives the data over time; these shocks have mean 0 and common variance \(\sigma_{\varepsilon }^{2}\), and they are uncorrelated over time. In other words, future shocks or news cannot be predicted from past shocks or news. The \(\alpha\) is an unknown parameter that needs to be estimated from the data. Usually one relies on the ordinary least squares (OLS) method to estimate this parameter; see for example Franses et al. (2014, Chapter 3) for details.
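To make the estimation step concrete, the following minimal sketch in Python (not part of the original study; the chosen \(\alpha = 0.7\) and all names are illustrative) simulates an AR(1) series and estimates \(\alpha\) by OLS without an intercept.

```python
import numpy as np

rng = np.random.default_rng(seed=12345)

# Simulate T observations from an AR(1) with alpha = 0.7 and y_0 = 0
T, alpha = 200, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = alpha * y[t - 1] + rng.standard_normal()

# OLS estimate of alpha: regress y_t on y_{t-1} without an intercept,
# so the estimation sample runs over t = 2, ..., T
y_lag, y_cur = y[:-1], y[1:]
alpha_hat = (y_lag @ y_cur) / (y_lag @ y_lag)
print(f"OLS estimate of alpha: {alpha_hat:.3f}")  # close to 0.7
```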

In an AR(1) model, the persistence of shocks to \(y_{t}\) is reflected by (functions of) the parameter \(\alpha\). This is best understood by explicitly writing down all the observations on \(y_{t}\) when the AR(1) is the model for these data. The first observation is then

$$y_{1} = \alpha y_{0} + \varepsilon_{1}$$

where \(y_{0}\) is some known starting value, that can be equal to 0 or not. In practice this starting value is usually taken as the first available observation, and then the estimation sample runs from \(t = 2,3,4 \ldots ,T\). The second observation is

$$y_{2} = \alpha y_{1} + \varepsilon_{2} = \alpha^{2} y_{0} + \varepsilon_{2} + \alpha \varepsilon_{1}$$

where the expression on the right-hand side now incorporates the expression for \(y_{1}\). When this recursive inclusion of past observations is continued, we have for any \(y_{t}\) observation that

$$y_{t} = \alpha^{t} y_{0} + \varepsilon_{t} + \alpha \varepsilon_{t - 1} + \alpha^{2} \varepsilon_{t - 2} + \alpha^{3} \varepsilon_{t - 3} + \ldots + \alpha^{t - 1} \varepsilon_{1}$$

This expression shows that the immediate impact of a shock \(\varepsilon_{t}\) is equal to 1. The impact of a shock one period ago (which is \(\varepsilon_{t - 1}\)) is \(\alpha\) and the impact of a shock \(j\) periods ago is \(\alpha^{j}\). The total effect of a shock if \(t \to \infty\) is thus

$$1 + \alpha + \alpha^{2} + \alpha^{3} + \ldots = \frac{1}{1 - \alpha }$$

when \(\left| \alpha \right| < 1\). So, when \(\alpha = 0.5\), the total effect of a shock is 2. When \(\alpha = 0.9\), the total effect is 10. So, when \(\alpha\) approaches 1, the impact gets larger. When \(\alpha = 1\), the total effect is infinite. At the same time, when \(\alpha = 1\), each shock in the past has the same permanent effect 1, as \(1^{j} = 1\). In that case, shocks are said to have a permanent effect.
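As a quick numerical check of this geometric-sum argument (an illustrative snippet, not part of the original text), one can compare a truncated sum of the impulse responses with the closed form \(1/(1 - \alpha)\):

```python
# Impulse responses of an AR(1) are 1, alpha, alpha^2, ..., a geometric series
for alpha in (0.5, 0.9):
    truncated_sum = sum(alpha ** j for j in range(10_000))
    print(alpha, round(truncated_sum, 3), round(1 / (1 - alpha), 3))
# alpha = 0.5 gives total effect 2; alpha = 0.9 gives total effect 10
```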

One may also be interested in a so-called duration interval. For example, a 95% duration interval is the time period \(\tau_{0.95}\) within which 95% of the cumulative or total effect of a shock has occurred. It is defined by

$$\tau_{0.95} = \frac{{{\text{log}}\left( {1 - 0.95} \right)}}{\log \left( \alpha \right)}$$

where log denotes the natural logarithm. When \(\alpha = 0.5\), \(\tau_{0.95} = 4.32\), and when \(\alpha = 0.9\), \(\tau_{0.95} = 28.4\). These persistence measures are informative about how many years (or periods) shocks last.
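The duration interval is straightforward to compute; the helper below (an illustrative sketch, with a hypothetical function name) reproduces the two values just quoted.

```python
import math

def duration_interval(alpha: float, p: float = 0.95) -> float:
    """Number of periods within which a fraction p of the total effect occurs."""
    return math.log(1.0 - p) / math.log(alpha)

print(round(duration_interval(0.5), 2))  # 4.32
print(round(duration_interval(0.9), 1))  # 28.4
```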

3 Motivation of this paper

In this paper the focus is on persistence measures for data that do not form a connected sequence of years but instead contain missing observations at irregular intervals. Consider for example the data on Gross Domestic Product (GDP) in Holland for the sample 1738–1779 in Fig. 1. In principle the sample size is 42, but various years of data are missing, and hence the sample effectively covers 24 years. Take for example the data in the final column of Table 2, which concern the weight of slave-based activities in GDP Holland for the sample 1738–1779; these data are in Fig. 2. The issue is how to construct persistence measures, that is, functions of \(\alpha\) as above, when such irregularly spaced data follow a first-order autoregression.

Fig. 1 Total size GDP of Holland, 1738–1779

Table 2 The data
Fig. 2 Weight of slave-based activities in GDP Holland, 1738–1779

The paper proceeds as follows. The next section presents a useful model for unevenly spaced data, together with a step-by-step illustration of how to implement the method, which can be done using any statistical package. The empirical section applies the method to ten variables with irregularly spaced data, all of which appeared in a recent study by Brandon and Bosma (2019) on the economic impact of the Atlantic slave trade. The final section concludes.

4 Methodology

The starting point of our analysis is the representation of an AR(1) process given in Robinson (1977) (see also for example Schulz and Mudelsee, 2002). Suppose an AR(1) process is observed at times \(t_{i}\) where \(i = 1,2,3, \ldots ,N\). A general expression for an AR(1) process with arbitrary time intervals is

$$y_{{t_{i} }} = \alpha_{i} y_{{t_{i - 1} }} + \varepsilon_{{t_{i} }}$$
(1)

with

$$\alpha_{i} = {\text{exp}}\left( { - \frac{{t_{i} - t_{i - 1} }}{\tau }} \right)$$
(2)

where \(\tau\) scales the memory; see Robinson (1977). For ease of analysis, it is assumed here that \(\varepsilon_{t_{i}}\) is an uncorrelated white noise process with mean 0 but with time-variation in the variance. This means that in practice one should correct for this heteroskedasticity by using the Newey and West (1987) HAC estimator.

One may continue with (1) and (2), but it may be easier to define

$$\alpha = {\text{exp}}\left( { - \frac{1}{\tau }} \right)$$

With this definition, the general AR(1) model can be written as

$$y_{{t_{i} }} = \alpha^{{t_{i} - t_{i - 1} }} y_{{t_{i - 1} }} + \varepsilon_{{t_{i} }}$$
(3)

When the data are regularly spaced, \(t_{i} - t_{i - 1} = 1\) and this model collapses into

$$y_{t} = \alpha y_{t - 1} + \varepsilon_{t}$$

which is the standard AR(1) model above. Or, suppose the data are unequally spaced because only the even-numbered observations are sampled and all odd-numbered observations are missing; then \(t_{i} - t_{i - 1} = 2\), and the model reads

$$y_{t} = \alpha^{2} y_{t - 2} + \varepsilon_{t}$$

Before one proceeds with estimating the parameter in (3), one first needs to demean and detrend the data, see Robinson (1977).
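To see what data from model (3) look like, the following sketch simulates an irregularly spaced AR(1). The observation years are hypothetical (not the actual 1738–1779 sample), and the shocks are taken homoskedastic for simplicity, whereas the model above allows time-varying variance.

```python
import numpy as np

rng = np.random.default_rng(seed=2019)

# Hypothetical observation years with irregular gaps
years = np.array([1738, 1740, 1741, 1745, 1748, 1754, 1760, 1767, 1770, 1779])

alpha = 0.8  # implies tau = -1 / log(alpha), roughly 4.5 years
x = np.zeros(len(years))
for i in range(1, len(years)):
    gap = years[i] - years[i - 1]  # t_i - t_{i-1}
    x[i] = alpha ** gap * x[i - 1] + rng.standard_normal()  # model (3)
```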

5 Estimation

Given a sample \(\{ t_{i} , y_{t_{i}} \}\), one can use Nonlinear Least Squares (NLS) to estimate \(\alpha\) (and hence \(\tau\)). Table 3 provides the key variables relevant for estimation concerning the variable in Fig. 2. The first column gives the demeaned and detrended irregularly spaced time series, that is, \(x_{t_{i}}\), which follows from the OLS regression

$$y_{{t_{i} }} = \mu + \delta t + x_{{t_{i} }}$$

where \(t = 1,2,3, \ldots ,T\) with \(T = 42\) here. The demeaned and detrended data are in Fig. 3. The next column in Table 3 contains \(t_{i} - t_{i - 1}\), labeled DIFT. The last column of Table 3 gives the lagged variable \(x_{t_{i - 1}}\). With this new variable, one can apply NLS to

$$x_{t_{i}} = \alpha^{t_{i} - t_{i - 1}} x_{t_{i - 1}} + u_{t_{i}}$$

and obtain an estimate of \(\alpha\) and an associated HAC standard error.

Table 3 Numerical example. PGDPDMDT denotes the weight of slave-based activities in GDP Holland, after demeaning (DM) and detrending (DT). DIFT is \(t_{i} - t_{i - 1}\)

Fig. 3 Weight of slave-based activities in GDP Holland, demeaned and detrended (DMDT), 1738–1779
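A minimal implementation of this two-step procedure might look as follows, here in Python with SciPy; the function name is hypothetical, and note that `curve_fit` returns a plain (non-HAC) standard error, so the Newey and West (1987) correction used in the paper would still have to be applied separately.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_irregular_ar1(years, y):
    """Step 1: demean/detrend y by OLS on a linear trend.
    Step 2: estimate alpha in x_{t_i} = alpha^(t_i - t_{i-1}) x_{t_{i-1}} + u_{t_i} by NLS."""
    t = years - years[0] + 1.0                    # trend value at each observed year
    Z = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)  # OLS estimates of mu and delta
    x = y - Z @ beta                              # demeaned and detrended series x_{t_i}

    gaps = np.diff(years).astype(float)           # DIFT: t_i - t_{i-1}

    def model(data, alpha):
        gap, x_lag = data
        return alpha ** gap * x_lag

    est, cov = curve_fit(model, (gaps, x[:-1]), x[1:], p0=[0.5], bounds=(0.0, 1.0))
    return est[0], float(np.sqrt(cov[0, 0]))      # alpha estimate and its plain standard error
```

Applied to the data underlying Table 3, the first return value would correspond to the kind of estimate reported in Table 5 below.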

6 Illustration

Let us see how this works out for the ten historical series in Table 2, which are taken from Brandon and Bosma (2019, Annex page XXX). Table 4 reports the estimation results for the auxiliary regression for demeaning and detrending. Two series, Sugar refinery and Army and Navy, do not seem to have a trend, as the associated parameter is not significant at the 5% level. Even so, we use the residuals of the auxiliary regressions in the subsequent analysis for all series.

Table 4 Regression on intercept and trend (with estimated standard errors in parentheses) using the regression \(y_{{t_{i} }} = \mu + \delta t + x_{{t_{i} }}\)

Table 5 reports on the estimated \(\alpha\) parameters. The estimates range from 0.278 (Total size GDP of Holland) to 0.907 (Sugar refinery). Comparing the estimated parameters with their associated HAC standard errors, we see that 0 is included in the 95% confidence interval only for Total size GDP of Holland. So, this variable fully follows a deterministic trend.

Table 5 Estimate of persistence (with estimated HAC standard errors in parentheses, Newey and West, 1987) using NLS to the regression model \(x_{{t_{i} }} = \alpha^{{t_{i} - t_{i - 1} }} x_{{t_{i - 1} }} + u_{{t_{i} }}\)

Table 6 presents the estimated persistence of shocks (news), measured by the 95% duration interval \(\tau_{0.95}\) and by \(\tau\). Clearly, persistence is largest for Sugar refinery and Notaries. The parameter for Notaries, 0.862 (Table 5), is very close to 1 given its HAC standard error, so one might even claim that shocks to this sector in the observed period were permanent.

Table 6 Measures of persistence, measured in years
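Given an estimate of \(\alpha\), the persistence measures in years follow directly from inverting \(\alpha = \exp(-1/\tau)\) together with the duration-interval formula. The sketch below (illustrative only; it ignores estimation uncertainty) applies these conversions to the two point estimates quoted above from Table 5.

```python
import math

def tau_from_alpha(alpha: float) -> float:
    return -1.0 / math.log(alpha)  # inverts alpha = exp(-1/tau)

def tau_95(alpha: float) -> float:
    return math.log(0.05) / math.log(alpha)

# Point estimates quoted in the text (Table 5)
for name, a in [("Sugar refinery", 0.907), ("Notaries", 0.862)]:
    print(f"{name}: tau = {tau_from_alpha(a):.1f} years, tau_0.95 = {tau_95(a):.1f} years")
```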

7 Conclusion

This paper has introduced to the literature on economic history a measure of persistence that is particularly useful when the data are irregularly spaced. An illustration with ten historical series on the impact and contribution of the slave trade in Holland over 1738–1779 showed the merits of the methodology.

When the question is addressed whether the contribution of the slave trade to GDP grew at a steady pace, as along a deterministic trend, or whether that contribution jumped to plateaus due to structural breaks, perhaps caused by technological developments, the following conclusion can be drawn. The persistence in the variable “Weight of slave-based activities in GDP Holland”, as measured by the parameter in an AR(1) regression, is equal to 0.536 with HAC standard error 0.214. This persistence is not equal to 1, meaning that there is no sign of occasional structural breaks with a long-lasting effect. Hence, in the considered period, the contribution to GDP grew steadily along a deterministic pattern.

Further applications should emphasize the practical relevance of the method. Also, an extension to an autoregressive process of higher order could be relevant, in order to provide additional measures of persistence. An extension to fractionally integrated processes is also relevant. Finally, as a further technical issue, one may want to formally test whether \(\alpha = 1\). This amounts to a so-called test for a unit root, for which the asymptotic theory differs from the standard case; see for example Chapter 4 of Franses et al. (2014).