
1 Introduction

Bayesian uncertainty quantification is useful for calibrating physical models to observed data. As well as inferring the parameters that produce the best fit between a model and observed data, Bayesian methods can also identify the range of parameters that are consistent with observations and allow prior beliefs to be incorporated into inferences. This means that not only can predictions be made, but their uncertainty can also be quantified. In demography, Bayesian analysis is used to forecast the global population [7]. In defence systems, it is used to track objects from radar signals [1]. In computational neuroscience, it is used to compare different models of brain connectivity and to estimate physiological parameters in mechanistic models [13]. Many more examples can be found in the references of [2, 6, 15].

We focus here on problems which require the use of Markov Chain Monte Carlo (MCMC), a widely applicable methodology for generating samples approximately drawn from the posterior distribution of model parameters given observed data. MCMC is useful for problems where a parametric closed-form solution for the posterior distribution cannot be found. MCMC became popular in the statistical community with the re-discovery of Gibbs sampling [26], and the development of the BUGS software [15]. More recently it has been found that methods which use derivatives of the posterior distribution with respect to model parameters, such as the Metropolis Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo (HMC), tend to generate samples more efficiently than methods which do not require derivatives [12]. HMC is used in the popular Stan software [4]. From the perspective of a C++ programmer, the main limitation of Stan is that it may take a significant investment of effort to get started: either the C++ code has to be translated into the Stan modelling language, or the C++ code can be called from Stan, in which case it may be challenging to (efficiently) obtain the derivatives that Stan needs in order to sample efficiently.

The software that we present includes (i) our own implementation of a derivative-based MCMC sampler called simplified manifold MALA (smMALA), which can easily be used in conjunction with C++ codes for Bayesian data analysis, and (ii) Stan’s MCMC sampler with derivatives computed using dco/c++, an industry-standard tool for efficient derivative computation.

An alternative approach to the one presented in this paper would be simply to use Stan as a stand-alone tool, without the smMALA sampler and without dco/c++. Determining the most efficient MCMC sampler for a given problem is still an active area of research, but, at least within computational neuroscience, it has been found that smMALA performs better than HMC methods on certain problems [25]. Determining the most appropriate method for computing derivatives depends on both the user and the problem at hand. In many applications Algorithmic Differentiation (AD) is needed for the reasons given in Sect. 2.3. The Stan Math Library includes an open-source AD tool. Commercial AD tools such as dco/c++ offer a richer set of features than open-source tools, and these features may be needed in order to optimize derivative computations. For example, computations done using the Eigen linear algebra library [10] can be differentiated either using the Stan Math Library or using dco/c++, but there are cases where dco/c++ computes derivatives more efficiently than the Stan Math Library [22]. The aim of the software we present is to offer a range of options that make it easy both to get started and to tune performance.

2 Methods for Spectral Time-Series Analysis

2.1 Whittle Likelihood

The software presented in this paper is targeted at differential equation models with a stable equilibrium point and stochastic input. We refer to such models as stable SDEs. The methods that are implemented assume that the system is operating in a regime where we can approximate the dynamics through linearization around the stable fixed point. If the time-series data is stationary, this is a reasonable assumption. Note that the underlying model may be capable of operating in nonlinear regimes such as limit cycles or chaos in addition to approximately linear dynamics. However, parameter estimation using data in nonlinear regimes quickly becomes intractable (see Chapter 2 of [16]). The stability and linearity assumptions are commonly made in the computational neuroscience literature, see for example [18].

In order to simplify the presentation we illustrate the software using a linear state-space model, which is of the form,

$$\begin{aligned} d\mathbf {X}(t) = A\, \mathbf {X}(t)dt + \mathbf {P}(t), \end{aligned}$$
(1)

where the term \(A\, \mathbf {X}(t)\) represents the deterministic evolution of the system and \(\mathbf {P}(t)\) represents the noisy input. The example we analyze in this paper is the noise-driven harmonic oscillator, which is a linear state-space model with

$$\begin{aligned} A = \begin{pmatrix} 0 & 1 \\ -\omega _0^2 & -2 \zeta \omega _0 \end{pmatrix}, \quad \mathbf {P}(t) = \begin{pmatrix} 0 \\ dW(t) \end{pmatrix}, \end{aligned}$$
(2)

and where dW(t) represents a white noise process with variance \(\sigma _{in}^2\). The observations are modelled as \(Y_k = X_0(k\cdot \varDelta t) + \epsilon _k\) with \(\epsilon _k \sim N(0, \sigma _{obs}^2)\).
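As an illustration of how such synthetic data can be generated, the following is a minimal sketch of an Euler–Maruyama simulation of Eqs. (1)–(2) followed by noisy observation of the first state component; the parameter values are placeholders chosen only for the example (the values actually used in Sect. 3 are listed in Table 1).

```cpp
#include <cmath>
#include <random>
#include <vector>

// Minimal sketch: Euler-Maruyama simulation of the noise driven harmonic
// oscillator with noisy observations. Parameter values are illustrative.
std::vector<double> simulate_observations() {
    const double omega0 = 80.0, zeta = 0.1;        // oscillator parameters
    const double sigma_in = 1.0, sigma_obs = 0.1;  // input / observation noise
    const double dt = 0.01;                        // time-step
    const int n = 2000;                            // duration = n * dt = 20.0

    std::mt19937 gen(1);
    std::normal_distribution<double> gauss(0.0, 1.0);

    double x0 = 0.0, x1 = 0.0;                     // state: position, velocity
    std::vector<double> y(n);
    for (int k = 0; k < n; ++k) {
        const double dW = sigma_in * std::sqrt(dt) * gauss(gen);  // Var = sigma_in^2 * dt
        const double x0_new = x0 + x1 * dt;
        const double x1_new = x1 + (-omega0 * omega0 * x0 - 2.0 * zeta * omega0 * x1) * dt + dW;
        x0 = x0_new;
        x1 = x1_new;
        y[k] = x0 + sigma_obs * gauss(gen);        // Y_k = X_0(k * dt) + eps_k
    }
    return y;
}
```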

Our aim is to infer model parameters (\(\omega _0, \zeta , \sigma _{in}\)) from time-series data. This could be done in the time-domain using a Kalman filter, but it is often more computationally efficient to do inference in the frequency domain [3, 17]. In the case where we only have a single output (indexed by i) and a single input (indexed by j), we can compute a likelihood of the parameters \(\theta \) in the frequency domain through the following steps.

  1.

    Compute the (left and right) eigendecomposition of A, such that,

    $$\begin{aligned} A \mathcal {R} = \mathcal {R} \varLambda , \quad \mathcal {L} A = \varLambda \mathcal {L}, \quad \mathcal {L} \mathcal {R} = \mathrm {diag}(c) \end{aligned}$$
    (3)

    where \(\mathrm {diag}(c)\) is a diagonal matrix, such that \(c_i\) is the dot product of the ith left eigenvector with the ith right eigenvector.

  2.

    Compute the ij-th element of the transfer matrix for frequencies \(\omega _1,\ldots ,\omega _K\),

    $$\begin{aligned} \mathcal {T}(\omega ) = \mathcal {R}\ \mathrm {diag}\bigg [ \frac{1}{c_k (i\omega - \lambda _k)} \bigg ]\ \mathcal {L}. \end{aligned}$$
    (4)
  3.

    Evaluate the spectral density for component i of \(\mathbf {X}(t)\), \(f_{X_i}(\omega )\), and the spectral density for the observed time-series, \(f_{Y}(\omega )\),

    $$\begin{aligned} f_{X_i}(\omega )&= |\mathcal {T}_{ij}(\omega )|^2\ f_{P_j}(\omega ), \end{aligned}$$
    (5)
    $$\begin{aligned} f_{Y}(\omega )&= f_{X_i}(\omega ) + \sigma _{obs}^2 \varDelta t, \end{aligned}$$
    (6)

    where \(f_{P_j}(\omega )\) is the spectral density for component j of \(\mathbf {P}(t)\).

  4.

    Evaluate the Whittle likelihood,

    $$\begin{aligned} p(y_0,\ldots ,y_{n-1}|\theta ) = p(S_0,\ldots ,S_{n-1}|\theta ) \approx \prod _{k=1}^{n/2-1} \frac{1}{f_Y(\omega _k)} \exp \left[ -\frac{S_k}{f_Y(\omega _k)}\right] , \end{aligned}$$
    (7)

    where \(\{S_k\}\) is the Discrete Fourier Transform of \(\{y_k\}\). Note that \(\theta \) represents a parameter set (e.g. specific values of \(\omega _0, \zeta , \sigma _{in}\)) that determines the spectral density.

The matrix A that parameterizes a linear state-space model is typically non-symmetric, which means that its eigenvectors and eigenvalues will be complex-valued. We use Eigen-AD [22], a fork of the linear algebra library Eigen [10]. Eigen is templated, which facilitates the application of AD by overloading tools, and Eigen-AD provides further optimizations for such tools. The operations above require an AD tool that supports differentiation of complex variables. AD of complex variables is considered in [24]. It is currently available in the feature/0123-complex-var branch of the Stan Math Library and in dco/c++ from release 3.4.3.
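To make steps 1–4 concrete, the following is a minimal sketch of these computations for the two-dimensional model above, using Eigen’s complex eigensolver and evaluating the logarithm of Eq. (7). For simplicity we take \(\mathcal {L} = \mathcal {R}^{-1}\), so that \(c_k = 1\); the function name, argument list, and the flat input spectral density are illustrative assumptions rather than our library’s actual interface.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Sketch of steps 1-4 for a real, non-symmetric 2x2 drift matrix A.
// omega[k] are the frequencies and S[k] the corresponding statistics S_k of Eq. (7).
double whittle_log_likelihood(const Eigen::Matrix2d& A,
                              double f_P,        // input spectral density (assumed flat in omega)
                              double sigma_obs, double dt,
                              const std::vector<double>& omega,
                              const std::vector<double>& S) {
    // Step 1: eigendecomposition; left eigenvectors taken as L = R^{-1}, so c_k = 1.
    Eigen::EigenSolver<Eigen::Matrix2d> es(A);
    Eigen::Vector2cd lambda = es.eigenvalues();
    Eigen::Matrix2cd R = es.eigenvectors();
    Eigen::Matrix2cd L = R.inverse();

    const std::complex<double> i_unit(0.0, 1.0);
    double log_lik = 0.0;
    for (std::size_t k = 0; k < omega.size(); ++k) {
        // Step 2: transfer matrix T(omega) = R diag(1 / (i*omega - lambda)) L
        Eigen::Matrix2cd D = Eigen::Matrix2cd::Zero();
        for (int m = 0; m < 2; ++m)
            D(m, m) = 1.0 / (i_unit * omega[k] - lambda(m));
        Eigen::Matrix2cd T = R * D * L;

        // Step 3: spectral density of the observed component (output i = 0, input j = 1)
        double f_X = std::norm(T(0, 1)) * f_P;
        double f_Y = f_X + sigma_obs * sigma_obs * dt;

        // Step 4: contribution to the Whittle log-likelihood (log of Eq. (7))
        log_lik += -std::log(f_Y) - S[k] / f_Y;
    }
    return log_lik;
}
```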

2.2 Markov Chain Monte Carlo

In the context of Bayesian uncertainty quantification, we are interested in generating samples from the following probability distribution,

$$\begin{aligned} p(\theta |y_0,\ldots ,y_{n-1}) \propto p(y|\theta ) p(\theta ), \end{aligned}$$
(8)

where \(p(y|\theta )\) is the likelihood of the parameter set \(\theta \) given observed data \(y_0,\ldots , y_{n-1}\), and \(p(\theta )\) is the prior distribution of the parameters. In many applications where the likelihood \(p(y|\theta )\) is based on some physical model we cannot derive a closed-form expression for the posterior density \(p(\theta |y)\). Markov Chain Monte Carlo (MCMC) has emerged over the last 30 years as one of the most generally applicable and widely used frameworks for generating samples from the posterior distribution [6, 9]. The software used in this paper makes use of two MCMC algorithms: the No U-Turn Sampler (NUTS) [12] and the simplified manifold Metropolis Adjusted Langevin Algorithm (smMALA) [8]. NUTS is called via the Stan environment [4]. It is a variant of Hamiltonian Monte Carlo (HMC), which uses the gradient (first derivative) of the posterior density, whereas smMALA uses both the gradient and Hessian (first and second derivatives) of the posterior density. smMALA is described in Algorithm 1.

The error in estimates obtained from MCMC is approximately \(C / \sqrt{N}\), where N is the number of MCMC iterations and C is some problem-dependent constant. In general it is not possible to demonstrate that MCMC has converged, but there are several diagnostics that can indicate non-convergence; see Section 11.4 of [6] for more detail. Briefly, there are two phases of MCMC sampling: burn-in and the stationary phase. Burn-in is finished when we are in the region of the parameter space containing the true parameters. In this paper we restrict ourselves to synthetic data examples. In this case it is straightforward to assess whether the sampler is burnt in by checking whether the true parameters used to simulate the data are contained in the credible intervals obtained from the generated samples. In real data applications, it is good practice to test MCMC sampling on a synthetic data problem that is analogous to the real data problem. During the stationary phase we assess convergence rate using the effective sample size, \(N_{\mathrm {Eff}}\). If we were able to generate independent samples from the posterior then the constant C would be \(\mathcal {O}(1)\). MCMC methods generate correlated samples, in which case C may be \(\gg 1\); a small \(N_{\mathrm {Eff}}\) (relative to N) indicates that this is the case.

If we are sampling from a multivariate target distribution, \(N_{\mathrm {Eff}}\) for the ith component is equal to,

$$\begin{aligned} \frac{S}{1 + 2 \sum _k \hat{\rho }_i(k)}, \end{aligned}$$
(9)

where S is the number of samples obtained during the stationary period, and \(\hat{\rho }_i(k)\) is an estimate of the autocorrelation at lag k for the ith component of the samples. This expression can be derived from writing down the variance of the average of a correlated sequence (Chapter 11 of [6]). The key point to note is that if the autocorrelation decays slowly, \(N_{\mathrm {Eff}}\) will be relatively small.
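A minimal sketch of how \(N_{\mathrm {Eff}}\) can be estimated for a single component is given below. Truncating the autocorrelation sum at the first non-positive estimate is a simple heuristic of our own choosing; the stansummary tool used in Sect. 3 employs a more careful estimator.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Effective sample size (Eq. (9)) for one component of an MCMC chain.
double effective_sample_size(const std::vector<double>& x) {
    const std::size_t S = x.size();
    const double mean = std::accumulate(x.begin(), x.end(), 0.0) / S;

    double var = 0.0;
    for (double xi : x) var += (xi - mean) * (xi - mean);
    var /= S;

    double rho_sum = 0.0;
    for (std::size_t k = 1; k < S; ++k) {
        double acov = 0.0;
        for (std::size_t t = 0; t + k < S; ++t)
            acov += (x[t] - mean) * (x[t + k] - mean);
        const double rho = acov / (S * var);  // lag-k autocorrelation estimate
        if (rho <= 0.0) break;                // truncate once the estimate is dominated by noise
        rho_sum += rho;
    }
    return S / (1.0 + 2.0 * rho_sum);
}
```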

Algorithm 1. The smMALA sampler.
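In outline, each smMALA iteration proposes a new parameter set from a Gaussian whose mean and covariance are constructed from the gradient and from a position-dependent metric \(G(\theta )\) (in the simplified manifold setting, derived from the Hessian or Fisher information of the posterior), and then applies a Metropolis–Hastings accept/reject step. With step size h, the proposal and acceptance probability are,

$$\begin{aligned} \theta ^* \sim N\big ( \mu (\theta ), \, h^2 G(\theta )^{-1}\big ), \quad \mu (\theta ) = \theta + \frac{h^2}{2} G(\theta )^{-1} \nabla _\theta \log p(\theta |y), \end{aligned}$$

$$\begin{aligned} \alpha (\theta , \theta ^*) = \min \left\{ 1, \ \frac{p(\theta ^*|y)\, q(\theta |\theta ^*)}{p(\theta |y)\, q(\theta ^*|\theta )} \right\} , \end{aligned}$$

where \(q(\cdot |\cdot )\) denotes the Gaussian proposal density defined above; see [8] for the full derivation and pseudocode.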

2.3 Derivative Computation

A finite difference approximation to the first and second derivatives of the function \(F(\theta ): \mathbb {R}^N \rightarrow \mathbb {R}^M\) can be computed as,

$$\begin{aligned} \frac{\partial F}{\partial \theta _i} \approx \frac{F(\theta +h e_i) - F(\theta )}{h}, \qquad \frac{\partial ^2 F}{\partial \theta _i \partial \theta _j} \approx \frac{\dfrac{\partial F}{\partial \theta _i}\big (\theta + h e_j\big ) - \dfrac{\partial F}{\partial \theta _i}\big (\theta \big )}{h}, \nonumber \end{aligned}$$

where \(e_i\) is the ith Cartesian basis vector, and h is a user-defined step-size. For first-order derivatives the default value we use is \(\sqrt{\epsilon }|\theta _i|\), where \(\epsilon \) is machine epsilon for double types, i.e., if \(\theta _i\) is \(\mathcal {O}(1)\), \(h\approx 10^{-8}\). For second derivatives the default value is \(\epsilon ^{1/3}|\theta _i|\), so that \(h\approx 5\cdot 10^{-6}\). More details on the optimal step size in finite difference approximations can be found in [20]. First derivatives computed using finite differences require \(\mathcal {O}(N)\) function evaluations, where N is the number of input variables. Second derivatives require \(\mathcal {O}(N^2)\) function evaluations.
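A minimal sketch of a forward-difference gradient with this choice of step size is given below; the scalar-valued function F stands in for a log-likelihood, and the unit floor on \(|\theta _i|\) is a simplification of our own to handle \(\theta _i = 0\).

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// Forward-difference gradient with per-component step h_i = sqrt(eps) * max(|theta_i|, 1).
// Requires one extra function evaluation per input variable.
std::vector<double> fd_gradient(const std::function<double(const std::vector<double>&)>& F,
                                std::vector<double> theta) {
    const double eps = std::numeric_limits<double>::epsilon();
    const double F0 = F(theta);
    std::vector<double> grad(theta.size());
    for (std::size_t i = 0; i < theta.size(); ++i) {
        const double h = std::sqrt(eps) * std::max(std::abs(theta[i]), 1.0);
        const double saved = theta[i];
        theta[i] = saved + h;
        grad[i] = (F(theta) - F0) / h;
        theta[i] = saved;
    }
    return grad;
}
```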

Derivatives that are accurate to machine precision can be computed using AD tools. AD tools generally use one of two modes: tangent (forward) or adjoint (reverse). For a function with N inputs and M outputs, tangent mode requires \(\mathcal {O}(N)\) function evaluations and adjoint mode requires \(\mathcal {O}(M)\) function evaluations. In statistical applications our output is often a scalar probability density, so adjoint AD will scale better with N than either tangent mode or finite differences in terms of total computation time. Adjoint AD is implemented as follows using the Stan Math Library [5].

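The following is a minimal sketch of this pattern; the example function \(f(x,y) = x^2 y\) and the surrounding main() are placeholders for the user’s log-likelihood code.

```cpp
#include <iostream>
#include <stan/math.hpp>

int main() {
    using stan::math::var;

    var x = 2.0, y = 3.0;
    var f = x * x * y;   // evaluation with the var type records derivative information
    f.grad();            // interpret the recorded information (reverse sweep)

    std::cout << "df/dx = " << x.adj()    // 2 * x * y = 12
              << ", df/dy = " << y.adj()  // x * x = 4
              << std::endl;

    stan::math::recover_memory();         // release the memory used by the AD tape
    return 0;
}
```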

In the code above the function is evaluated using the stan::math::var scalar type rather than double. During function evaluation, derivative information is recorded. When the grad() function is called this derivative information is interpreted, and the resulting derivatives are then accessed by calling adj() on the input variables. Similarly to the Stan Math Library, exact derivatives can be computed using dco/c++, the NAG AD tool developed in collaboration with RWTH Aachen University’s STCE group [14].

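The following is a minimal sketch of the corresponding dco/c++ pattern, as described below; the example function \(f(x) = x^3\) and the tape creation/removal calls are our own illustrative choices based on the documented dco::ga1s workflow.

```cpp
#include <iostream>
#include <dco.hpp>

int main() {
    using mode = dco::ga1s<double>;            // first-order adjoint mode over double
    using adjoint_t = mode::type;

    mode::global_tape = mode::tape_t::create();

    adjoint_t x = 2.0;
    mode::global_tape->register_variable(x);   // start recording with respect to x

    adjoint_t f = x * x * x;                   // augmented primal run: tape is written

    dco::derivative(f) = 1.0;                  // seed the output adjoint
    mode::global_tape->interpret_adjoint();    // propagate adjoints backwards through the tape

    std::cout << "df/dx = " << dco::derivative(x) << std::endl;  // 3 * x^2 = 12

    mode::tape_t::remove(mode::global_tape);
    return 0;
}
```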

dco/c++ provides its adjoint type via the \(\texttt {dco::ga1s<T>::type}\) typedef, where T is the corresponding primal type (e.g. double). Higher order adjoints can be achieved by recursively nesting the adjoint type. Using this type, the program is first executed in the augmented primal run, where derivative information is stored in the global_tape data structure. The register_variable function initialises recording of the tape and facilitates dynamic varied analysis [11]. Derivative information recorded during the augmented primal run is then interpreted using the interpret_adjoint() function and dco::derivative is used to access the propagated adjoints.

3 Results for Noise Driven Harmonic Oscillator

We now present the results of MCMC sampling using smMALA to estimate parameters of the noise driven harmonic oscillator. Then we demonstrate that, for this particular model, MCMC sampling can be accelerated by using NUTS. We also compare run times and sampling efficiency when AD is used to evaluate derivatives. Here we estimate parameters using synthetic data (i.e. data generated from the model). This is a useful check that the MCMC sampler is working correctly: we should be able to recover the parameters that were used to simulate the data. We generate a pair of datasets representing different conditions (which we label \(c_1\) and \(c_2\)). In Table 1 we show that the \(95\%\) credible intervals for each parameter include the actual parameter values for all 5 parameters. These results came from one MCMC run of 10,000 iterations. Figure 1 shows \(95\%\) confidence intervals for the spectral density obtained using the Welch method alongside \(95\%\) credible intervals for the spectral density estimated from 10,000 MCMC iterations. This is another useful check: the spectral density predictions generated by sampled parameter sets are consistent with non-parametric estimates of the spectral density.

Table 1. Estimated quantiles of the posterior distribution for the noise driven harmonic oscillator. Synthetic data was generated by simulating from the model: duration = 20.0, time-step = 0.01. Two datasets (\(c_1\) and \(c_2\)) were generated with different parameter values for \(\omega _0\) and \(\sigma _{in}\) in each dataset (values are given in the ‘actual’ column). The quantiles are estimated from a sequence of 10,000 MCMC samples from the joint (posterior) distribution of all the parameters given all of the data. For example, 2.5% of the MCMC samples had \(\omega _0(c_1)\) values less than 77.3.
Fig. 1. Spectral density estimates for the noise driven harmonic oscillator for the datasets described in Table 1.

Table 2 uses the same model and the same datasets as above to compare smMALA with NUTS, and finite differences with AD. The AD implementation used in smMALA was dco/c++’s tangent over adjoint mode (i.e. dco::gt1s combined with dco::ga1s). In NUTS we used dco/c++’s tangent mode for computing the derivatives of the spectral density and Stan’s adjoint mode (stan::math::var) for the rest of the computation. Given the most recent developments in the feature/0123-complex-var branch of the Stan Math Library it would likely be possible to use Stan’s adjoint mode for the whole computation. However this was not the case when we started writing the code. In general users may find that they need the more advanced functionality of dco/c++ for part of their computation in order to obtain good performance.

The MCMC samplers were each run for 1,000 iterations. The results were analyzed using Stan’s stansummary command-line tool. To account for correlation between MCMC samples we use the \(N_{\mathrm {Eff}}\) diagnostic defined in Sect. 2.2 to measure sampling efficiency; min \(N_{\mathrm {Eff}}\) is the minimum over the 5 model parameters. Table 2 shows that, for the noise driven harmonic oscillator, we can accelerate MCMC sampling by a factor of around 3–5 by using NUTS rather than smMALA. We also see that \(N_{\mathrm {Eff}}\)/s is higher for finite differences than for AD, because of the small number of input variables. However, even for this simple model the NUTS min \(N_{\mathrm {Eff}}\) is higher for AD than for finite differences, suggesting that the extra accuracy in the derivatives results in more efficient sampling per MCMC iteration.

In the context of Bayesian uncertainty quantification what we are interested in is whether the MCMC samples are an accurate representation of the true posterior distribution. A necessary condition for posterior accuracy is that the true parameters are (on average) contained in the credible intervals derived from the MCMC samples. This is the case for all the different variants of MCMC that we tested. The main difference we found between the variants was the sampling efficiency. As discussed in Sect. 2.2, a sampling efficiency that is 3–5 times greater means a reduction in the value of C in the Monte Carlo error (\(C / \sqrt{N}\)) by that factor. Another way of interpreting this is that we could reduce the number of MCMC iterations by a factor of 3–5 and expect to obtain the same level of accuracy in the estimated posterior distribution.

Table 2. Noise-driven harmonic oscillator benchmarking results.

4 Discussion

The Whittle likelihood function is written to be polymorphic over classes derived from a base class called Stable_sde. The function signature includes a reference to the base class.

[Listing: signature of the Whittle likelihood function]

Classes that are derived from the Stable_sde base class must define the following pure virtual function.

[Listing: pure virtual function of the Stable_sde base class]

The spectral density evaluation (Eq. (5)) is implemented in the base class, reducing the effort required to do spectral data analysis for other stable SDEs. The smMALA sampler is written to be polymorphic over classes derived from a base class called Computation. Classes that are derived from the Computation base class must define the following pure virtual function.

[Listing: pure virtual function of the Computation base class]

This function should evaluate the posterior density in the user’s model. The gradient and Hessian of the posterior density are implemented in the Computation class. This reduces the effort required to use NUTS or smMALA for other computational models. Classes derived from Stable_sde or from Computation can be instantiated with several different scalar types including double, stan::math::var, and a number of dco/c++ types. This enables automatic evaluation of first and second derivatives. The gradient and Hessian functions need a template specialization to be defined in the base class for each different scalar type that is instantiated.
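Since the listings above are not reproduced, the following self-contained sketch illustrates the kind of interface being described. The class names Stable_sde and Computation are taken from the text, but the member and function names (sde_jacobian, log_whittle, eval) and the argument lists are hypothetical and should not be taken as the library’s actual API.

```cpp
#include <Eigen/Dense>
#include <vector>

// Base class for stable SDEs; derived classes supply the model-specific part,
// here assumed to be the drift matrix A of Eq. (1). The Scalar template
// parameter allows instantiation with double, stan::math::var, or dco/c++ types.
template <typename Scalar>
class Stable_sde {
public:
    virtual ~Stable_sde() = default;
    using Matrix = Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic>;
    virtual Matrix sde_jacobian() const = 0;   // pure virtual: model-specific
    // spectral density evaluation (Eq. (5)) would be implemented here
};

// Whittle likelihood, polymorphic over Stable_sde via a base-class reference.
template <typename Scalar>
Scalar log_whittle(Stable_sde<Scalar>& model,
                   const std::vector<double>& S,
                   const std::vector<double>& frequencies);

// Base class used by the MCMC samplers; derived classes evaluate the
// (unnormalized) log posterior density for a parameter vector.
template <typename Scalar>
class Computation {
public:
    virtual ~Computation() = default;
    using Vector = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
    virtual Scalar eval(const Vector& theta) const = 0;   // pure virtual
    // gradient and Hessian of the log posterior would be implemented here
};
```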

We conclude with some comments regarding the choice of MCMC sampler and the method for evaluating derivatives. Our software only implements derivative-based MCMC, as this tends to be more computationally efficient than other MCMC methods [12, 23]. Derivative-based MCMC samplers can be further subdivided into methods that use higher-order derivatives, such as smMALA (sometimes referred to as Riemannian methods), and methods that only require first-order derivatives (such as NUTS). Riemannian methods tend to be more robust to complexity in the posterior distribution, but first-order methods tend to be more computationally efficient for problems where the posterior geometry is relatively simple. We recommend using smMALA in the first instance, then NUTS as a method that may provide acceleration in MCMC sampling, in terms of effective samples per second, \(N_{\mathrm {Eff}}\)/s. Regarding the derivative method, finite differences often result in adequate performance for problems with moderate input dimension (e.g. 10–20), at least with smMALA. But for higher-dimensional problems (e.g. Partial Differential Equations or Monte Carlo simulation) we recommend accelerating derivative computation using adjoint AD [19, 21]. The software presented enables users to implement and benchmark all these alternatives so that the most appropriate methods for a given problem can be chosen.