Modular Bayesian damage detection for complex civil infrastructure

  • Andre Jesus
  • Peter Brommer
  • Robert Westgate
  • Ki Koo
  • James Brownjohn
  • Irwanda Laory
Open Access
Original Paper


We address the problem of damage identification in complex civil infrastructure with an integrative modular Bayesian framework. The proposed approach uses multiple response Gaussian processes to build an informative yet computationally affordable probabilistic model, which detects damage through inverse updating. The performance of structural components associated with parameters of the developed model was quantified with a damage metric. Particular emphasis is given to environmental and operational effects, parametric uncertainty and model discrepancy. Additional difficulties due to the use of costly physics-based models and noisy observations are also taken into account. The framework has been used to identify a reduction of a simulated cantilever beam's elastic modulus, and anomalous features in main/stay cables and bearings of the Tamar bridge. In the latter case study, displacements, natural frequencies, temperature and traffic monitored throughout one year were used to form a reference baseline, which was compared against a current state based on one month's worth of data. Results suggest that the proposed approach can identify damage with a small error margin, even in the presence of model discrepancy. However, if parameters are sensitive to environmental/operational effects, as observed for the Tamar bridge stay cables, false alarms might occur. Validation with monitored data is also highlighted and supports the feasibility of the proposed approach.


Keywords: Bayesian inference · Damage detection · Long suspension bridge · Gaussian process · Structural health monitoring

1 Introduction

To be practically useful, a structural health monitoring (SHM) system must be able to assess the performance of complex physical systems. Noteworthy examples of structural failure, such as the 2018 collapse of the Morandi viaduct in Genoa, Italy, justify the development and deployment of SHM in civil infrastructure.

There are two common pathways for damage identification, depending on the type of model used to interpret monitored data: the data-based and the physics-based approach. A problem common to both pathways is a coarse misinterpretation of the structural behaviour due to numerous uncertainties. With the advent of powerful statistical sampling techniques, e.g. Markov chain Monte Carlo methods, various probabilistic inference methods were developed and applied to overcome this challenge.

The Bayesian approach has gathered considerable interest [10, 15, 45]. Its conceptual simplicity and consistent treatment of uncertainties contributed to its wide dissemination. Some notable research milestones are attributed to Sohn and Law [36] and Beck et al. [3, 4, 5]. Application of their frameworks for damage identification was illustrated in a seven-story full-scale building [28, 33] and a laboratory reduced-scale steel bridge [29]. Overall, two main factors contribute to these methods' poor performance. One is the environmental/operational effects (also known as confounding influences), which mask the presence and extent of damage. This is a well-known and extensively documented problem [11, 16, 17, 35]. A second factor is large model discrepancy, i.e. the epistemic misfit between predictions and monitored data, caused by modelling assumptions and simplifications. Unless the effects of model discrepancy are small, assuming them to be a zero-mean uncorrelated Gaussian [37, 43] (as in traditional Bayesian methods) results in biased identifications.

For these reasons, the broad influence of environmental variations, and in particular temperature, led to the development of several temperature-based damage identification methods. Examples can be seen in Cross et al. [13], Yarnold and Moon [42] and others [26, 41, 44]. State-of-the-art Bayesian methods adopt hierarchical models to account for these variations, such as the frameworks of Huang et al. [20] or Behmanesh et al. [8]. Regarding model discrepancy, only a handful of studies have developed a more functional form, such as a multivariate normal distribution [8] or correlated errors [30, 34]. The model falsification method of Goulet and Smith [18, 19] is an extreme alternative, which does not assume any particular form. Applications are illustrated in benchmark ASCE structures [20], a footbridge on the Tufts University campus [6, 7, 8] and a nine-story building [9]. Results still indicate unidentifiable or potentially biased identifications, and in some of the case studies certain mode shapes or seasonal fluctuations had to be removed to improve results.

Based on the above remarks, the current paper proposes a hybrid modular Bayesian approach (MBA) to address the damage identification problem. Jesus et al. [22, 23] originally developed and applied the methodology for structural identification (st-id) problems. The MBA explicitly considers environmental/operational effects with multiple response Gaussian processes (mrGp). More importantly, the MBA multivariate normal discrepancy function and ability for data fusion [2] counters many of the problems of traditional Bayesian methods. Thus, the current work highlights the extension and application of the MBA to damage detection. The present paper is divided into a description of the methodology, Sect. 2; an application to a simulated cantilever beam, Sect. 3; and to the Tamar bridge, Sects. 4 and 5. Finally, major conclusions are presented in Sect. 6.

2 Modular Bayesian damage detection

2.1 Modelling assumptions

The current section gives a short overview of the modelling assumptions underlying the MBA. As is common in Bayesian approaches, the parameters which we wish to learn, \(\varvec{\theta }\), are treated as random variables. The ‘randomness’ in these variables reflects our ability to estimate the parameters’ true values \(\varvec{\theta }^*\), and these true values are assumed constant throughout the monitoring period. Bayes’ theorem provides a way to synthesise two sources of information about these parameters, the prior distribution and the likelihood function, into an updated posterior distribution.

In the present work, the prior probability density function (PDF) of the parameters is assumed as uniform or Gaussian. More details will be given in the following sections. The likelihood function is based on the following equation:
$$\begin{aligned} \varvec{Y}^e(\varvec{X}^e)=\varvec{Y}^m(\varvec{X}^e,\varvec{\theta }^*)+\varvec{\delta }(\varvec{X}^e)+\varvec{\varepsilon }, \end{aligned}$$
where \(\varvec{Y}^e\) are observations, dependent on design variables \(\varvec{X}^e\); \(\varvec{Y}^m\) is the model response function, dependent on the design variables and a vector of unknown structural parameters \(\varvec{\theta }^*\); \(\varvec{\delta }(\varvec{X}^e)\) is a discrepancy function that translates the misfit between the model and the true process; and \(\varvec{\varepsilon }\) is an observation error term, assumed to follow a Gaussian distribution \(\mathcal {N}(\varvec{0},\varvec{\varLambda })\).

Given the above assumptions, the next step approximates the computer model and the discrepancy function with multiple response Gaussian processes (mrGp) [2, 12]. MrGps efficiently account for the uncertainties of Eq. (1), but require the estimation of a set of parameters (known as hyperparameters) from observed and simulated data. The MBA separates this process into two modules, one for the computer model and one for the discrepancy function. A short description of the mrGp formulation is given next, to present the hyperparameters and showcase the advantages of the proposed approach.

The mrGp formulation is a natural expansion of the single response case [32], which aims to fit q responses, dependent on d design variables, at N sampled time histories. Its description is based on a mean and covariance prior structure. The mean is assumed to have the following form:
$$\begin{aligned} m(\varvec{X})=I_q\otimes \varvec{H}(\varvec{X}){\varvec{\beta }}, \end{aligned}$$
where \(I_q\) is the q-dimensional identity matrix, \(\otimes\) stands for the Kronecker product, \(\varvec{H}\in \mathbb {R}^{N\times p}\) is a regression matrix, and \(\varvec{\beta }\in \mathbb {R}^{p\times q}\) is a matrix of regression coefficients. Matrix \(\varvec{H}\) contains, for each of the N samples, the regression functions, assumed to have the linear form \(h(x)=[1\; x_1\ldots x_d]\), so that \(p=d+1\).
In a similar manner, the covariance structure is formulated as
$$\begin{aligned} \varvec{V}(\varvec{X},\varvec{X}')=\varvec{\varSigma }^2\otimes \varvec{R}(\varvec{X},\varvec{X}'), \end{aligned}$$
where \(\varvec{\varSigma }^2\in \mathbb {R}^{q\times q}\) is a spatial variance matrix, and \(\varvec{R}\in \mathbb {R}^{N\times N}\) is a correlation matrix. This equation is interpreted as a separation between a spatial variance across the q responses being approximated and a temporal correlation between the N time histories. Each entry of matrix \(\varvec{R}\) is given by a correlation function, which needs to be assumed; here, a linear form has been adopted:
$$\begin{aligned} R(\omega ,x,x')=\prod _{j=1}^d \max \{0, 1-\omega _j |x_j-x_j'|\}, \end{aligned}$$
where \(\omega _j,\; j=1,\ldots , d\), are called roughness parameters, which represent how rapidly the responses change from point x to point \(x'\). A final parameter, which concludes the description of the mrGp, is the variance \(\varvec{\varLambda }\) of the observation error; it is simply added to Eq. (3) as \(\varvec{\varLambda }\otimes I_N\).

In short, the hyperparameters to be estimated are denoted as \(\varvec{\phi }^m=\{\varvec{\beta }_m,\varvec{\varSigma }^2_m,\varvec{\omega }_m\}\) for the model, and \(\varvec{\phi }^\delta =\{\varvec{\beta }_\delta ,\varvec{\varSigma }^2_\delta ,\varvec{\omega }_\delta ,\varvec{\varLambda }\}\) for the discrepancy function and observation error. Note that when \(m(\varvec{X})=0\) and matrix \(\varvec{R}\) is the identity matrix, the mrGp reduces to a zero-mean uncorrelated Gaussian, which is the common choice of traditional Bayesian frameworks applied in SHM. After estimating the hyperparameters, Bayes’ theorem is applied to obtain the parameter posterior distribution. For additional details of the modelling assumptions, considered uncertainties and application of the methodology for st-id, the reader is referred to previous literature [1, 2, 22, 23, 24].
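As a concrete illustration of Eqs. (2)–(4), the prior mean and covariance of a mrGp can be assembled as follows. This is our sketch, not the authors' implementation; names such as `mrgp_prior` are hypothetical.

```python
import numpy as np

def linear_correlation(omega, X1, X2):
    # Eq. (4): R(x, x') = prod_j max(0, 1 - omega_j * |x_j - x'_j|)
    diff = np.abs(X1[:, None, :] - X2[None, :, :])          # (N, M, d)
    return np.maximum(0.0, 1.0 - omega * diff).prod(axis=-1)

def mrgp_prior(X, beta, Sigma2, omega, Lambda=None):
    """Prior mean (Eq. 2) and covariance (Eq. 3) of a mrGp with
    linear regressors h(x) = [1, x_1, ..., x_d]."""
    N, d = X.shape
    q = Sigma2.shape[0]
    H = np.hstack([np.ones((N, 1)), X])                     # (N, p), p = d + 1
    mean = np.kron(np.eye(q), H) @ beta.flatten(order="F")  # responses stacked
    R = linear_correlation(omega, X, X)
    V = np.kron(Sigma2, R)                                  # spatial x temporal
    if Lambda is not None:
        V += np.kron(Lambda, np.eye(N))                     # observation error
    return mean, V

# Toy check: q = 2 responses, d = 1 design variable, N = 4 samples
X = np.linspace(0.0, 1.0, 4)[:, None]
beta = np.array([[1.0, 0.0], [0.0, 2.0]])                   # (p, q) = (2, 2)
m, V = mrgp_prior(X, beta, np.eye(2), np.array([0.5]), 0.01 * np.eye(2))
```

The Kronecker structure keeps the covariance of q responses at N points as a product of a small \(q\times q\) matrix and an \(N\times N\) correlation, rather than one dense \(qN\times qN\) estimate.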

2.2 Damage identification

In this section, the MBA framework is extended to damage identification. In the current context, damage is defined as changes introduced into a structural system which adversely affect its current or future performance. The proposed damage evaluation can be classed as probabilistic, parametric, and supervised. Essentially, a reference state (in which the structural system is assumed healthy) is established and subsequently compared against an estimate of the current state. The two elements of Eq. (1) which can reflect structural change are the structural parameters and the discrepancy function; however, the current work is limited to the analysis of the parameters' influence. Ideally, the considered parameters should represent the structural system's integrity, e.g. soil permeability, the initial strain of prestressed cables or the stiffness of other key components.

The algorithm flowchart can be seen in Fig. 1, with the original MBA algorithm on the left side and its expansion for damage identification on the right side. Further details of the MBA modules can be found in [2] and will not be repeated here. Referring to Fig. 1, Task 1 trains a mrGp with simulated data from a computer model, identically to the original MBA. The estimated hyperparameters of this mrGp, \(\phi ^m\), uniquely determine its behaviour. Subsequently, Tasks 2 and 3 determine information relative to the reference and current state of the structural system, including a trained mrGp of the discrepancy function and the parameters' posterior. Each task iterates over modules 2 and 3 of the original MBA, with the prior and monitored data \(D^e_\mathrm{{r}}\) or \(D^e_\mathrm{{c}}\) as inputs for each state. It is recalled that module 2 of the MBA, the approximation of the discrepancy function, requires marginalisation of the computer model mrGp with respect to the parameters' prior. Although the priors differ between Tasks 2 and 3, the computer model remains the same as computed in Task 1. Finally, in Task 4 the samples of the parameters' posteriors are used to propagate the uncertainty to a damage metric DF, defined as follows:
$$\begin{aligned} {\text {DF}}=\frac{\theta _\mathrm{{c}}-\theta _\mathrm{{r}}}{\theta _\mathrm{{r}}}, \end{aligned}$$
where \(\theta _\mathrm{{r}}\) and \(\theta _\mathrm{{c}}\) represent the parameters in the reference and current health state, respectively.
Fig. 1

Flowchart of the MBA original approach (left) and the proposed damage detection framework (right)

Specifically, two distribution functions can be computed based on this damage metric: the probability of damage exceeding a given damage factor df, and the probability distribution for the most probable damage factor DF. The former is defined as
$$\begin{aligned} p(\text {DF}\ge {\text{df}} )=p\left( \frac{\theta _\mathrm{{c}}-\theta _\mathrm{{r}}}{\theta _\mathrm{{r}}}\ge {\text{df}} \right) =p(\theta _\mathrm{{c}}-\theta _\mathrm{{r}}\ge {\text{df}} \times \theta _\mathrm{{r}}), \end{aligned}$$
provided that the probability of \(\theta _\mathrm{{r}}\) being negative is small. The probability in Eq. (6) can be further developed as
$$\begin{aligned} p(\theta _\mathrm{{c}}-\theta _\mathrm{{r}}\ge {\text{df}} \times \theta _\mathrm{{r}})&=1-{\text{CDF}} \left( {\text{df}} \times \theta _\mathrm{{r}}-(\mu _\theta ^\mathrm{{c}}-\mu _\theta ^\mathrm{{r}}),\sqrt{(\sigma _\theta ^\mathrm{{r}})^2+(\sigma _\theta ^\mathrm{{c}})^2}\right) \\&=\frac{1}{2}-\frac{1}{2}{{\,\mathrm{erf}\,}}\left( \frac{ {\text{df}} \times \theta _\mathrm{{r}}-( \mu _\theta ^\mathrm{{c}}-\mu _\theta ^\mathrm{{r}})}{\sqrt{2((\sigma _\theta ^\mathrm{{r}})^2+( \sigma _\theta ^\mathrm{{c}})^2)}}\right) , \end{aligned}$$
where \((\mu _\theta ^\mathrm{{r}},\sigma _\theta ^\mathrm{{r}})\) and \((\mu _\theta ^\mathrm{{c}},\sigma _\theta ^\mathrm{{c}})\) are the means and standard deviations of the structural parameters' posteriors in the reference and current health state, respectively; \({\text{CDF}}\) is the Gaussian cumulative distribution function; and \({{\,\mathrm{erf}\,}}\) is the Gaussian error function.
On the other hand, the most probable damage factor corresponds to the 50% confidence level of Eq. (7). Furthermore, on the basis that the reference and current states are independent, the first-order variance formula [27] allows DF's variance to be calculated as
$$\begin{aligned} \sigma _{\text {DF}}^2=\left( \frac{\sigma _{\theta }^\mathrm{{r}}\mu _{\theta }^\mathrm{{c}}}{( \mu _{\theta }^\mathrm{{r}})^2}\right) ^2+\left( \frac{\sigma _{\theta }^\mathrm{{c}}}{ \mu _{\theta }^\mathrm{{r}}}\right) ^2. \end{aligned}$$
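Equations (7) and (8) are closed-form; the sketch below evaluates them, approximating the random \(\theta _\mathrm{{r}}\) inside Eq. (7) by its posterior mean (one possible reading; our illustration, not the authors' code).

```python
import math

def p_exceed(df, mu_r, sig_r, mu_c, sig_c):
    """Eq. (7): p(DF >= df), with theta_r approximated by its mean mu_r."""
    z = (df * mu_r - (mu_c - mu_r)) / math.sqrt(2.0 * (sig_r**2 + sig_c**2))
    return 0.5 - 0.5 * math.erf(z)

def var_df(mu_r, sig_r, mu_c, sig_c):
    """Eq. (8): first-order variance of DF for independent states."""
    return (sig_r * mu_c / mu_r**2) ** 2 + (sig_c / mu_r) ** 2

# At the most probable damage factor, df = (mu_c - mu_r) / mu_r,
# the exceedance probability is exactly 50 %
p50 = p_exceed(0.05, 1.0, 0.02, 1.05, 0.02)
```

Shifting `df` above this value drives the exceedance probability below 0.5, which is the shift of the distribution mentioned later for confidence levels other than 50%.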
Associated with the selection of the confidence level, two statistical errors become an integral part of the detection test. A type I error, or false alarm, occurs when the test incorrectly flags the structural component as damaged. A type II error, or missed detection, occurs when damage is present but is not detected. Selecting a larger confidence level reduces the occurrence of type I errors at the expense of more type II errors, and vice versa for a smaller level.

Finally, it is worth detailing some aspects of the current framework. During Task 2, it is advisable to supply a reference data set \(D^e_\mathrm{{r}}\) which is as informative as possible, i.e. covering a large range of environmental and operational variations, ideally over a one-year time frame acquired at the earliest possible stage of the structure's life-cycle. In addition, the metric of Eq. (5) assumes that an increase of the parameter is associated with a loss of current or future performance, which is not necessarily the case, e.g. if the parameter represents the stiffness or area of a structural element; a similar metric can be developed for the alternative case. Lastly, note that damage occurring at a location other than the one modelled by the identified parameters would not be readily detected. One possible way to overcome this last limitation is to also analyse the variability of the discrepancy function between the current and reference state, using, for example, the Kullback–Leibler divergence.
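For the last point, the Kullback–Leibler divergence between two multivariate normal discrepancy estimates has a closed form; a minimal sketch (our addition, under the assumption that both states yield Gaussian discrepancies):

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL(N(mu0, cov0) || N(mu1, cov1)) in closed form; e.g. comparing a
    current-state discrepancy (0) against the reference one (1)."""
    k = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff
                  - k + logdet1 - logdet0)

same = kl_gauss(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2))     # 0.0
shifted = kl_gauss(np.ones(2), np.eye(2), np.zeros(2), np.eye(2))   # 1.0
```

A divergence that grows over time would hint at structural change absorbed by the discrepancy function rather than by the parameters.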

3 Numerical example of a cantilever beam

In the current section, the MBA is used to detect damage on a simulated cantilever beam. Damage is considered as a 5% reduction of the beam’s original Young’s modulus \(E^*\). Note that this example requires a small change to the damage metric shown in the previous section, so that a decrease of the parameter represents damage.
Fig. 2

Cantilever beam example. Actual (a) and idealised cantilever beam (b)

The cantilever beam is subjected to a point load F at its free end, and its tip deflection and rotation are considered as responses for identification (calculated from beam theory). The discrepancy between the model and the actual beam is a rotational spring of stiffness \(K^*\), located at the fixed end of the beam, as shown in Fig. 2. Additionally, numerical values of the beam properties are shown in Table 1, with A, I and L as the cross-sectional area, second moment of area and length, respectively. The intervals over F and E represent uniform sampling regions for the training of the mrGps. For each state, two datasets with 90 and 60 samples have been generated to train the model and discrepancy function mrGps, respectively. The number of samples was determined on the basis of an evaluation of the mrGps' accuracy, with a partition of 80% training and 20% testing data. Finally, for both the current and reference state, the prior information on E is set as a uniform PDF over the same interval as given in Table 1.
Table 1

Parameters of the cantilever beam

  \(K^*\): \(10\times 10^{11}~\mathrm{N\,mm/rad}\)
  \(L\): \(3000~\mathrm {mm}\)
  \(E^*\): \(70\times 10^{3}~\mathrm {MPa}\)
  E sampling interval: \([20,100]\times 10^{3}~\mathrm {MPa}\)
  F sampling interval: \([1,10]\times 10^{3}~\mathrm {N}\)
  \(I\): \(6.75\times 10^8~{\mathrm{mm}}^{4}\)
  \(A\): \(300\times 300~{\mathrm{mm}}^{2}\)
After propagating the uncertainty of the resulting posteriors to the damage metric DF, a distribution with \(E[{\text {DF}}]=5.12\%\) and \(V[{\text {DF}}]=9.33\%^2\) is obtained, as shown in Fig. 3a, b. Therefore, the MBA identified the presence, location and extent of the exact damage level (5%) with a small error margin, despite the model discrepancy induced by the rotational spring. If the model discrepancy had been assumed as a zero-mean uncorrelated Gaussian, \(\varvec{\beta }_\delta =0\), \(\varvec{\varSigma }^2_\delta =\mathrm {diag}(\sigma _1^2,\ldots ,\sigma _q^2)\) and \(\varvec{R}=\varvec{I}\), the identification would be biased and lose precision. This is shown in Fig. 3c, d, with \(E[{\text {DF}}]=3.07\%\) and \(V[{\text {DF}}]=18.00\%^2\). Assuming a confidence level different from 50% leads to a shift of the distribution in Fig. 3b, according to the cumulative density function shown in Fig. 3a, which would increase the probability of type I or type II errors.
Fig. 3

Probability of damage exceeding a given damage factor \({\text{df}}\) and PDF for the most probable damage factor DF for Young’s modulus of cantilever beam. a, b correspond to a discrepancy function approximated with a linear correlation function, and c, d with an uncorrelated zero-mean function

4 The Tamar bridge reference and current state datasets

The current section frames the application of the MBA for damage detection in the Tamar bridge, highlighting major challenges and relevant details of the two states which are used for health assessment.

The Tamar bridge is a long suspension bridge which connects Saltash and Plymouth over the Tamar river in south-west England. Its construction was finalised in 1961, but its deck was rebuilt in 1999–2001, with sixteen stay cables added to support the additional loading. Its concrete towers reach a height of 67 m, and it supports three roadway lanes over a span of 335 m. The deck stiffness is increased via supporting steel cables and a truss bridge under the deck. The cable arrangement is subdivided into main, stay and suspension cables. New bearings have also been installed at the Saltash tower, to accommodate the bridge's thermal expansion. Thus, the bridge's load history has changed considerably over time, and much research has been carried out to understand its behaviour.

The components under scrutiny are the main and stay cables shown in Fig. 4, and the friction of the bearings at the Saltash tower. The cables will be assessed on the basis of their initial strain, that is, the strain containing all the load history supported since installation. The friction in the bearings, on the other hand, is the stiffness which develops between the bridge's moving parts, i.e. the thermal expansion joints. Both the initial strain and the stiffness are represented as input parameters of the Tamar bridge FE model. Lastly, it should be noted that the initial strains have been assumed as single parameters for each type of cable (main or stay) and constant across their length. For the stay cables, the latter assumption is believed to be mild, due to their constant cross-sectional area and straight geometry (cf. Fig. 4b).
Fig. 4

Main (a) and stay cables (b) of the Tamar bridge

Furthermore, it is worth mentioning some available information related to the behaviour of the above components. One important point is the vertical-plane oscillations which have been registered in the stay cables. To avoid public concern and ensure the durability of the cable sockets, these vibrations were eliminated with water-butt dampers in April 2006 [25]. Another point stems from measurements of temperature and extension data in the bearing arrangement, obtained in July 2010, which revealed that the gap extension against temperature closely fits a linear relation and, therefore, does not indicate any relevant frictional force (see Fig. 15 of Battista et al. [14] for clarification). The bearings and the string gauge sensor used for this assessment are shown in Fig. 5.
Fig. 5

New bearings at the Saltash tower (a) and setup of string gauge for measurement of the expansion joint thermal dilation (b)

In the current analysis, we consider the joint influence of temperature and traffic on the natural frequencies and mid-span displacements of the bridge. Temperature has been obtained from a thermocouple sensor (D062) installed on one of the main cables. Traffic influence is based on vehicle counts obtained from toll gates on the Plymouth side. The natural frequencies of the structure were determined with a stochastic subspace identification (SSI) technique [31], based on a real-time modal parameter identification system installed in June 2007. Lastly, the displacements have been measured with total positioning system (TPS) reflectors and a camera, which were installed in 2009. To put the time scale of the aforementioned points into perspective and highlight the location of the sensors, see the timeline in Fig. 6 and the diagram of the Tamar bridge SHM system in Fig. 7. As can be observed, the reference and current data were obtained from 24 May 2009 to 1 March 2010 and from 9 March 2010 to 27 March 2010, respectively. The "H" and "V" labels represent lateral and vertical channels of accelerometers on the North or South side of the deck.
Fig. 6

Timeline of SHM installations, building work and tests performed on the Tamar bridge during 1999–2013, plus reference and current states

Fig. 7

Diagram of the Tamar bridge SHM system: cable temperature sensor, displacement reflector and accelerometers from which natural frequencies/mode shapes are estimated. There are 16 stay cables on the North/South and Saltash/Plymouth sides

Finally, it should be noted that there are other sources of uncertainty which affect the damage identification task and have not been considered, e.g. wind or soil settlements. To a certain extent, combining the MBA's ability to consider model discrepancy with the detailed FE model developed by Westgate [38] accounts for these effects. A detailed analysis of the Tamar bridge FE model discrepancy is given in Jesus et al. [22].

4.1 Analysis of temperature and traffic effects on modal properties and mid-span displacements

In the current section, the monitored data are thoroughly examined, to justify the assumptions of Sect. 2.1 placed on the mrGp of the MBA for damage detection.

After cleansing and synchronising the datasets, 2419 and 270 common points were obtained for the reference and current state, respectively. First, the relations of temperature/traffic with the natural frequencies/mid-span displacements are shown in Fig. 8. Frequency labels follow the convention: L is a lateral, V a vertical and T a torsional mode shape, TRANS is a longitudinal translation mode, S stands for symmetric, A for asymmetric, SS for side span, and the numbers give their relevant order. Note that, for the sake of clarity, only three out of five natural frequencies are shown; the temperature axis refers to the thermocouple sensor (D062); and the traffic mass is based on the toll-gate counts and nominal UK vehicle size classes.
Fig. 8

Post-processed data—May of 2009 to March of 2010 time period—natural frequencies against temperature (a) and traffic (b) and mid-span displacements against temperature (c) and traffic (d)

Thus, it can be observed that most of the trends in Fig. 8 are linear; the dependency of the mid-span displacements on traffic is less noticeable than the other relations; and the LS1a and northern displacement channels have the highest noise. These factors justify the linear correlations of the mrGp which approximates the discrepancy function. Furthermore, a larger search interval was assumed to estimate the variance hyperparameter \(\varvec{\varLambda }\) of the north mid-span displacement. Usually, this estimation is performed by maximisation of the mrGp likelihood function with genetic algorithms.
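The likelihood maximisation mentioned above can be sketched on a one-dimensional synthetic stand-in. Here SciPy's `differential_evolution` plays the role of the genetic algorithm (our substitution; both are population-based global optimisers), and the wide bound on the noise variance mimics the enlarged search interval used for the noisy channel:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 40))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)   # one noisy channel

def neg_log_lik(params):
    """Negative log-likelihood of a zero-mean GP with the linear
    correlation of Eq. (4) and noise variance Lambda."""
    omega, sigma2, lam = params
    R = np.maximum(0.0, 1.0 - omega * np.abs(x[:, None] - x[None, :]))
    C = sigma2 * R + lam * np.eye(x.size)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + y @ np.linalg.solve(C, y)
                  + x.size * np.log(2.0 * np.pi))

# Bounds on (omega, sigma^2, Lambda); the last interval is deliberately wide
res = differential_evolution(neg_log_lik,
                             [(0.1, 20.0), (1e-3, 10.0), (1e-6, 1.0)],
                             seed=0)
omega_hat, sigma2_hat, lam_hat = res.x
```

The estimated `lam_hat` is the counterpart of the \(\varvec{\varLambda }\) entry discussed in the text.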

The dataset for the current state follows similar patterns as the ones shown above. During this period no negative temperatures were registered, and the maximum temperature was \({17.4}\,^{\circ }\hbox {C}\).

4.2 Simulation of thermal and traffic effects in the Tamar bridge and mrGp emulation

It is important that the responses used for damage identification (natural frequencies/displacements) are sensitive to changes in the structural components under analysis (the cables' initial strain and the bearings' stiffness). One way to select appropriate output responses is by performing a sensitivity analysis with the bridge FE model. The analysis of Westgate and Brownjohn [40] indicates that the Tamar bridge natural frequencies are sensitive to the cables' initial strain, justifying the responses chosen in the current work. Similarly, Westgate et al. [39] have shown that the mid-span displacement is sensitive to the stiffness of the thermal expansion gap bearings. These analyses are based on a complex full-scale FE model of the Tamar bridge, developed using the ANSYS parametric design language (APDL). The model has approximately 45,000 elements: expansion joints are modelled with linear spring elements; truss members with fixed-rotation beam elements; the deck/towers with shells; and the cables and hangers with uniaxial tension-only beam elements. The FE model's scale, complexity and computational cost fully justify the use of surrogate modelling.

Subsequently, it is necessary to highlight how temperature and traffic effects are considered. Westgate [38] established a simplified temperature model, based on monitored data, which is adopted in the present work. In essence, a uniform temperature is applied across all the finite elements whenever the temperature of the main cable (D062 sensor) is at or below a threshold value, \(\tau _\mathrm{{c}}\le {15}\,^{\circ }\hbox {C}\). If the cable temperature is above this value, different temperatures are applied to two groups of elements, hereby denoted as lighted and shaded elements, as follows:
$$\begin{aligned} \tau _\mathrm{{S}}=\left\{ \begin{array}{ll} 0.433\tau _\mathrm{{c}}+7.877\;\;\; &{}\tau _\mathrm{{c}}> 15\\ \tau _\mathrm{{c}} &{}\tau _\mathrm{{c}}\le 15\\ \end{array}\right. \quad \tau _\mathrm{{L}}=\left\{ \begin{array}{ll} 1.544\tau _\mathrm{{c}}-8.798\;\;\;&{}\tau _\mathrm{{c}}> 15\\ \tau _\mathrm{{c}} &{}\tau _\mathrm{{c}}\le 15\\ \end{array}\right. , \end{aligned}$$
where \(\tau _\mathrm{{S}}\) and \(\tau _\mathrm{{L}}\) represent the temperature of the shaded (truss bridge under deck) and lighted (deck and pylons) elements, respectively. Eq. (9) represents a temperature fork, occurring at \({15}\,^{\circ }\hbox {C}\), above which lighted and shaded structural elements attain a higher and a lower temperature than the cables, respectively. These linear relations and the element groups are displayed in Fig. 9a, b, respectively.
Fig. 9

Linear bifurcation temperature model between cable, shaded and lighted groups (a) represented in cyan, blue and red components, respectively, in the ANSYS FE model (b).

a is reproduced from [38] and includes monitored data
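Eq. (9) is straightforward to transcribe; a minimal sketch (function name hypothetical):

```python
def element_temperatures(tau_c):
    """Bifurcated temperature model of Eq. (9): returns (tau_S, tau_L),
    the shaded- and lighted-element temperatures, from the cable
    temperature tau_c in degrees Celsius."""
    if tau_c <= 15.0:
        return tau_c, tau_c          # uniform temperature below the fork
    return 0.433 * tau_c + 7.877, 1.544 * tau_c - 8.798

cold = element_temperatures(10.0)    # -> (10.0, 10.0)
warm = element_temperatures(25.0)    # shaded below, lighted above 25 C
```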

The adopted traffic model is based on Westgate et al. [39], although our analysis considered only the traffic from the Plymouth to Saltash direction. The effects of traffic are assumed as a set of distributed mass nodes, evenly spread longitudinally across the bridge deck, and asymmetrically in the lateral direction, as shown by the top diagram in Fig. 7. An alternative and more comprehensive vehicular load case could be implemented on the basis of appropriate modelling ratios as presented by Jamali et al. [21].

Subsequently, we consider the approximation of the FE model with a mrGp structure, as described in the previous sections. Let us denote the cable temperature at the location of the D062 sensor as \(\tau _\mathrm{{c}}\) and consider its influence over the range [\(-5\), 30] \(^{\circ }\)C. Similarly, the traffic mass is denoted as \(m_\mathrm{{t}}\), in [0, 2.5 \(\times 10^{6}\)] kg; the initial strains in the main and stay cables as \(\varepsilon _\mathrm{{iMC}}\), in [36.5, 2700] \(\mu \varepsilon\), and \(\varepsilon _\mathrm{{iSC}}\), in [36.5, 3700] \(\mu \varepsilon\); and the stiffness of the bearings as \(K_\mathrm{{d}}\), in [0, 10] kN/mm. The prior PDF for the reference state is uniform, with the same range as the influence interval of the structural parameters. The prior PDF for the current state, on the other hand, is multivariate normal, with the same mean and covariance as the posterior PDF of the reference state. Hence, the model mrGp has five input arguments and five output responses, and its training data have been generated from the FE model in a Latin hypercube design within the above ranges. The resulting 1028 simulations are shown in Fig. 10. Note that, to avoid mixing mode shapes while running the simulations, a comparison against reference mode shapes has been performed using the modal assurance criterion (MAC) with an 80% threshold.
Fig. 10

Simulated data—natural frequencies (a, b) and mid-span relative displacements (c, d)
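The design-of-experiments step described above can be sketched with SciPy's quasi-Monte Carlo module (our illustration; the actual FE sampling pipeline is not reproduced here). The `mac` helper shows the modal assurance criterion used to pair mode shapes:

```python
import numpy as np
from scipy.stats import qmc

# Input order: tau_c [C], m_t [kg], eps_iMC, eps_iSC [strain], K_d [kN/mm]
lower = [-5.0, 0.0, 36.5e-6, 36.5e-6, 0.0]
upper = [30.0, 2.5e6, 2700e-6, 3700e-6, 10.0]

sampler = qmc.LatinHypercube(d=5, seed=0)
design = qmc.scale(sampler.random(n=1028), lower, upper)  # FE training inputs

def mac(phi1, phi2):
    """Modal assurance criterion between two mode-shape vectors;
    pairs would be accepted here when MAC >= 0.8."""
    return abs(phi1 @ phi2) ** 2 / ((phi1 @ phi1) * (phi2 @ phi2))
```

A Latin hypercube spreads the 1028 runs so every one-dimensional projection is evenly covered, which is what makes the mrGp training affordable at this input dimension.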

Finally, the reliability of the mrGp model is assessed. A testing data set of 256 points has been used to calculate the relative error against the predictions of the mrGp, as follows:
$$\begin{aligned} \varepsilon _\mathrm{{r}}=\frac{\mu ^m_\mathrm{{p}}-y^m_\mathrm{{t}}}{\sigma ^m_\mathrm{{p}}}, \end{aligned}$$
where \(y^m_\mathrm{{t}}\) is a testing data point, and \(\mu ^m_\mathrm{{p}}\), \(\sigma ^m_\mathrm{{p}}\) are the corresponding mrGp predicted mean and standard deviation for a particular input (\(x^m,\theta ^m\)). The histogram of the resulting relative error (in %) between the posterior predictions and the testing dataset is shown in Fig. 11. Although the largest error was 6.3%, using more training data would risk overfitting the mrGp beyond what is strictly necessary to adequately represent the FE model; therefore, the current approximation was adopted throughout the rest of this work.
Fig. 11

Histogram of posterior relative error of the Tamar bridge FE emulator, based on a testing dataset
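The error statistic above can be computed in a vectorised way. The emulator outputs below are synthetic stand-ins for the mrGp predictions, used only to illustrate the calculation:

```python
import numpy as np

def standardised_error_pct(mu_pred, sigma_pred, y_test):
    """Prediction error normalised by the predictive standard deviation, in percent."""
    return 100.0 * (mu_pred - y_test) / sigma_pred

rng = np.random.default_rng(0)
y_test = rng.normal(size=256)                        # 256 testing points, as in the text
mu_pred = y_test + rng.normal(scale=0.02, size=256)  # synthetic emulator means near the truth
sigma_pred = np.full(256, 1.0)                       # synthetic predictive standard deviations

err = standardised_error_pct(mu_pred, sigma_pred, y_test)
# A histogram of err corresponds to Fig. 11; heavy tails would flag a poor emulator fit.
```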

5 Bridge cables/bearings damage identification

The previous sections have detailed the enhancement of the MBA framework for damage detection, the field experiment aspects of the Tamar bridge SHM system and its FE model. It is now possible to present the results of the actual application for each of the structural components under analysis. The validation is based on forces monitored by the existing strain gauge load cells on the stay cables, and on the aforementioned tests of de Battista et al. [14].

First, the posterior moments of all parameters for the current and reference states are shown in Table 2. It is noticeable that the variances are smaller for the current state, despite the smaller amount of data used for its identification. In particular, the stay cables variance decreased by 89%. Each individual component is examined in detail in the following sections.
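As a quick check, the 89% decrease follows directly from the two stay cable variances reported in Table 2:

```python
# Stay cable initial strain: posterior variances from Table 2
ref_var = 1.17e-6   # reference state
cur_var = 0.13e-6   # current state

shrinkage_pct = 100.0 * (1.0 - cur_var / ref_var)
print(round(shrinkage_pct))  # → 89
```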
Table 2

Posterior PDF moments for the reference and current health state

Parameter            Reference variance          Current variance
Main cables          0.25 \(\times\,10^{-6}\)    0.14 \(\times\,10^{-6}\)
Stay cables          1.17 \(\times\,10^{-6}\)    0.13 \(\times\,10^{-6}\)
Bearings (kN/mm)     –                           –

5.1 Stiffness of bearings arrangements

Starting with the stiffness of the bearings \(K_\mathrm{{d}}\), the samples for the current state prior, likelihood function and posterior are shown in Fig. 12. By visual inspection, the posterior PDF has a pronounced left tail, which indicates low friction, although its density decays to zero for a frictionless bearing.
Fig. 12

Likelihood function, prior and posterior PDFs for the stiffness of bearings, conditioned by the current state data

Next, the uncertainty associated with the posteriors is propagated into the damage metric DF; the probability of damage exceeding a given damage factor \({\text{df}}\) and the PDF of DF are shown in Fig. 13a, b, respectively. The considered interval for \({\text{df}}\) is [\(-40\), 80]%, and the moments are E[DF] = \(-17.9\)% and V[DF] = 37.4%\(^2\). Despite the high uncertainty associated with the current estimate, visible in the PDF of DF, the mean value indicates a lower, and therefore safer, value of stiffness. This result agrees with the conclusions of de Battista et al. [14], whose tests were performed 3 months after the time period of the current state (cf. Fig. 6).
Fig. 13

Probability of damage exceeding a given damage factor \({\text{df}}\) (a), and PDF for the most probable damage factor DF (b) for the bearing arrangement
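The propagation of the posterior uncertainty into the damage metric can be sketched with a Monte Carlo calculation. Since Eq. (5) is not reproduced in this excerpt, a relative-change form of the damage factor is assumed here, and the posterior samples are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic posterior samples for a bearing-stiffness-like parameter
theta_ref = rng.normal(loc=5.0, scale=0.5, size=50000)   # reference state
theta_cur = rng.normal(loc=4.1, scale=0.8, size=50000)   # current state

# Assumed relative-change form of the damage factor (Eq. (5) may differ)
df_samples = 100.0 * (theta_cur - theta_ref) / theta_ref

mean_df = df_samples.mean()   # analogous to E[DF]
var_df = df_samples.var()     # analogous to V[DF]

def prob_exceed(samples, threshold):
    """Monte Carlo estimate of P(DF > df), i.e. the exceedance curve of Fig. 13a."""
    return np.mean(samples > threshold)
```

Sweeping `prob_exceed` over a grid of thresholds reproduces the exceedance curve, while a histogram of `df_samples` gives the PDF of DF.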

5.2 Initial strain of main suspension cables

The second structural component under analysis is the bridge's main suspension cables. The procedure is analogous to that described above, with damage identification based on the cables' initial strain. The initial strain is represented in the FE model as a prestressing force which loads and deforms the cables before any analysis is run. Since the main cables follow a nonlinear profile, with different values of force/strain along their length, a considerable amount of uncertainty is associated with the estimation of this parameter. As can be seen in Fig. 14, the current state's prior and posterior PDFs are very similar, except for two peaks visible in the region of highest density. Such a shape indicates a bimodal distribution, plausibly associated with each of the two main cables. The moments of this distribution are presented in Table 2.
Fig. 14

Likelihood function, prior and posterior PDFs for main suspension cables, conditioned by the current state data

Correspondingly, the probability that the damage factor DF of Eq. (5) exceeds a certain threshold value \({\text{df}}\) and the PDF of DF for the main cables' initial strain are shown in Fig. 15a, b. Compared with the results of the previous section, the current probability distribution has a smaller expected value, E[DF] = \(-2.4\)%, with variance V[DF] = 50.3%\(^2\). Once more, the negative expected value indicates an estimate below the reference state for the main cables, and its relatively low magnitude is also reassuring. Although these results are encouraging, it is unfortunately not possible to validate them with field data, since the Tamar bridge main cables have never been monitored directly.
Fig. 15

Probability of exceeding a certain damage factor df (a) and PDF of DF (b) for main suspension cable initial strain

5.3 Initial strain of stay cables

The final components under analysis are the sixteen stay cables which support the bridge deck. As in the previous sections, the distributions of the current state for the stay cables' initial strain are shown in Fig. 16. As noted before, this parameter registered a considerable shrinkage of its uncertainty from the reference to the current state. It is believed that the informative prior and the comparatively few modelling assumptions contributed to this improvement.
Fig. 16

Current state likelihood function, prior and posterior PDFs for stay cable initial strain

Accordingly, the distributions of the most probable damage factor can be seen in Fig. 17. Its moments are E[DF] = 18.6% and V[DF] = 54.5%\(^2\) for the mean and variance, respectively. As in the previous example, the identified metric carries considerable uncertainty. However, in contrast to the previous results, a non-negligible increase of the initial strain was obtained, which would be associated with a loss of performance of the structural element. It is therefore necessary to investigate the cause of this deviation.
Fig. 17

Probability of exceeding a certain damage factor df (a) and PDF of DF (b) for stay cables initial strain

As described in Sect. 2.1, one assumption of the MBA is that the parameters do not change because of environmental/operational effects. Although these effects are considered in the response output \(\varvec{Y}(\varvec{X})\), the same cannot be said for the parameters \(\varvec{\theta }\). This stands in contrast to Behmanesh's hierarchical Bayesian approach, which is able to characterise a parameter's randomness during long-term monitoring, i.e. its inherent variability. To assess whether the current deviation occurred because of damage, or whether it can reasonably be explained by the parameters' inherent variability, we consider a dataset from the stay cables, monitored with strain gauge load cells.

For the current assessment, seven of the eight stay cables from the Saltash side of the bridge are considered. The eighth cable was excluded because its sensor was found to be faulty. From previous literature, it is known that an increase of temperature decreases the stay cable forces. This relation is shown in Fig. 18, where, in addition to the monitored data, seven first-order polynomial (linear) functions fitted to the data are shown. These functions follow the form:
$$\begin{aligned} F(T)=\alpha _1T+\alpha _2, \end{aligned}$$
where F is the cable force, T is the temperature at the D062 sensor, and \(\alpha _1\) and \(\alpha _2\) are the coefficients of the polynomial functions. The numerical values of the \(\alpha\) coefficients are displayed in Table 3.
Fig. 18

Stay cable forces during reference period and the corresponding linear regression functions

The final step is to use these functions to calculate the relative force deviation that each cable experiences from its mean value. This is easily calculated as \((F(30)-\mu _F)/\mu _F\times 100\), where F(30) is the fitted cable force at an extreme temperature of 30 \(^\circ\)C, and \(\mu _F\) is the mean value of the cable force. The results are displayed in the last column of Table 3.
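The fit and the deviation calculation can be reproduced with a least-squares polynomial fit. The force data below are synthetic, with a negative slope mimicking the monitored trend; the sensor noise level and force magnitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(-5.0, 30.0, size=500)                    # temperature at D062 (degC)
F = 900.0 - 8.0 * T + rng.normal(scale=15.0, size=500)   # synthetic cable force (kN)

alpha1, alpha2 = np.polyfit(T, F, deg=1)   # F(T) = alpha1 * T + alpha2
mu_F = F.mean()
# Relative force deviation at the extreme temperature of 30 degC, in percent
deviation_pct = 100.0 * ((alpha1 * 30.0 + alpha2) - mu_F) / mu_F
```

A negative `alpha1` recovers the expected force drop with temperature, and `deviation_pct` is negative because F(30) lies below the mean force.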
Table 3

Coefficients of the linear polynomial functions fitted to the stay cable monitored data, and percentage deviation of each stay cable force relative to its mean

\(\alpha _1\)    \(\alpha _2\)    \(\frac{F(30)-\mu _F}{\mu _F}\) (%)
− 5.1            –                –
− 12.6           –                –
− 6.9            –                –
− 6.4            –                –
− 3.4            –                –
− 7.1            –                –
− 9.9            –                –

As can be noted, two of the analysed stay cables, S1S and S4S, have a deviation due to temperature greater than the 18.6% increase identified by the MBA, which can therefore reasonably be explained by this effect. Furthermore, it should be noted that the data from the current state belong to a colder month (March), which is associated with higher cable force values. In reality, during long-term monitoring almost all parameters experience some variability. In summary, the current analysis highlights that the MBA performs optimally when external factors do not affect the structural components under evaluation. If instead the components have some inherent variability, the MBA can produce type I errors (false alarms).

6 Conclusions

The current work has presented the first implementation of the MBA for damage identification in SHM. The methodology uses supervised learning, comparing information separated into a reference and a current state. In addition to the presentation of the methodology, health assessments of a simulated cantilever beam and of the Tamar long suspension bridge have been detailed. Specifically, the beam's Young's modulus, and the Tamar bridge's main cables, stay cables and bearings have been examined, and their probability of damage has been computed.

As shown in the simulated example, the MBA is able to detect damage in the presence of model discrepancy and operational variations. The resulting unbiased estimate is attributed to the correlated structure of the mrGp which approximates the model discrepancy. However, further investigation is required to reduce the considerable estimation uncertainty, which stems from the assumption of independence between the reference and current states.

Additionally, the MBA is capable of assessing complex civil infrastructure using full-scale FE models, as shown in the Tamar bridge case study. Environmental and operational effects, noise and model discrepancy were taken into account. Results indicate that the bridge main cables and bearings showed no signs of structural anomalies, a conclusion corroborated by in situ tests. On the other hand, the bridge stay cables showed a considerable increase of their initial strain, which was, however, attributed to temperature. It is therefore important to note that the MBA does not account for the inherent variability of the identified structural parameters.

Lastly, the current study presented local damage detection through the effect of structural parameters. Although not shown, it should be noted that the MBA formulation also allows damage to be assessed at a global level (through the model discrepancy). With this work, the authors hope to motivate further developments of the MBA for damage detection, and to enhance the state of the art of the SHM community.



Acknowledgements

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), reference number EP/N509796. The leading author gratefully acknowledges the computing resources provided by the Scientific Computing Research Technology Platform of the University of Warwick. The authors would like to thank the two journal reviewers for their time and pertinent comments, which have greatly improved the manuscript.


References

1. Arendt PD, Apley DW, Chen W (2012) Quantification of model uncertainty: calibration, model discrepancy, and identifiability. J Mech Design 134(10):100908
2. Arendt PD, Apley DW, Chen W, Lamb D, Gorsich D (2012) Improving identifiability in model calibration using multiple responses. J Mech Design 134(10):100909
3. Beck J, Au SK (2002) Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation. J Eng Mech 128(4):380–391
4. Beck J, Katafygiotis L (1998) Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech 124(4):455–461
5. Beck J, Yuen K (2004) Model selection using response measurements: Bayesian probabilistic approach. J Eng Mech 130(2):192–203
6. Behmanesh I, Moaveni B (2015) Model updating of structures with temperature-dependent properties using a hierarchical Bayesian framework. In: SHMII-2015
7. Behmanesh I, Moaveni B (2015) Probabilistic identification of simulated damage on the Dowling Hall footbridge through Bayesian finite element model updating. Struct Control Health Monit 22(3):463–483
8. Behmanesh I, Moaveni B (2016) Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification. J Sound Vib 374:92–110
9. Behmanesh I, Moaveni B, Papadimitriou C (2017) Probabilistic damage identification of a designed 9-story building using modal data in the presence of modeling errors. Eng Struct 131:542–552
10. Ben Abdessalem A, Dervilis N, Wagg D, Worden K (2018) Model selection and parameter estimation in structural dynamics using approximate Bayesian computation. Mech Syst Signal Process 99:306–325
11. Carden EP, Brownjohn JMW (2008) Fuzzy clustering of stability diagrams for vibration-based structural health monitoring. Comput Aided Civil Infrastruct Eng 23(5):360–372
12. Conti S, O'Hagan A (2010) Bayesian emulation of complex multi-output and dynamic computer models. J Stat Plan Inference 140(3):640–651
13. Cross EJ, Worden K, Chen Q (2011) Cointegration: a novel approach for the removal of environmental trends in structural health monitoring data. Proc R Soc A Math Phys Eng Sci 467(2133):2712–2732
14. de Battista N, Brownjohn JM, Tan HP, Koo KY (2015) Measuring and modelling the thermal performance of the Tamar Suspension Bridge using a wireless sensor network. Struct Infrastruct Eng 11(2):176–193
15. DiazDelaO F, Garbuno-Inigo A, Au S, Yoshida I (2017) Bayesian updating and model class selection with subset simulation. Comput Methods Appl Mech Eng 317:1102–1121
16. Farrar CR, Worden K (2007) An introduction to structural health monitoring. Philos Trans R Soc A Math Phys Eng Sci 365(1851):303–315
17. Figueiredo E, Radu L, Worden K, Farrar CR (2014) A Bayesian approach based on a Markov-chain Monte Carlo method for damage detection under unknown sources of variability. Eng Struct 80:1–10
18. Goulet JA, Smith IF (2013) Structural identification with systematic errors and unknown uncertainty dependencies. Comput Struct 128:251–258
19. Goulet JA, Smith IFC (2013) Predicting the usefulness of monitoring for identifying the behavior of structures. J Struct Eng 139(10):1716–1727
20. Huang Y, Beck JL, Li H (2017) Hierarchical sparse Bayesian learning for structural damage detection: theory, computation and application. Struct Saf 64:37–53
21. Jamali S, Chan THT, Nguyen A, Thambiratnam DP (2018) Modelling techniques for structural evaluation for bridge assessment. J Civil Struct Health Monit 8(2):271–283
22. Jesus A, Brommer P, Westgate R, Koo K, Brownjohn J, Laory I (2018) Bayesian structural identification of a long suspension bridge considering temperature and traffic load effects. Struct Health Monit
23. Jesus A, Brommer P, Zhu Y, Laory I (2017) Comprehensive Bayesian structural identification using temperature variation. Eng Struct 141:75–82
24. Kennedy MC, O'Hagan A (2001) Bayesian calibration of computer models. J R Stat Soc Ser B (Stat Methodol) 63(3):425–464
25. Koo KY, Brownjohn JMW, List DI, Cole R (2013) Structural health monitoring of the Tamar suspension bridge. Struct Control Health Monit 20(4):609–625
26. Kostić B, Gül M (2017) Vibration-based damage detection of bridges under varying temperature effects using time-series analysis and artificial neural networks. J Bridge Eng 22(10):04017065
27. Ku H (1966) Notes on the use of propagation of error formulas. J Res Natl Bureau Stand Sect C Eng Instrum 70C(4):263
28. Moaveni B (2011) System identification study of a 7-story full-scale building slice tested on the UCSD-NEES shake table. J Struct Eng 137(6):705–717
29. Ntotsios E, Papadimitriou C, Panetsos P, Karaiskos G, Perros K, Perdikaris PC (2009) Bridge health monitoring system based on vibration measurements. Bull Earthq Eng 7(2):469–483
30. Papadimitriou C, Lombaert G (2012) The effect of prediction error correlation on optimal sensor placement in structural dynamics. Mech Syst Signal Process 28:105–127
31. Peeters B, De Roeck G (1999) Reference-based stochastic subspace identification for output-only modal analysis. Mech Syst Signal Process 13(6):855–878
32. Rasmussen CE, Williams CKI (2006) Gaussian processes for machine learning. MIT Press, Cambridge
33. Simoen E, Moaveni B, Conte J, Lombaert G (2013) Uncertainty quantification in the assessment of progressive damage in a 7-story full-scale building slice. J Eng Mech 139(12):1818–1830
34. Simoen E, Papadimitriou C, Lombaert G (2013) On prediction error correlation in Bayesian model updating. J Sound Vib 332(18):4136–4152
35. Sohn H (2007) Effects of environmental and operational variability on structural health monitoring. Philos Trans R Soc A Math Phys Eng Sci 365(1851):539–560
36. Sohn H, Law KH (1997) A Bayesian probabilistic approach for structure damage detection. Earthq Eng Struct Dyn 26:1259–1281
37. Stull CJ, Earls CJ, Koutsourelakis PS (2011) Model-based structural health monitoring of naval ship hulls. Comput Methods Appl Mech Eng 200(9–12):1137–1149
38. Westgate R (2012) Environmental effects on a suspension bridge's performance. PhD thesis, Sheffield
39. Westgate R, Koo KY, Brownjohn J (2015) Effect of solar radiation on suspension bridge performance. J Bridge Eng 20(5):04014077
40. Westgate RJ, Brownjohn JMW (2011) Development of a Tamar Bridge finite element model. In: Proulx T (ed) Dynamics of bridges, vol 5. Springer, New York, pp 13–20
41. Xia Q, Cheng Y, Zhang J, Zhu F (2017) In-service condition assessment of a long-span suspension bridge using temperature-induced strain data. J Bridge Eng 22(3):04016124
42. Yarnold MT, Moon FL (2015) Temperature-based structural health monitoring baseline for long-span bridges. Eng Struct 86:157–167
43. Zheng W, Yu W (2015) Probabilistic approach to assessing scoured bridge performance and associated uncertainties based on vibration measurements. J Bridge Eng 20(6):04014089
44. Zhu Y, Ni YQ, Jesus A, Liu J, Laory I (2018) Thermal strain extraction methodologies for bridge structural condition assessment. Smart Mater Struct
45. Zhu YC, Au SK (2018) Bayesian operational modal analysis with asynchronous data, part I: most probable value. Mech Syst Signal Process 98:652–666

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

1. School of Computing and Engineering, University of West London, London, UK
2. Warwick Centre for Predictive Modelling, School of Engineering, University of Warwick, Coventry, UK
3. Centre for Scientific Computing, University of Warwick, Coventry, UK
4. College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
5. Civil Research Group, School of Engineering, University of Warwick, Coventry, UK
