The variability of the Atlantic meridional overturning circulation since 1980, as hindcast by a data-driven nonlinear systems model
Abstract
The Atlantic meridional overturning circulation (AMOC), an important component of the climate system, has only been directly measured since the RAPID array’s installation across the Atlantic at 26°N in 2004. This has shown that the AMOC strength is highly variable on monthly timescales; however, after an abrupt, short-lived halving of the strength of the AMOC early in 2010, its mean has remained ~ 15% below its pre-2010 level. To attempt to understand the reasons for this variability, we use a control systems identification approach to model the AMOC, with the RAPID data of 2004–2017 providing a trial and test data set. After testing to find the environmental variables, and systems model, that allow us to best match the RAPID observations, we reconstruct AMOC variation back to 1980. Our reconstruction suggests that there is interdecadal variability in the strength of the AMOC, with periods of both weaker flow than recently, and flow strengths similar to the late 2000s, since 1980. Recent signs of weakening may therefore not reflect the beginning of a sustained decline. It is also shown that there may be predictive power for AMOC variability of around 6 months, as ocean density contrasts between the source and sink regions for the North Atlantic Drift, with lags up to 6 months, are found to be important components of the systems model.
Keywords
Atlantic meridional overturning circulation (AMOC) · System identification · Data-driven modelling · Forecasting · Hindcast

Introduction
There has been considerable interest in investigating past natural variability in the AMOC as simulated in climate or ocean models (Danabasoglu et al. 2015), and through ocean reanalysis reconstruction (Tett et al. 2014; Karspeck et al. 2015; Jackson et al. 2016). While variability is a characteristic of the AMOC in all models, the climate models disagree on the change that has occurred at 26°N over the last few decades, although most suggest an increase over the period from the mid-1970s to the mid-1990s (Danabasoglu et al. 2015). Ocean reanalyses might be expected to do better, as they assimilate a range of observations (although not the RAPID array) into reanalysis ocean model systems. They show AMOC strengths of the right order (Tett et al. 2014), but many bear little resemblance to the RAPID observations (Tett et al. 2014), or do not cover much of the RAPID period (Karspeck et al. 2015). The latter does, however, show an increase in the 1990s and early 2000s compared to the 1960s–1980s. The one reanalysis product that does match the variation in the RAPID observations well is the GloSea5 analysis from the U.K. Met Office (Jackson et al. 2016). This only extends back to 1995, but suggests that the AMOC during the early 2000s was stronger than both earlier and later in the time series, with late 1990s values being similar to those of recent years.
Table 1 Net northwards transport calculated by Bryden et al. (2005) for cruise-based AMOC-proxy calculations near the RAPID line

Cruise date   Oct. 1957   Sep. 1981   Aug. 1992   Feb. 1998   Apr. 2004
AMOC (Sv)     25.3        20.8        20.6        18.3        17.3
There is general agreement that density anomalies along the western boundary current (Buckley and Marshall 2016), and particularly in the Labrador Sea (Jackson et al. 2016), lead to variation in the AMOC through the southward propagation of boundary waves rather than of water masses (Hodson and Sutton 2012; Jackson et al. 2016). This paper attempts to explore and quantify this suspected relationship between decadal-scale variability in the AMOC at 26°N and the density anomalies of the northern Atlantic, but also takes into account the density contrasts between the tropical source waters of the Gulf Stream and the northern convection zones, through use of a control systems identification approach to model the RAPID AMOC time series. Such control systems model formulations have recently been shown to be useful in understanding the causes underlying variability in a range of environmental processes, including iceberg discharge from the Greenland Ice Sheet (Bigg et al. 2014; Zhao et al. 2016) and fish population response to environmental and fishing pressures (Marshall et al. 2016). Having found the most important environmental variables underlying determination of the strength of the AMOC, the resulting model is used in hindcast mode to reconstruct the AMOC back to 1980, using ocean parameters produced by the GODAS reanalysis system. As an output from the model analysis, the possibility of some predictability of the AMOC will be discussed. The main contributions of the paper are the finding of a dominant timescale of ~ 6–7 months between changes in the meridional density difference and the AMOC strength, which implies some predictability of the AMOC, and, through a hindcasting study, the finding that the recent slowing of the AMOC is not outside the range of model variability since 1980, which is important for understanding the recent behavior of the AMOC.
Data and methods
The conceptual approach of the control systems identification model to be described in “Description of model construction” section is to: (1) select key input variables that are a priori regarded as critical in determining the output variable, in this case the AMOC strength at 26°N; and (2) build a model containing terms involving linear or nonlinear lagged combinations of these inputs, to be selected sequentially according to the magnitude of their contribution to the output variable’s variance. The resulting equation is thus analogous in some respects to a multiple regression model, but using more complex terms of an initially unknown number, constructed in a statistically rigorous fashion. This approach has proven very successful in a range of engineering environments since its first development by Chen and Billings (1989), and has recently begun to show its versatility within environmental contexts, as noted above.
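As a concrete illustration of step (2), the candidate term dictionary for such a model can be sketched as follows. This is a minimal Python sketch: the function names, lag ranges, and data layout are illustrative assumptions, not the authors' code.

```python
from itertools import combinations_with_replacement

def narx_terms(y, u, ny=2, nu=8, degree=2):
    """Build a candidate regressor dictionary for a polynomial NARX model.

    y, u   : lists of output (e.g. AMOC) and input (e.g. NAO) samples
    ny, nu : maximum output and input lags considered (illustrative values)
    degree : maximum nonlinear degree of any product term
    Returns (names, columns): term labels and their sample columns.
    """
    start = max(ny, nu)                      # first time step with all lags available
    base = {f"y(t-{k})": [y[t - k] for t in range(start, len(y))]
            for k in range(1, ny + 1)}
    base.update({f"u(t-{k})": [u[t - k] for t in range(start, len(u))]
                 for k in range(0, nu + 1)})
    names, cols = [], []
    for d in range(1, degree + 1):           # all lag products up to the given degree
        for combo in combinations_with_replacement(sorted(base), d):
            names.append("*".join(combo))
            cols.append([_prod(vals) for vals in zip(*(base[c] for c in combo))])
    return names, cols

def _prod(vals):
    out = 1.0
    for v in vals:
        out *= v
    return out
```

The selection stage described in the text then chooses a small subset of these columns; the dictionary itself can be large, which is why a stepwise, variance-based selection procedure is needed.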
The current analysis uses a three-stage process, partly owing to its origin in the RAPID Challenge of late 2015 (www.rapid.ac.uk/challenge/; Smeed 2017), which aimed to predict the AMOC over April 2014–September 2015 prior to the retrieval of the mooring data from the RAPID array (Cunningham et al. 2007) from its (then) latest deployment. Input variables, described in “Data used in the control systems model” section, are first used to construct a range of test models to fit the then-existing series of RAPID AMOC data from April 2004 to March 2013. Trial predictions of the AMOC output variable are then made for April 2013–March 2014, using input variables available over this time span. These trial predictions are then compared to the AMOC strength up to March 2014. The best test model, in terms of its reproduction of the AMOC signal across a range of statistical measures, is then selected. Finally, the model, having been verified as robust over a decade, is tested on the retrieved data up to February 2017 and used to hindcast the AMOC strength back to 1980.
Data used in the control systems model
Description of model construction
The \( \theta \)’s are the model parameters, and \( \ell \) is the nonlinear degree of the polynomial model. A NARX model of degree \( \ell \) implies that the degree of each term in Eq. (2) is not higher than \( \ell \). For example, \( x_{1} x_{2} \) is a term of nonlinear degree 2, while \( x_{1}^{2} x_{2} \) is a term of nonlinear degree 3.
The most popular algorithm for building NARX models is the Orthogonal Forward Regression (OFR) algorithm (Billings 2013; Guo et al. 2015). This is a stepwise algorithm that identifies the most significant predictors and regressors that explain the output variable’s variance using an Error Reduction Ratio (ERR) index (Chen et al. 1989). A comprehensive explanation of the meaning of ERR may be found in Chen et al. (1989), Wei et al. (2004a, b) or Billings (2013). In recent years, several improvements to the original OFR algorithm have been developed. One such improvement is the use of new metrics, such as mutual information (MI) (Billings and Wei 2007) and distance correlation (Ayala Solares and Wei 2015), given that the ERR index, defined as a squared correlation function (Wei and Billings 2008), is only able to capture linear dependencies.
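The ERR-based selection loop at the heart of OFR can be sketched compactly. The following is an illustrative implementation of the classical version (Chen et al. 1989), not the authors' code: at each step, every remaining candidate is orthogonalised against the terms already chosen, and the candidate explaining the largest remaining fraction of the output energy is kept.

```python
import numpy as np

def ofr_err(P, y, n_terms):
    """Greedy Orthogonal Forward Regression using the Error Reduction Ratio.

    P : (N, M) matrix of candidate regressors; y : length-N output vector.
    Returns the selected column indices and their ERR values.
    """
    y = np.asarray(y, dtype=float)
    yty = float(y @ y)
    selected, errs, W = [], [], []
    for _ in range(n_terms):
        best_j, best_err, best_w = None, -1.0, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = np.array(P[:, j], dtype=float)
            for wk in W:                      # Gram-Schmidt against chosen terms
                w = w - (w @ wk) / (wk @ wk) * wk
            wtw = float(w @ w)
            if wtw < 1e-12:                   # candidate is (near) redundant
                continue
            err = (float(w @ y) ** 2) / (wtw * yty)
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        selected.append(best_j)
        errs.append(best_err)
        W.append(best_w)
    return selected, errs
```

The cumulative sum of the ERRs measures the fraction of output energy explained, which provides a natural stopping criterion; the MI- and distance-correlation-based variants mentioned above replace the squared-correlation score inside the inner loop.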
Evaluating models on held-out data in this way has both advantages and disadvantages. In general, such performance metrics provide a better estimate of the test error, and make fewer assumptions about the true underlying model (James et al. 2013). However, splitting the data into training and testing sets reduces the sample size available for both model training and testing. The approach is also computationally expensive, since the process may need to be repeated several times to achieve good estimates of accuracy.
Model evaluation
The data set is divided into three parts. The first part contains data from April 2004 to March 2013, which are used for training several models using the ERR and MI indices, together with the performance metrics. The second part, from April 2013 to March 2014, is used for model validation/comparison and selection. The last part contains data from April 2014 to February 2017, which are used to test the models’ predictive performance on data that were not used in the model identification and selection phases.
Mean Error:$$ {\text{ME}} = \frac{1}{N}\mathop \sum \limits_{n = 1}^{N} e_{n} $$(6a)
Mean Absolute Error:$$ {\text{MAE}} = \frac{1}{N}\mathop \sum \limits_{n = 1}^{N} \left| {e_{n} } \right| $$(6b)
Root Mean Squared Error:$$ {\text{RMSE}} = \sqrt {\frac{1}{N}\mathop \sum \limits_{n = 1}^{N} \left( {e_{n} } \right)^{2} } $$(6c)
The above metrics are widely used in traditional modeling practice. The ME is used to check whether the mean of the model error is close to zero. The RMSE is usually used to measure the overall performance of the model, while the MAE can be used to measure the model’s predictive power in detecting extreme or peak values of the system response.
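The three metrics of Eq. (6a–6c) are straightforward to compute. The sketch below assumes the error is defined as observation minus prediction, a sign convention the text leaves implicit:

```python
import math

def error_metrics(observed, predicted):
    """Return (ME, MAE, RMSE) for paired observation/prediction series (Eq. 6)."""
    errors = [o - p for o, p in zip(observed, predicted)]
    n = len(errors)
    me = sum(errors) / n                          # mean error (Eq. 6a): model bias
    mae = sum(abs(e) for e in errors) / n         # mean absolute error (Eq. 6b)
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # root mean squared error (Eq. 6c)
    return me, mae, rmse
```

Note that RMSE penalises large individual errors more heavily than MAE, so the two diverge most when the model misses isolated extremes.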
Results
The models
Based on the data description given in “Data used in the control systems model” section, three sets of variables are used to build three different NARX models of the AMOC. As described in “Model evaluation” section, data from April 2004 to March 2014 are used for training and validation, while those from April 2014 to February 2017 are used for testing. The maximum order of the polynomials used in model construction (Eq. 2) was 2, in accord with the findings of “Nonlinear versus linear models” section. In all cases, the ERR and MI are used together with the performance metrics of Eq. (6) to determine the most appropriate models.
The three cases involve selections of varying groups of input variables as summarized below, where the variables are defined in “Data used in the control systems model” section. All cases assume that both atmospheric and oceanic quantities contribute towards the observed AMOC variation, in line with the mix of Ekman transport and density-driven elements contributing to the flow. In all cases, the atmospheric component is represented by the large-scale atmospheric circulation measure of the NAO. However, we examine three different ways in which density may contribute to the AMOC: Case 1 uses the mean surface density over the origin and sinking regions for the North Atlantic Drift; Case 2 considers the density gradient between the sinking and origin regions; and Case 3 allows both of these density measures to play a role in the model.
Case 1—forced by the relative contributions of the atmosphere and ocean mean states:

AMOC strength (output variable)

NAO index—N (input variable)

Mean of the density variables (input variable), defined as$$ U = \frac{{{\text{GM}} + {\text{LS}} + {\text{NS}}}}{3} $$(7)

Case 2—forced by the relative contributions of the atmosphere and the meridional density difference between surface and deep water source waters:

AMOC strength (output variable)

NAO index—N (input variable)

Difference of the density variables—V (input variable)

Case 3—forced by the relative contributions of the atmosphere and the contrasting mean and meridional differences in ocean density:

AMOC strength (output variable)

NAO index—N (input variable)

Mean of the density variables—U (input variable)

Difference of the density variables—V (input variable)
It is noteworthy that the variables U and V, while retaining the annual cycle visible in Fig. 2, have opposite extremes: the highest value of the mean density, U, occurs during the winter, while the largest difference in density, V, occurs during the summer. For each of the three variables, AMOC, U, and V, the corresponding mean value is removed prior to the model building procedure. The mean values, estimated from the training data (i.e., data from April 2004 to March 2013), are 16.97 Sv for the AMOC, 1026.98 kg m^{−3} for U, and 3.4 kg m^{−3} for V. This is done partly because the magnitudes of the density variables are much larger than those of the N index and the AMOC strength; removing the mean ensures that the density variables do not dominate the training and validation phases, and that the resulting models are more robust.
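The construction of the density inputs and the mean removal can be sketched as follows. Eq. (7) fixes U, but the exact definition of V is not reproduced in this excerpt, so the sinking-minus-source form used below is an assumption, chosen only to be consistent with V's positive training-period mean of 3.4 kg m^{−3}:

```python
def build_density_inputs(gm, ls, ns):
    """Construct the two density inputs from regional surface densities (kg m^-3).

    gm : Gulf of Mexico (source region) series; ls, ns : the two northern
    sinking-region series entering Eq. (7).  U follows Eq. (7); the form of
    V below (sinking minus source) is an assumption, not the paper's formula.
    """
    u = [(g + l + n) / 3.0 for g, l, n in zip(gm, ls, ns)]   # Eq. (7)
    v = [(l + n) / 2.0 - g for g, l, n in zip(gm, ls, ns)]   # assumed form of V
    return u, v

def demean(series, training_mean):
    """Remove the training-period mean so all model variables share a common scale."""
    return [x - training_mean for x in series]
```

Using the training-period mean (rather than the full-series mean) when demeaning the hindcast inputs keeps the test and hindcast data strictly out of the model-building step.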
Table 2 Top: model terms selected when modeling the AMOC strength using the NAO index and the density variables for each of the three model cases. Bottom: weighted performance metrics for each model over the training and validation period

Case 1
Variable          Parameter   ERR (%)
U(t − 7)          2.221       17.95
N(t)              1.307       13.88
N(t)U(t − 6)      −1.363      6.63
N(t − 8)U(t − 3)  1.096       4.42

Case 2
Variable          Parameter   ERR (%)
V(t − 7)          −2.449      20.60
N(t)              1.316       14.46
N(t)V(t − 6)      1.237       5.27
N(t − 8)V(t − 3)  −1.065      5.10

Case 3
Variable          Parameter   ERR (%)
V(t − 7)          −2.500      20.60
N(t)              1.207       14.46
N(t)U(t − 6)      −1.240      5.90
N(t − 3)V(t − 3)  1.018       4.61

Performance metrics
            Case 1     Case 2     Case 3
ME (Sv)     −0.3188    −0.2123    −0.3479
MAE (Sv)    1.8603     1.6908     1.8940
RMSE (Sv)   2.3282     2.0761     2.2852
Each trained model is evaluated using the training and validation data sets up to March 2014. The weighted performance metrics for the three models are shown in the bottom panel of Table 2. From these, it is argued that the Case 2 model performs best overall, as the lowest value of each metric occurs for this Case. This suggests that the density difference between the deep water formation areas and the upstream Gulf Stream source region 7 months earlier provides the best indication of variation in the AMOC strength, this being the leading term for Case 2. Furthermore, an important observation is that all three cases agree that the current NAO index plays a discernible role in the AMOC strength, as all models have a second term linearly dependent on N(t), with an element of N being involved in all higher order terms (Table 2).
Table 3 Performance metrics on the training dataset (April 2004–March 2013), validation dataset (April 2013–March 2014), test dataset (April 2014–February 2017), and validation + test dataset (April 2013–February 2017), using the best model found (the Case 2 model)

        Training (Sv)   Validation (Sv)   Test (Sv)   Validation + test (Sv)
ME      −0.2123         0.7304            −0.3626     −0.0836
MAE     1.6908          1.3569            2.1810      1.9706
RMSE    2.0761          1.6092            2.6476      2.4251
It is noteworthy that over the whole period of the RAPID dataset, the correlation between the model simulation output (from the Case 2 model) and the observations is 0.62, statistically significant well beyond the 1% level. It is also worth noting that Fig. 5 shows that the model successfully captures the transition from a semi-regular annual cycle prior to 2012 to the more chaotic variability since then. Nevertheless, as well as the poor performance of the model in 2014/2015, high peak levels tend to be underestimated throughout (Fig. 5). It is not clear what causes this; however, only large-scale measures of atmospheric and oceanic conditions have been used as inputs to the model, so any more locally generated variability will not be captured.
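The quoted significance of the correlation can be checked with the usual t transformation of the Pearson coefficient. The sketch below is illustrative; note that serial correlation in monthly data reduces the effective sample size, so a test that treats the months as independent gives an upper bound on the true significance.

```python
import math

def pearson_significance(x, y):
    """Pearson correlation and the two-sided t statistic for H0: r = 0.

    The t value should be compared against quantiles of the t(n-2)
    distribution; it assumes independent samples.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return r, t
```

With r = 0.62 and roughly 150 monthly values, t is far beyond the 1% critical value of t(n−2), consistent with the statement above even after allowing for some loss of degrees of freedom.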
Nonlinear versus linear models
The above linear model is applied to predict the AMOC strength. The weighted performance metrics of the linear model on the training and validation data sets are: ME = 0.5393 Sv, MAE = 1.9481 Sv, and RMSE = 2.5678 Sv, all of which are significantly larger than the metrics for the Case 2 model in Table 2, clearly suggesting that the purely linear model is inferior to the Case 2 model. This is consistent with the poorer fit of the linear model shown in the bottom panel of Fig. 5.
Table 4 P values obtained from the Ramsey regression equation specification error test (RESET) to determine the appropriate order of the model

Polynomial degree   P value
2                   4.591e−05
3                   0.9573
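The RESET procedure behind the table above can be sketched as follows (an illustrative numpy implementation, not the authors' code): fit the linear model, augment it with powers of the fitted values, and F-test whether the added terms improve the fit. A large F (small p value) for a given power indicates that the specification needs terms of that nonlinear degree.

```python
import numpy as np

def ramsey_reset(X, y, power=2):
    """Ramsey RESET: F-test on powers of the fitted values added to a linear model.

    X : (N, k) regressor matrix including an intercept column; y : (N,) output.
    Tests whether yhat**2 ... yhat**power improve the fit.
    Returns the F statistic and its degrees of freedom (q, N - k - q).
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ beta
    rss0 = float(np.sum((y - yhat) ** 2))          # restricted residual sum of squares
    Xa = np.column_stack([X] + [yhat ** p for p in range(2, power + 1)])
    beta_a, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    rss1 = float(np.sum((y - Xa @ beta_a) ** 2))   # unrestricted residual sum of squares
    q = power - 1                                   # number of added regressors
    df2 = len(y) - Xa.shape[1]
    F = ((rss0 - rss1) / q) / (rss1 / df2)
    return F, (q, df2)
```

The p value is then the upper tail of the F(q, df2) distribution at the computed statistic; the pattern in the table (rejection at degree 2, no rejection at degree 3) is what motivates stopping at quadratic terms.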
Hindcasting
Discussion
The NARX model of the AMOC strength at 26°N has been shown in Figs. 5 and 7 and Table 2 to match the RAPID training data reasonably well, while producing the right magnitude, and the irregular character relative to a “normal” annual cycle, of the recent test dataset. This gives confidence in the broad structure of the hindcast back to 1980. Note also that the more recent cruise calculations from Table 1 agree reasonably well with the model estimates of the AMOC (Fig. 6). Nevertheless, details of the variation in the AMOC are not always well-captured. The extrema during the training and test periods are often underestimated, although there are periods when these are captured well. This underestimation seems particularly true of the maxima, while the occurrence of major negative excursions is found within the model. In this context it is notable that the extended reduction in observed AMOC strength around the beginning of 2010 is well-predicted by the model (Fig. 5). This is linked to an extreme variation in the density difference, V, between a peak maximum in 2009 and a peak minimum in 2010, associated with the prolonged negative excursion of the NAO around this period (Fig. 2), which led to the coldest winter in the UK since 1978/1979 (Prior and Kendon 2011). On the other hand, all the high AMOC peaks in the mid-2000s are underestimated by the model. By some measures an average of the three case models gives a good performance (see “Appendix”), but this inability to reproduce high extremes remains (Fig. 9).
The models find that a dominant lag time in many terms is around 6–8 months, particularly for V, the density difference between the convection regions and the Gulf Stream source. This timescale agrees well with those found in previous studies, which suggest that AMOC variation is linked to the transit time of boundary waves generated by density fluctuations in the Labrador Sea that then travel south along the American shelf (Hodson and Sutton 2012; Jackson et al. 2016). The AMOC’s variation is thus driven by wave signals and not by direct change in water mass properties, which would have a much longer timescale if important. However, some modulation of this signal is found in shorter-term signals from the atmosphere, through the NAO: Table 2 shows a strong, but secondary, signal of instantaneous linear terms in N. These terms arise from direct responses of the upper ocean to the wind-induced Ekman transport. Our analysis also showed that while the leading terms of each model are linear, the best model has distinct nonlinear components, involving a modulation of the wind and density difference variables. This nonlinearity was important in providing the best reproduction of the observed AMOC variation, and its inclusion was statistically robust. The necessity of including nonlinearity to obtain the best model is consistent with the nonlinear nature of many density-driven wave processes (Gill 1982).
Looking at the longer model reconstruction, back to 1980, an element of decadal-scale change is visible (Fig. 6). While there is essentially no trend over the whole record (−0.02 Sv/year), the 1980s tended to have a higher modeled AMOC (17.2 ± 1.7 Sv) than the late 1990s (16.2 ± 2.1 Sv over 1995–1999). It is also notable that the hindcast AMOC varies in a range of approximately 13–20 Sv. Rapid and significant change in the strength of the AMOC within this range is a characteristic of the longer term pattern, and the recent changes since 2010 are not unprecedented.
Conclusions
Using the control system identification model, NARX, it has been shown that the variation in the AMOC during the RAPID observational program is consistent with variability over the preceding 25 years. This includes the ability to experience periods of distinctly reduced flow, and decadal-scale variation in the long term flow strength. Thus, recent slower flows (the observed AMOC mean over 2009–2013 is 15.6 Sv) are not dissimilar to model hindcasts for the late 1990s (the modeled AMOC mean over 1995–1999 is 16.2 Sv), given that the model is not normally able to capture short-term negative excursions. In addition, the transect measurements of the AMOC from before the RAPID era shown in Table 1 agree well with the model predictions since the mid-1990s. Those from 1981 and 1992 are ~ 2 Sv above the model estimates for the cruise months (18.7 and 17.9 Sv, respectively), which is within the error estimate for these observations (Bryden et al. 2005).
It has also been shown that the variation of the AMOC is linked strongly to the variation in the density difference between the northern sinking waters and the Gulf of Mexico source waters of the main overturning current, with a time lag of ~ 7 months, commensurate with the physical driving force being boundary density waves. This offers the future opportunity for some predictive power of the strength of the AMOC in the subtropical North Atlantic.
Acknowledgements
We thank the UK RAPID programme for providing the AMOC data at http://www.rapid.ac.uk/rapidmoc/. GODAS data was provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their Web site at http://www.esrl.noaa.gov/psd/.
Compliance with ethical standards
Conflict of interest
The authors have no financial conflicts of interest in carrying out this research.
References
Ayala Solares JR (2017) Machine learning and data mining for environmental systems modelling and analysis. PhD Thesis, University of Sheffield
Ayala Solares JR, Wei HL (2015) Nonlinear model structure detection and parameter estimation using a novel bagging method based on distance correlation metric. Nonlinear Dyn 82:201–215
Barnston AG, Livezey RE (1987) Classification, seasonality and persistence of low-frequency atmospheric circulation patterns. Mon Weather Rev 115:1083–1126
Bigg GR, Levine RC, Green CJ (2011) Modelling abrupt glacial North Atlantic freshening: rates of change and their implications for Heinrich events. Glob Planet Change 79:176–192
Bigg GR, Wei HL, Wilton DJ, Zhao Y, Billings SA, Hanna E, Kadirkamanathan V (2014) A century of variation in the dependence of Greenland iceberg calving on ice sheet surface mass balance and regional climate change. Proc R Soc Ser A 470:20130662
Billings SA (2013) Nonlinear system identification: NARMAX methods in the time, frequency, and spatio-temporal domains. Wiley, New York
Billings SA, Wei HL (2007) Sparse model identification using a forward orthogonal regression algorithm aided by mutual information. IEEE Trans Neural Netw 18(1):306–310
Bryden HL, Longworth HR, Cunningham SA (2005) Slowing of the Atlantic meridional overturning circulation at 25°N. Nature 438:655–657
Buckley MW, Marshall J (2016) Observations, inferences, and mechanisms of the Atlantic Meridional Overturning Circulation: a review. Rev Geophys 54:5–63
Chen S, Billings SA (1989) Representations of non-linear systems: the NARMAX model. Int J Control 49:1013–1032
Chen S, Billings SA, Luo W (1989) Orthogonal least squares methods and their application to non-linear system identification. Int J Control 50:1873–1896
Collins M, Knutti R, Arblaster J, Dufresne JL, Fichefet T, Friedlingstein P et al (2013) Long-term climate change: projections, commitments and irreversibility. In: Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) Climate change 2013: the physical science basis. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge, pp 1029–1136
Cunningham SA, Kanzow T, Rayner D, Baringer MO, Johns WE, Marotzke J et al (2007) Temporal variability of the Atlantic Meridional Overturning Circulation at 26.5°N. Science 317:935–938
Danabasoglu G, Yeager SG, Kim WM, Behrens E, Bentsen M, Bi D et al (2015) North Atlantic simulations in coordinated ocean-ice reference experiments phase II (CORE-II). Part II: interannual to decadal variability. Ocean Model 97:65–90
Frajka-Williams E, Meinen CS, Johns WE, Smeed DA, Duchez A, Lawrence AJ, Cuthbertson DA, McCarthy GD, Bryden HL, Moat BI, Rayner D (2016) Compensation between meridional flow components of the Atlantic MOC at 26°N. Ocean Sci 12:481–493
Gill AE (1982) Atmosphere-ocean dynamics. Academic Press, London, p 662
Guo Y, Guo L, Billings S, Wei HL (2015) An iterative orthogonal forward regression algorithm. Int J Syst Sci 46(5):776–789
Hodson D, Sutton R (2012) The impact of resolution on the adjustment and decadal variability of the Atlantic meridional overturning circulation in a coupled climate model. Clim Dyn 39:3057–3073
Jackson LC, Peterson KA, Roberts CD, Wood RA (2016) Recent slowing of Atlantic overturning circulation as a recovery from earlier strengthening. Nat Geosci 9:518–522
James G, Witten D, Hastie T, Tibshirani R (2013) An introduction to statistical learning, vol 6. Springer, New York
Karspeck AR et al (2015) Comparison of the Atlantic meridional overturning circulation between 1960 and 2007 in six ocean reanalysis products. Clim Dyn. https://doi.org/10.1007/s00382-015-2787-7
Kennedy J, Morice C, Parker D, Kendon M (2016) Global and regional climate in 2015. Weather 71:185–192
Marshall A, Bigg GR, van Leeuwen SM, Pinnegar JK, Wei HL, Webb TJ, Blanchard JL (2016) Quantifying heterogeneous responses of fish community size structure using novel combined statistical methods. Glob Change Biol 22:1755–1768
Prior J, Kendon M (2011) The UK winter of 2009/2010 compared with severe winters of the last 100 years. Weather 66:4–10
Ramsey JB (1969) Tests for specification errors in classical linear least-squares regression analysis. J R Stat Soc Ser B 31(2):350–371
Smeed DA (2017) The RAPID challenge: observational oceanographers challenge their modelling colleagues. Ocean Chall 22:16–18
Smeed DA, McCarthy G, Cunningham SA, Frajka-Williams E, Rayner D, Johns WE, Meinen CS, Baringer MO, Moat BI, Duchez A, Bryden HL (2014) Observed decline of the Atlantic meridional overturning circulation 2004–2012. Ocean Sci 10:29–38
Smeed D, McCarthy G, Rayner D, Moat BI, Johns WE, Baringer MO, Meinen CS (2017) Atlantic meridional overturning circulation observed by the RAPID-MOCHA-WBTS (RAPID-Meridional Overturning Circulation and Heatflux Array-Western Boundary Time Series) array at 26°N from 2004 to 2017. British Oceanographic Data Centre—Natural Environment Research Council, UK. https://doi.org/10.5285/5acfd14311047b58e0536c86abc0d94b
Tett SFB, Sherwin TJ, Shravat A, Browne O (2014) How much has the North Atlantic Ocean overturning circulation changed in the last 50 years? J Climate 27:6325–6342. https://doi.org/10.1175/JCLI-D-12-00095.1
Wei HL, Billings SA (2008) Model structure selection using an integrated forward orthogonal search algorithm assisted by squared correlation and mutual information. Int J Model Identif Control 3(4):341–356
Wei HL, Billings SA, Balikhin MA (2004a) Prediction of the Dst index using multiresolution wavelet models. J Geophys Res Space Phys 109(A7):A07212
Wei HL, Billings SA, Liu J (2004b) Term and variable selection for nonlinear system identification. Int J Control 77(1):86–110
Zhao Y, Bigg GR, Billings SA, Hanna E, Sole AJ, Wei HL, Kadirkamanathan V, Wilton DJ (2016) Inferring the variation of climatic and glaciological contributions to West Greenland iceberg discharge in the twentieth century. Cold Reg Sci Technol 121:167–178