
Improving prediction of two ENSO types using a multi-model ensemble based on stepwise pattern projection model

Abstract

This study focuses on improving prediction of the two types of ENSO by combining a multi-model ensemble (MME) with a statistical error correction method that is based on a stepwise pattern projection and applied to each model before constructing the MME. We evaluate this combined approach using five dynamical model datasets from the North American Multi-model Ensemble (NMME) project for the period 1982–2010. The prediction skills of the proposed MME show an improvement over most tropical Pacific regions. With regard to the two ENSO types, improvements in prediction skill are particularly evident for the Niño indices at short lead times. The differences between the Eastern Pacific and Central Pacific ENSO types are more pronounced in the corrected forecasts than in the uncorrected ones, and the zonal center position of the sea surface temperature anomalies in the corrected MME is closer to the observed position than in the uncorrected MME. The results indicate that reducing the prediction errors of each member model with a good correction method before applying the MME provides an effective way to empirically improve forecasts of the two ENSO types.

Introduction

El Niño–Southern Oscillation (ENSO) is the most pronounced climate phenomenon in the tropical Pacific, consisting of sustained periods of warming and cooling of sea surface temperatures (SST) in the central and east-central equatorial Pacific. As the dominant mode of natural climate variability, it has a major impact on the global climate and particularly on East Asian climate variability (e.g., Brönnimann 2007; Rasmusson and Carpenter 1982; Zhang et al. 1996; Zhai et al. 2016). A successful ENSO prediction offers decision-makers an opportunity to take the anticipated climate anomalies into account, which potentially helps to reduce the social and economic losses induced by this natural phenomenon. Many studies have shown that coupled atmosphere–ocean general circulation models (CGCMs) have become a powerful tool for ENSO prediction. However, it remains challenging to predict the two ENSO types due to model biases and other deficiencies (Yu and Kim 2010; Ham and Kug 2012).

A large body of evidence indicates that there exist two flavors of ENSO events in the tropical Pacific (Ashok et al. 2007; Kao and Yu 2009; Kug et al. 2009; Larkin and Harrison 2005a), with distinctly different climate impacts (e.g., Hegyi et al. 2014; Hegyi and Deng 2011; Kim et al. 2009; Wang and Wang 2014; Weng et al. 2007; Zhang et al. 2012; Zhang et al. 2011). In contrast to the canonical ENSO, characterized by SST anomalies in the eastern Pacific (EP), the non-canonical type is characterized by SST anomalies centered in the equatorial central Pacific (CP) (e.g., Ashok et al. 2007; Kao and Yu 2009; Kug et al. 2009; Ren and Jin 2011, 2013; Wang and Ren 2017; Weng et al. 2007; Xiang et al. 2013). This non-canonical type is alternatively referred to as dateline El Niño (Larkin and Harrison 2005a), El Niño Modoki (Ashok et al. 2007; Weng et al. 2007), CP El Niño (Kao and Yu 2009), and warm-pool El Niño (Kug et al. 2009). In this study, we adopt the terminology of EP and CP ENSO to describe the two types.

In recent years, major advancements have been made in examining the typical features and mechanisms of the two ENSO types (Yeh et al. 2014 and references therein). Many previous studies revealed that the CP type of El Niño has become more frequent since the late 1970s (Ashok et al. 2007; Kao and Yu 2009; Kug et al. 2009; Larkin and Harrison 2005a, b; Ren and Jin 2013), and its frequency tends to increase under a warming climate (Yeh et al. 2009). Many international studies have also focused on prediction assessments and predictability of the two ENSO types using single models and multi-models (e.g., Hendon et al. 2009; Imada et al. 2015; Jeong et al. 2012; Lee et al. 2017; Ren et al. 2019a; Yang and Jiang 2014). These studies indicated that the CP ENSO may be more difficult to predict because of its smaller amplitude (Imada et al. 2015; Zhu et al. 2015), even though it tends to possess longer persistence, or a weaker persistence barrier, than the EP ENSO (Kim et al. 2009; Ren et al. 2016; Tian et al. 2019). Barnston et al. (2012) pointed out that the prediction skill of ENSO has experienced a clear interdecadal decline since the 2000s and that the major factor causing this decline is related to ENSO diversity. Most dynamical models have difficulty capturing the typical observed features of the two ENSO types (e.g., Yu and Kim 2010). Therefore, exploring how to improve the performance of climate models in distinguishing the two types of ENSO is essential for further improving seasonal climate predictions (e.g., Ren et al. 2019a).

Together with increased understanding of the ENSO-related physical processes, numerous efforts have been made to improve ENSO predictions (e.g., Cane et al. 1986; Chen et al. 1995, 2004; Cheng et al. 2010; Ham et al. 2009; Izumo et al. 2010; Kang and Kug 2000; Liu and Ren 2017; Luo et al. 2005, 2008; Ren et al. 2014; Zebiak and Cane 1987; Zheng et al. 2006; Zhu et al. 2012, 2017). Among them, applying post-processing procedures is one way that has been proposed in the past decades, e.g., the multi-model ensemble (MME) technique (Barnston et al. 2003; Palmer et al. 2004, 2010; Wang et al. 2009; Kirtman et al. 2014) and the statistical error correction (or downscaling) method (e.g. Anthony et al. 2015; Latif et al. 1998; Luo et al. 2008; Zhang et al. 2003), which have been demonstrated to be able to reduce the uncertainty of coupled models.

As an effective and relatively simple approach, the MME combines various dynamical predictions, which differ in their representations of physical processes, in their numerical schemes, or in their use of observations to construct initial conditions, so that the errors contained in individual models are expected to cancel each other (DelSole et al. 2014; Hagedorn et al. 2005; Tippett and Barnston 2008). As a result, MME prediction systems are currently run at several operational centers that routinely provide MME seasonal forecasts, among which the most widely used is the North American Multi-model Ensemble (NMME) project (Kirtman et al. 2014). The NMME is a collaborative project among US modeling centers and the Canadian Meteorological Centre (CMC), aimed at improving subseasonal-to-seasonal prediction capability (Kirtman et al. 2014). The NMME project has proven effective at forecasting ENSO and its relevant climate variables (Kirtman et al. 2014); hence, hindcast datasets from the NMME system are used in this work.

Ren et al. (2019a) found that the MME has a limited ability to predict the different zonal positions of the SST anomaly centers of the two types, despite considerable success in the prediction skill of the Niño indices. As another promising approach, empirical or statistical correction methods have been widely used in climate prediction (Feddersen et al. 1999; Graham et al. 1994; Kang et al. 2004; Kug et al. 2004, 2007b, 2008a; Ren et al. 2014; Sailor and Li 1999; Von Storch et al. 1993; Ward and Navarra 1997; Zorita et al. 1995; Zorita and Von Storch 1999). Based on a pattern projection model (Kug et al. 2007b), Kug et al. (2008a) proposed a Stepwise Pattern Projection Model (SPPM) that effectively improved the prediction skill of the Seoul National University (SNU) coupled GCM. The main idea of the SPPM is to produce a prediction at the predictand grid point by projecting the predictor field onto its covariance pattern with the one-point predictand, after selecting the predictor domain. The SPPM combined with the MME method has been successfully applied to seasonal prediction (Kug et al. 2008b; Min et al. 2014).

As previously mentioned, the current generation of climate models has a limited capability of reproducing the typical observed features of the two ENSO types. However, compared with the number of studies on prediction of the canonical ENSO, relatively few studies have attempted to improve the skill of dynamical models in predicting the two ENSO types. We therefore aim to develop an effective method that combines the SPPM and the MME technique to improve predictions of the two types of ENSO. The prediction skill of the combined method is compared against the uncorrected MME and the individual models. The data and MME methodologies are described in Sect. 2. The results of the MME predictions are presented in Sect. 3. Section 4 gives a brief summary and discussion.

Data and methods

Hindcast datasets

The NMME was launched in the United States (Kirtman et al. 2014), with real-time experimental operational forecasts made at the National Oceanic and Atmospheric Administration (NOAA)/National Centers for Environmental Prediction (NCEP) starting in August 2011. In this study, we select five models from the NMME (Becker et al. 2014; Kirtman et al. 2014), including the Climate Forecast System, version 2 (CFSv2; Saha et al. 2006, 2014), the Forecast-Oriented Low Ocean Resolution (FLOR) version of the Geophysical Fluid Dynamics Laboratory (GFDL) Climate Model (Delworth et al. 2006; Vecchi et al. 2014), version 2.1 of the GFDL Climate Model (Delworth et al. 2006), and the Community Climate System Model, version 4 (CCSM4; Danabasoglu et al. 2012). All hindcasts and climatologies are based on the period 1982–2010. For each calendar month and lead time, the predicted anomalies are derived by subtracting the corresponding climatology, which is a function of both the initial condition and the lead time; no additional time smoothing is applied. The number of ensemble members ranges from 10 to 24, and descriptions of each prediction model are given in Table 1. Since each prediction model has several ensemble members, the ensemble mean of each model is taken first, and the final SST prediction is the ensemble mean of the five model predictions.
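The two averaging steps described above can be sketched as follows. This is a minimal NumPy illustration of our own; the array shapes and function names are assumptions, not part of the NMME archive specification:

```python
import numpy as np

def lead_dependent_anomalies(hindcasts):
    """Remove a climatology that depends on both initial month and lead time.

    hindcasts: (n_years, n_init_months, n_leads, n_grid) ensemble-mean
    hindcasts of one model. The climatology is the mean over years for
    each (initial month, lead) pair, so no time smoothing is applied.
    """
    clim = hindcasts.mean(axis=0, keepdims=True)  # (1, init, lead, grid)
    return hindcasts - clim

def multi_model_ensemble(anomaly_fields):
    """Equal-weight MME: average the ensemble-mean anomalies of the models."""
    return np.mean(np.stack(anomaly_fields, axis=0), axis=0)
```

In practice each model would first be interpolated to the common 2° × 2° grid before the equal-weight average is taken.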

Table 1 The details of MME members

Observational dataset

For validation, the SST data used are the observed monthly means from the Improved Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) (Smith et al. 2008; Smith and Reynolds 2004) produced by the National Centers for Environmental Information (NCEI). Observational anomalies in this paper are calculated from the climatology over the period 1982–2010. For comparison purposes, the ensemble-mean prediction of each model is interpolated to a common 2° latitude × 2° longitude grid.

Stepwise pattern projection model (SPPM)

In this study, we employ the SPPM to correct the systematic errors of each model before applying the MME method. A leave-one-out cross-validation process is used, in which the data for the target year are completely excluded. The main idea of the SPPM is to produce the corrected SST prediction at each grid point by projecting the model prediction field onto the covariance pattern between the large-scale model prediction field and the one-point observed predictand (Kug et al. 2008a). The predictand is the observed SST at each grid point, denoted Y(t). The predictor is the SST pattern of the model variable over an optimal domain, denoted Ψ(x, y, t), where x and y are the zonal and meridional grid indices, respectively, and t is the time index.

The optimal predictor domain (D), whose selection plays a crucial role in the SPPM, is selected automatically by calculating correlation coefficients between the predictand and the two-dimensional candidate predictors. Among all possible predictor grid points, those with relatively high correlation are selected as the optimal predictor grid points. First, only grid points with correlation greater than 0.95 are used to form the reconstructed domain. If this first group contains fewer than 200 grid points, grid points with correlation greater than 0.9 are also included, and so on. If there are still too few grid points after including all grid points with correlation greater than 0.3, the corresponding climatological value of the model is used as the final prediction. The selected grid points may be located in several regions; the reconstructed domain is then taken as the predictor domain (D). In this study, the global SST anomaly field is searched for the optimal predictor domain.
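The stepwise threshold-lowering loop described above can be sketched as follows. This is a minimal NumPy version of our own; the 0.05 step between thresholds and the variable names are illustrative assumptions, not taken from Kug et al. (2008a):

```python
import numpy as np

def select_predictor_domain(pred_field, y, min_points=200,
                            thresholds=np.arange(0.95, 0.25, -0.05)):
    """Stepwise selection of the predictor domain D.

    pred_field : (T, n_grid) candidate predictor field over the training period
    y          : (T,) one-point observed predictand
    Grid points are admitted by lowering the correlation threshold from 0.95
    until at least `min_points` grids are selected; if even the 0.3 threshold
    yields too few points, None is returned and the caller falls back to the
    model climatology as the final prediction.
    """
    fa = pred_field - pred_field.mean(axis=0)
    ya = y - y.mean()
    corr = (fa * ya[:, None]).sum(axis=0) / (
        np.sqrt((fa ** 2).sum(axis=0) * (ya ** 2).sum()) + 1e-12)
    for thr in thresholds:
        mask = corr > thr            # "correlation more than" the threshold
        if mask.sum() >= min_points:
            return mask              # boolean mask defining the domain D
    return None
```

The returned mask is then used to restrict the model field Ψ(x, y, t) to the domain D before the projection of Eq. (2).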

Using the selected predictors, a corrected prediction can be produced based on the SPPM. Firstly, a linear regression prediction model is established as

$$Y(t) = \alpha \cdot P(t),$$
(1)

where P indicates a projected time series from the covariance pattern and model prediction Ψ(x, y, t) as:

$$P(t) = \sum\limits_{x,y}^{D} {Cov(x,y) \cdot \psi (x,y,t)} .$$
(2)

Cov(x, y) is the covariance pattern between the predictand Y(t) and the predictor field Ψ(x, y, t) within the predictor domain (D), as follows:

$$Cov(x,y) = \frac{1}{T}\sum\limits_{t = 1}^{T} {Y(t) \cdot \psi (x,y,t)} ,$$
(3)

where T is the length of the training period. The covariance pattern represents the part of the model prediction that is related to the observed predictand, and it differs for each predictand grid point. The regression coefficient α of Eq. (1) can be expressed as

$$\alpha = \frac{\frac{1}{T}\sum\nolimits_{t = 1}^{T} Y(t) \cdot P(t)}{\frac{1}{T}\sum\nolimits_{t = 1}^{T} P^{2}(t)}.$$
(4)

The corrected model prediction \(\hat{Y}\) for the target year tf is then given by

$$\hat{Y}(t_{f} ) = \alpha \cdot P(t_{f} ).$$
(5)

For more details on the SPPM procedure, one may refer to Kug et al. (2008a).
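Equations (2)–(5) amount to a covariance-weighted projection followed by a one-parameter regression, which can be written compactly. The following is a minimal NumPy translation of our own; the variable names are assumptions, and the anomalies are assumed to be zero-mean over the training period:

```python
import numpy as np

def sppm_correct(psi_train, y_train, psi_target):
    """SPPM correction at one predictand grid point, following Eqs. (2)-(5).

    psi_train  : (T, nD) model prediction anomalies on the selected domain D
                 over the T training years
    y_train    : (T,) observed predictand anomalies at the target grid point
    psi_target : (nD,) model prediction on D for the withheld target year
    """
    T = len(y_train)
    cov = (y_train[:, None] * psi_train).sum(axis=0) / T  # Eq. (3): Cov(x, y)
    p_train = psi_train @ cov                              # Eq. (2): P(t)
    alpha = (y_train @ p_train) / (p_train @ p_train)      # Eq. (4)
    return alpha * (psi_target @ cov)                      # Eq. (5): Y_hat(t_f)
```

In the leave-one-out setting, this routine would be called once per target year and grid point, with the target year excluded from `psi_train` and `y_train`.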

After applying the SPPM, the final SST prediction is made with the equal-weighting ensemble method and is called the “corrected MME”. For comparison, the mean SST anomalies of the models without the statistical correction procedure are also averaged, yielding the “uncorrected MME”.

Niño indices for two types of ENSO

The Niño4, Niño3.4, and Niño3 indices are calculated by averaging SST anomalies over the regions 5° S–5° N, 160° E–150° W; 5° S–5° N, 120°–170° W; and 5° S–5° N, 90°–150° W, respectively. To measure the two ENSO types, the Niño warm-pool index (NiñoWPI) and the Niño cold-tongue index (NiñoCTI) proposed by Ren and Jin (2011) are used to represent the CP and EP types, respectively. The two indices are a piecewise combination of the Niño3 and Niño4 indices, conditioned on the ENSO phase:

$$\left\{\begin{array}{l} \mathrm{CTI} = N_{3} - \alpha N_{4} \\ \mathrm{WPI} = N_{4} - \alpha N_{3} \end{array}\right. \qquad \alpha = \left\{\begin{array}{ll} 2/5, & N_{3}N_{4} > 0 \\ 0, & \text{otherwise} \end{array}\right..$$
(6)

Here, N3 and N4 denote the Niño3 and Niño4 indices, respectively. The parameter α is determined by a minimization procedure that makes the clusters of points lie as close as possible to the diagonals of the transformed coordinate axes, where the clusters are the WP El Niño and CT El Niño events in the phase space of the Niño3 and Niño4 indices. For the observations, α is set to 2/5, as in Eq. (6), consistent with Ren and Jin (2011). For each uncorrected and corrected model, α is determined separately by the same minimization procedure. For a detailed description of the procedure, one may refer to Ren and Jin (2011, 2013).
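Equation (6) is straightforward to apply pointwise along the time axis. A minimal sketch in NumPy, taking the box-averaged Niño3 and Niño4 series as given:

```python
import numpy as np

def cti_wpi(n3, n4, alpha=0.4):
    """NiñoCT and NiñoWP indices of Ren and Jin (2011), Eq. (6).

    n3, n4 : Niño3 and Niño4 anomaly time series of equal length
    alpha  : applied only when N3*N4 > 0 (in-phase), otherwise 0
    """
    n3, n4 = np.asarray(n3, float), np.asarray(n4, float)
    a = np.where(n3 * n4 > 0.0, alpha, 0.0)  # phase-dependent coefficient
    return n3 - a * n4, n4 - a * n3          # (CTI, WPI)
```

For each corrected or uncorrected model, `alpha` would be re-fitted with the minimization procedure of Ren and Jin (2011) rather than fixed at 2/5.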

Results

Prediction skills of the El Niño

To quantify how well the SPPM improves the SST predictions, the spatial distributions of the temporal correlation coefficients (TCC) between observed and predicted SST anomalies at a lead time of 4 months are presented in Fig. 1. As expected, positive correlations are found almost throughout the Pacific Ocean, and the high-correlation regions are generally located in the central equatorial Pacific. The uncorrected individual models exhibit correlations above 0.6 over the tropical Pacific (10° S–10° N, 165° E–90° W), but their correlations rarely reach 0.8. The uncorrected MME has marginally better skill than the individual models, with correlations above 0.8 covering only a small area. The corrected predictions perform better than the corresponding uncorrected ones, especially the corrected MME, which shows the largest area with correlations above 0.8 in the ENSO region. The area-averaged skill over the equatorial Pacific domain between 15° S and 15° N is 0.53 (0.67), 0.59 (0.65), 0.51 (0.66), 0.55 (0.65), and 0.58 (0.67) for the five uncorrected (corrected) models, and 0.63 (0.70) for the uncorrected (corrected) MME. The TCC increments of the corrected predictions over the uncorrected ones are significant over the central Pacific. Moreover, the positive increments are much larger in the vicinity of the western equatorial Pacific for CM2p1_a and CM2p1. Note that significant negative correlations are found in the same region in the TCC maps, which may arise from the westward-shift biases of the ENSO anomalies. The corrected MME improves the TCC skill over the western equatorial Pacific: whereas the TCC skill of the uncorrected MME remains around 0.2 there, that of the corrected MME exceeds 0.5, indicating that the corrected MME provides a greater capacity for error correction.
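The TCC maps in Fig. 1 are obtained by correlating, at each grid point, the predicted and observed anomaly time series at a fixed lead. A minimal NumPy sketch of our own (the array shapes are assumptions):

```python
import numpy as np

def tcc_map(obs, pred):
    """Temporal correlation coefficient at each grid point.

    obs, pred : (n_time, n_grid) observed and predicted SST anomalies
                at a fixed lead time
    Returns a (n_grid,) field of correlations (NaN where variance is zero).
    """
    oa = obs - obs.mean(axis=0)
    pa = pred - pred.mean(axis=0)
    den = np.sqrt((oa ** 2).sum(axis=0) * (pa ** 2).sum(axis=0))
    return (oa * pa).sum(axis=0) / np.where(den > 0.0, den, np.nan)
```

Area-averaged skills such as the 15° S–15° N values quoted above would then be latitude-weighted means of this field over the chosen domain.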

Fig. 1
figure1

Spatial patterns of temporal correlation coefficients of the uncorrected individual models (a–e), uncorrected MME (f), corrected individual models (g–k), and corrected MME (l) at 4-month lead, as well as the differences between corrected and uncorrected results (m–r). Black dots denote confidence levels greater than 99%

Figure 2 shows the correlation coefficients of the Niño3.4 index as a function of lead time, a metric widely used to verify ENSO prediction skill (Barnston et al. 1997). The correlation coefficients of all the uncorrected individual models are above 0.6 at the 6-month lead time, but none exceeds 0.7. The uncorrected MME exhibits relatively reliable prediction skill, with a correlation coefficient above 0.7 at the 6-month lead time, slightly better than the uncorrected individual models. This agrees with previous findings that multi-model forecasts are generally more skillful than single-model forecasts (Krishnamurti et al. 1999; Kharin and Zwiers 2002; Palmer et al. 2004; Jan van Oldenborgh et al. 2005; Kug et al. 2007a). However, the uncorrected MME does not exceed the skill attained with the SPPM: correlation coefficients for most of the corrected individual models are consistently higher than those of the uncorrected MME at most lead months. The red solid line shows the corrected MME, with a correlation coefficient of 0.8 at the 6-month lead and above 0.7 out to a 9-month lead. The corrected MME outperforms all the corrected individual models and the uncorrected MME in forecasting Niño3.4 SST at all lead months. Averaged over all lead months, the Niño3.4 skill score of the corrected MME is more than 11% higher than that of the uncorrected MME and more than 24% higher than that of CM2p1_a (the worst model).

Fig. 2
figure2

The TCC skill scores for the Niño3.4 index in corrected individual models (solid line), the corrected MME (solid line), uncorrected individual models (dashed line) and uncorrected MME (dashed line), based on the cross validations during 1982–2010

Figure 3 illustrates how the Niño3.4 index skill scores vary with initial month. Encouragingly, the skill scores at nearly all initial and lead months are increased by applying the SPPM and MME. The corrected MME has significantly high skill, with a correlation score of around 0.9 at the 9-month lead when initialized in June (Fig. 3a). The prediction skill of ENSO typically declines more sharply during boreal spring than during other seasons, which is referred to as the spring predictability barrier (e.g., Webster and Yang 1992; McPhaden 2003; Duan and Hu 2016; Duan et al. 2016a, b). Although the skill still drops sharply during springtime for the corrected MME, the barrier is visibly much stronger for the uncorrected MME (Fig. 3b). To show the skill improvement clearly, the skill differences between the corrected and uncorrected forecasts are illustrated in Fig. 3c–h. The skills of the corrected forecasts are consistently higher than those of the uncorrected forecasts, with improvements at most lead and initial months. In particular, the target months with significant improvement usually occur in spring and in the winter of the following year, indicating that the increased skill of the corrected MME can be attributed to a weakening of the spring barrier.

Fig. 3
figure3

The TCC skill scores of the Niño3.4 index as a function of lead month (x-axis) and initial month (y-axis) for the MMEs (a, b), and the TCC differences between corrected and uncorrected forecasts (c–h)

Prediction skills of the two types of ENSO

In this section, we comprehensively assess the effectiveness of the SPPM and MME in improving predictions of the two types of ENSO. Figure 4 shows the TCC skill scores in terms of the (a) Niño3, (b) Niño4, (c) NiñoCT, and (d) NiñoWP indices from the corrected and uncorrected individual models. The skill scores decline gradually with increasing lead month for all the Niño indices. For the Niño3 and NiñoCT indices, which are often used to represent the EP type, the corrected models show higher TCC skill at all lead months, except for FLOR_B and CCSM4, whose corrected forecasts exceed the uncorrected ones only within the first 5 lead months. For Niño4 SST prediction, each corrected model is significantly superior to its uncorrected counterpart. The NiñoWP index shows relatively lower skill than the other indices. For NiñoWP SST prediction, the corrected FLOR_B and CCSM4 have skills comparable to their uncorrected versions, while the other corrected models have higher skill at lead times of up to 5 months. The NiñoWPI skill of the uncorrected models decreases rapidly at short lead months, and beyond a 2-month lead it becomes increasingly difficult for the corrected models to beat the uncorrected ones as the lead time grows. In general, however, the SPPM significantly improves the skill of the Niño3, Niño4, NiñoCT, and NiñoWP indices for most models and lead months relative to the uncorrected models. We further examine the skill scores of the four Niño indices in the uncorrected and corrected MME (Fig. 5). The Niño3 and Niño4 indices show relatively higher skill than the other two, consistent with the results of Fig. 4. The TCC skill scores of the Niño3 and Niño4 indices in the corrected MME are significantly higher than those of the uncorrected MME at all lead months. Meanwhile, the skill scores are clearly improved in the corrected MME at lead times of less than 6 months for NiñoCT SST prediction, and up to a 3-month lead for NiñoWP SST prediction.

Fig. 4
figure4

The TCC skill scores of Niño3, Niño4, NiñoCT, and NiñoWP indices in the uncorrected individual models (dashed line), and corrected individual models (solid line), based on the cross validations during 1982–2010

Fig. 5
figure5

The TCC skill scores of Niño3, Niño4, NiñoCT, and NiñoWP indices in the uncorrected MME (dashed line), and corrected MME (solid line), based on the cross validations during 1982–2010

Now we examine how well the combined SPPM–MME method captures the main features of the two types of ENSO. Figure 6 shows the regression patterns of observed SST onto the normalized observed NiñoCTI and NiñoWPI; the corresponding patterns for the model hindcasts are given in Figs. 7 and 8. The observed patterns clearly show that the magnitude of the SST anomalies tends to be weaker during CP ENSO than during EP ENSO. The EP ENSO exhibits significant anomalous SST warming in the central and eastern equatorial Pacific, while the CP ENSO shows significant SST anomalies confined to the central Pacific.

Fig. 6
figure6

Regression patterns of observed seasonal SST anomalies on the observed normalized NiñoCTI (top) and on the observed normalized NiñoWPI (bottom). Blue dots denote the SST anomaly center longitudes of two types of El Niño where the amplitude of equatorial-mean (5° S–5° N) SST anomaly reaches maximum

Fig. 7
figure7

Regression patterns of the predicted seasonal SST anomalies from uncorrected forecasts (left) and corrected forecasts (right) at 4-month lead on the normalized NiñoCTI. Blue dots denote the SST anomaly center longitudes where the amplitude of equatorial-mean (5° S–5° N) SST anomaly reaches maximum. Black numbers on right top of panels are pattern correlation coefficients (PCCs) between the model patterns and the corresponding observation pattern

Fig. 8
figure8

Same as Fig. 7, but the regression patterns on the normalized NiñoWPI

The regression patterns of predicted SST anomalies on the normalized NiñoCTI at the 4-month lead are shown in Fig. 7. Compared with the observed patterns (Fig. 6), all the uncorrected forecasts capture the general features in the eastern and central Pacific, but some of them produce stronger-than-observed warm SST anomalies over the central Pacific. The uncorrected GFDL models (Fig. 7b, d, f) extend the positive anomalies westward to 150° E, whereas in the observations they extend only to about 170° E. A similar discrepancy from the observations is found in the uncorrected MME. In addition, although most of the uncorrected models simulate the positive anomaly center in the central Pacific well, they fail to depict the maximum positive anomalies observed near the eastern boundary of the equatorial Pacific. In the forecasts from the corrected models, though the amplitudes of the SST anomalies are slightly overestimated in the eastern Pacific, the spatial extent of the positive anomalies and the location of their center are highly consistent with the observed pattern. Compared with the uncorrected models, the corrected models generally better capture the features of the EP ENSO, with most pattern correlation coefficients (PCCs) higher in the corrected models.

Figure 8 shows the regression patterns on the normalized NiñoWPI. The SST anomalies from the model hindcasts are generally weaker for the CP ENSO than for the EP ENSO, in good agreement with the observations. However, almost all the results in Fig. 8 overestimate the magnitude of the SST anomalies related to the CP ENSO and exhibit warm SST anomalies extending to the eastern boundary, particularly the uncorrected CM2p1_a and CM2p1. This may be due to the influence of the models' EP events, whose SST anomalies remain strong in this region. The maximum SST anomalies of the uncorrected CM2p1_a, CM2p1, and the uncorrected MME also lie to the east of the observed center, while the other uncorrected models show a maximum over the central Pacific. In comparison, the positive anomaly centers of the corrected models are closer to the observed center, as can be seen more clearly in Fig. 9, where the main longitudinal center positions of the two types are extracted. Furthermore, a comparison between Figs. 7 and 8 shows that the uncorrected CM2p1_a and CM2p1 fail to depict the differences between the CP and EP ENSO. In contrast, the ability of the corrected models to distinguish the patterns of the two types is significantly higher than that of the uncorrected models.

Fig. 9
figure9

Scatter maps of SST anomaly center longitudes of uncorrected (top) and corrected predictions (bottom) from the regression patterns on the NiñoCTI (a, b) and NiñoWPI (c, d). The reference lines indicate the center longitudes of the observation patterns

In addition, Fig. 9 presents the center longitude index (CLI; Ren et al. 2019a), defined as the longitude at which the amplitude of the equatorial-mean (5° S–5° N) SST anomaly reaches its maximum, which collectively contrasts the center positions of the regressed EP and CP ENSO patterns shown in Figs. 6, 7, and 8. Since many models share a bias of exaggerating positive values near the eastern boundary, centers that appear east of 90° W are ignored. In the observations, the SST center of the EP type is located west of 105° W and that of the CP type east of 168° W. In Fig. 9a, almost all of the models place the maximum SST anomalies west of the observed maximum. Interestingly, the uncorrected MME is among the forecasts farthest from the observations. We may thus conclude that the MME method has a limited ability to distinguish the two types in terms of their center positions, consistent with the result of Ren et al. (2019a).
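The CLI can be extracted from a regressed pattern in a few lines. A minimal NumPy sketch of our own (the grid conventions and variable names are assumptions):

```python
import numpy as np

def center_longitude_index(pattern, lats, lons):
    """Center longitude index (CLI) in the spirit of Ren et al. (2019a).

    pattern : (n_lat, n_lon) regressed SST anomaly pattern
    lats    : latitudes in degrees north
    lons    : longitudes in degrees east (0-360 convention)
    Returns the longitude where the equatorial-mean (5S-5N) anomaly
    amplitude peaks; longitudes east of 90W (> 270E here) are excluded
    to avoid the common eastern-boundary bias.
    """
    eq = (lats >= -5.0) & (lats <= 5.0)
    profile = np.abs(pattern[eq, :].mean(axis=0))
    profile = np.where(lons <= 270.0, profile, -np.inf)  # mask east of 90W
    return float(lons[int(np.argmax(profile))])
```

Applied to the regression patterns of Figs. 6–8, this yields the scatter of center positions summarized in Fig. 9.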

After the correction, almost all of the models have center positions closer to the observed positions, except that the corrected CFS_v2 shows a strong eastward shift whereas the uncorrected CFS_v2 has a clear westward displacement. Although the corrected MME is not the best among the corrected forecasts, its center position is closer to the observed center than that of the uncorrected MME. For the CP type, the uncorrected CCSM4 and CFS_v2 have positions close to the observations, while the other models show a clear eastward displacement, especially CM2p1_a, CM2p1, and the MME at lead times of 1–5 months. Similar to the EP type, most models move closer to the observed position after the correction, except for CFS_v2. Comparing the two MMEs, the center position of the corrected MME is much closer to the observations. Therefore, we conclude that the MME with error correction has an advantage in predicting the locations of the SST anomaly centers for the two types of ENSO.

Summary and discussions

Successful ENSO prediction helps decision makers reduce social and economic losses. However, ENSO prediction is still far from perfect, and the recent decline in ENSO prediction skill is believed to be related to the increased occurrence of CP-type ENSO events. It is therefore essential to improve the predictions of the two types of ENSO. In this paper, five models from the NMME are used to construct an MME prediction combined with a statistical error correction method (SPPM) applied to each model. The prediction skills of the combined method show an improvement over most regions and lead times, and the corrected MME also has an advantage in distinguishing the patterns of the two ENSO types.

To assess the value of the combined method, the prediction skill of the corrected MME has been compared with those of the uncorrected MME and the single models. We evaluated the spatial distribution of the TCCs between the predicted and observed SST anomalies: the corrected MME consistently has the highest prediction skill over the tropical Pacific. The increased skill suggests that the corrected MME effectively reduces prediction errors, particularly over the equatorial western Pacific, where some individual models have serious biases. The Niño3.4 index skill indicates that the corrected MME is generally more skillful at most initial and lead months; compared with the uncorrected MME, its averaged skill has increased by more than 11%. Although the corrected MME still features a spring predictability barrier, it achieves the most significant improvements over the uncorrected MME for target months in spring, suggesting that its spring predictability barrier is much weaker than those of the uncorrected MME and the single models.

We examined the ability of the combined SPPM-MME procedure to improve prediction of the two types of ENSO. The SPPM significantly increases the skills of the Niño indices for both the CP and EP ENSO in most models and at most lead months, indicating that the SPPM has the potential to improve prediction of the two ENSO types, which current climate models still struggle to simulate adequately. Further analysis shows that, compared with the uncorrected MME, the corrected MME substantially improves the skills for the Niño3 and Niño4 indices at all lead months, and for the NiñoWPI and NiñoCTI at short lead times.
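The NiñoCTI and NiñoWPI mentioned here are the cold-tongue and warm-pool indices introduced by Ren and Jin (2011), obtained from the standard Niño3 and Niño4 indices by a simple piecewise transform. A sketch of that transform as we understand it:

```python
def nino_ct_wp(n3, n4):
    """Cold-tongue (CT) and warm-pool (WP) indices from Nino3/Nino4,
    following Ren and Jin (2011): alpha = 2/5 when n3 * n4 > 0,
    and alpha = 0 otherwise.
    """
    alpha = 0.4 if n3 * n4 > 0 else 0.0
    return n3 - alpha * n4, n4 - alpha * n3

# An EP-like warm state projects mainly onto the CT index
ct, wp = nino_ct_wp(1.5, 0.5)  # about (1.3, -0.1)
```

The transform removes the covariability between the two conventional indices during warm and cold events, so that the CT and WP indices separate the EP-type and CP-type signals more cleanly.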

Regression analysis has been used to evaluate the spatial differences between the two types of ENSO in the forecasts, and we found significant differences among the uncorrected models. For the EP ENSO, some of the uncorrected models overestimate the magnitude of the SST anomalies, and some extend the anomalies westward to 150° E whereas the observed anomalies extend only to about 170° E. Almost all of the uncorrected models fail to reproduce the maximum anomalies near the western boundary of the equatorial Pacific. In the corrected forecasts, by contrast, the center location, range, and magnitude of the SST anomalies are highly consistent with the observations. For the CP ENSO, both the corrected and uncorrected forecasts have a bias of extending the SST anomalies eastward to the eastern boundary, but the SST patterns of the corrected models are better than those of the uncorrected ones, with most PCCs higher in the former. In general, the differences between the two types are more pronounced in the corrected forecasts than in the uncorrected ones. We also examined the center positions of the SST anomalies. At short lead months, almost all of the uncorrected models place the maximum SST anomalies west of the observed maximum for the EP type and show a clear eastward displacement for the CP type. The corrected forecasts place the center positions closer to the observations, except for CFS_v2. In addition, although the corrected MME is not the best among the corrected forecasts, its center position is much closer to the observed center than that of the uncorrected MME.
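The PCCs referred to above measure the spatial similarity between predicted and observed regression patterns. In its common centered form, the pattern correlation can be sketched as follows (area weighting is omitted here for brevity):

```python
import numpy as np

def pcc(pred_map, obs_map):
    """Pattern correlation coefficient: Pearson correlation between
    two spatial anomaly maps over the flattened grid (centered form)."""
    p = np.ravel(pred_map) - np.mean(pred_map)
    o = np.ravel(obs_map) - np.mean(obs_map)
    return float(p @ o / np.sqrt((p @ p) * (o @ o)))

# A map that is a linear rescaling of the observed pattern gives PCC = 1
a = np.array([[1.0, 2.0], [3.0, 4.0]])
r = pcc(2.0 * a + 1.0, a)
```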

From the aforementioned results, we demonstrate that the corrected MME can effectively improve ENSO prediction. In this study, the uncorrected MME rarely exceeds the skill of the SPPM at most lead months, indicating that the MME method alone is insufficient to improve realistic ENSO prediction. Moreover, our results are consistent with a previous study (Ren et al. 2019a) in which the MME method made no evident contribution to distinguishing the two types in terms of their center positions, whereas the SPPM improves the ability of most models to distinguish them. The SPPM-MME combination can therefore empirically improve the performance of climate models in reproducing the differences between the two ENSO types. In general, the corrected MME provides an effective way of empirically improving ENSO forecasts, particularly forecasts of the two ENSO types. It is also of interest that, for the CP type, the patterns of all models show a small region of positive anomalies in the eastern Pacific compared with the observations. Further improvement of the dynamical models themselves is still required.

References

  1. AchutaRao K, Sperber KR (2006) ENSO simulation in coupled ocean-atmosphere models: are the current models better? Clim Dyn 27(1):1–15. https://doi.org/10.1007/s00382-006-0119-7

  2. Ashok K, Behera SK, Rao SA, Weng H, Yamagata T (2007) El Niño Modoki and its possible teleconnection. J Geophys Res 112:C11007. https://doi.org/10.1029/2006JC003798

  3. Barnston AG, Chelliah M, Goldenberg SB (1997) Documentation of a highly ENSO-related SST region in the equatorial Pacific. Atmos-Ocean 35(3):367–383. https://doi.org/10.1080/07055900.1997.9649597

  4. Barnston AG, Mason SJ, Goddard L, DeWitt DG, Zebiak SE (2003) Multimodel ensembling in seasonal climate forecasting at IRI. Bull Am Meteor Soc 84(12):1783–1796. https://doi.org/10.1175/BAMS-84-12-1783

  5. Barnston AG, Tippett MK, L'Heureux ML, Li S, DeWitt DG (2012) Skill of real-time seasonal ENSO model predictions during 2002–2011: is our capability increasing? Bull Am Meteor Soc 93(5):631–651. https://doi.org/10.1175/BAMS-D-11-00111.1

  6. Barnston AG, Tippett MK, van den Dool HM, Unger DA (2015) Toward an improved multimodel ENSO prediction. J Appl Meteorol Climatol 54:1579–1595. https://doi.org/10.1175/JAMC-D-14-0188.1

  7. Becker E, van den Dool H, Zhang Q (2014) Predictability and forecast skill in NMME. J Clim 27:5891–5906. https://doi.org/10.1175/JCLI-D-13-00597.1

  8. Brönnimann S (2007) The impact of El Niño-Southern Oscillation on European climate. Rev Geophys 45(3):RG3003. https://doi.org/10.1029/2006RG000199

  9. Cane MA, Zebiak SE, Dolan SC (1986) Experimental forecasts of El Niño. Nature 321:827–832. https://doi.org/10.1038/321827a0

  10. Chen D, Zebiak SE, Busalacchi AJ, Cane MA (1995) An improved procedure for El Niño forecasting: implications for predictability. Science 269:1699–1702. https://doi.org/10.1126/science.269.5231.1699

  11. Chen D, Cane MA, Kaplan A, Zebiak SE, Huang DJ (2004) Predictability of El Niño over the past 148 years. Nature 428:733–736. https://doi.org/10.1038/nature02439

  12. Cheng Y-J, Tang Y-M, Zhou X-B, Jackson P, Chen D (2010) Further analysis of singular vector and ENSO predictability in the Lamont model part I: singular vector and the control factors. Clim Dyn 35(5):807–826. https://doi.org/10.1007/s00382-009-0595-7

  13. Danabasoglu G, Bates SC, Briegleb BP, Jayne SR, Jochum M, Large WG et al (2012) The CCSM4 ocean component. J Clim 25(5):1361–1389. https://doi.org/10.1175/JCLI-D-11-00091.1

  14. DelSole T, Nattala J, Tippett MK (2014) Skill improvement from increased ensemble size and model diversity. Geophys Res Lett 41(20):7331–7342. https://doi.org/10.1002/2014GL060133

  15. Delworth TL, Broccoli AJ, Rosati A, Stouffer RJ, Balaji V, Beesley JA et al (2006) GFDL’s CM2 global coupled climate models. Part I: formulation and simulation characteristics. J Clim 19:643–674. https://doi.org/10.1175/JCLI3629.1

  16. Duan W-S, Hu J-Y (2016) The initial errors that induce a significant “spring predictability barrier” for El Niño events and their implications for target observation: results from an earth system model. Clim Dyn 46(11):3599–3615. https://doi.org/10.1007/s00382-015-2789-5

  17. Duan W-S, Zhao P, Hu J-Y, Xu H (2016) The role of nonlinear forcing singular vector tendency error in causing the “spring predictability barrier” for ENSO. J Meteorol Res 30(6):853–866. https://doi.org/10.1007/s13351-016-6011-4

  18. Feddersen H, Navarra A, Ward MN (1999) Reduction of model systematic error by statistical correction for dynamical seasonal prediction. J Clim 12:1974–1989. https://doi.org/10.1175/1520-0442(1999)012%3c1974:ROMSEB%3e2.0.CO;2

  19. Graham NE, Barnett TP, Wilde R, Ponater M, Schubert S (1994) On the roles of tropical and midlatitude SSTs in forcing interannual to interdecadal variability in the winter Northern Hemisphere circulation. J Clim 7:1416–1441. https://doi.org/10.1175/1520-0442(1994)007%3c1416:OTROTA%3e2.0.CO;2

  20. Hagedorn R, Doblas-Reyes FJ, Palmer TN (2005) The rationale behind the success of multi-model ensembles in seasonal forecasting-I. Basic concept. Tellus A 57(3):219–233. https://doi.org/10.1111/j.1600-0870.2005.00103.x

  21. Ham Y-G, Kug J-S (2012) How well do current climate models simulate two types of El Niño? Clim Dyn 39(1–2):383–398. https://doi.org/10.1007/s00382-011-1157-3

  22. Ham YG, Kug J-S, Kang I-S (2009) Optimal initial perturbations for El Niño ensemble prediction with ensemble Kalman filter. Clim Dyn 33(7):959–973. https://doi.org/10.1007/s00382-009-0582-z

  23. Hegyi BM, Deng Y (2011) A dynamical fingerprint of tropical Pacific sea surface temperatures in the decadal-scale variability of the cool-season Arctic precipitation. J Geophys Res. https://doi.org/10.1029/2011JD016001

  24. Hegyi BM, Deng Y, Black RX, Zhou R (2014) Initial transient response of the winter polar stratospheric vortex to idealized equatorial pacific sea surface temperature anomalies in the NCAR WACCM. J Clim 27:2699–2713. https://doi.org/10.1175/JCLI-D-13-00289.1

  25. Hendon HH, Lim E, Wang G, Alves O, Hudson D (2009) Prospects for predicting two flavors of El Niño. Geophys Res Lett 36(19):L19713. https://doi.org/10.1029/2009GL040100

  26. Imada Y, Tatebe H, Ishii M, Chikamoto Y, Mori M, Arai M et al (2015) Predictability of two types of El Niño assessed using an extended seasonal prediction system by MIROC. Mon Weather Rev 143:4597–4617. https://doi.org/10.1175/MWR-D-15-0007.1

  27. Izumo T, Vialard J, Lengaigne M, Montegut CDB, Behera SK, Luo J-J et al (2010) Influence of the state of the Indian Ocean dipole on the following year's El Niño. Nat Geosci 3(4):168–172. https://doi.org/10.1038/ngeo760

  28. Jan van Oldenborgh G, Balmaseda MA, Ferranti L, Stockdale TN, Anderson DL (2005) Did the ECMWF seasonal forecast model outperform statistical ENSO forecast models over the last 15 years? J Clim 18(16):3240–3249. https://doi.org/10.1175/JCLI3420.1

  29. Jeong H-I, Lee DY, Ashok K, Ahn J-B, Lee J-Y, Luo J-J et al (2012) Assessment of the APCC coupled MME suite in predicting the distinctive climate impacts of two flavors of ENSO during boreal winter. Clim Dyn 39(1–2):475–493. https://doi.org/10.1007/s00382-012-1359-3

  30. Jin EK, Kinter JL, Wang B, Park C-K, Kang I-S, Kirtman BP et al (2008) Current status of ENSO prediction skill in coupled ocean–atmosphere models. Clim Dyn 31(6):647–664. https://doi.org/10.1007/s00382-008-0397-3

  31. Kang I-S, Kug J-S (2000) An El Niño prediction system using an intermediate ocean and a statistical atmosphere. Geophys Res Lett 27:1167–1170. https://doi.org/10.1029/1999GL011023

  32. Kang I-S, Lee J-Y, Park C-K (2004) Potential predictability of summer mean precipitation in a dynamical seasonal prediction system with systematic error correction. J Clim 17:834–844. https://doi.org/10.1175/1520-0442(2004)017%3c0834:PPOSMP%3e2.0.CO;2

  33. Kao HY, Yu JY (2009) Contrasting Eastern-Pacific and Central-Pacific types of ENSO. J Clim 22:615–632. https://doi.org/10.1175/2008JCLI2309.1

  34. Kharin V, Zwiers F (2002) Climate predictions with multimodel ensembles. J Clim 15:793–799. https://doi.org/10.1175/1520-0442(2002)015%3c0793:CPWME%3e2.0.CO;2

  35. Kim HM, Webster PJ, Curry JA (2009) Impact of shifting patterns of Pacific Ocean warming on north Atlantic tropical cyclones. Science 325(5936):77–80. https://doi.org/10.1126/science.1174062

  36. Kirtman BP, Min D, Infanti JM, Kinter JL, Paolino DA, Zhang Q et al (2014) The North American multi-model ensemble (NMME): phase-1, seasonal-to-interannual prediction; phase-2, toward developing intraseasonal prediction. Bull Am Meteor Soc 95:585–601. https://doi.org/10.1175/BAMS-D-12-00050.1

  38. Krishnamurti TN, Kishtawal CM, LaRow TE, Bachiochi DR, Zhang Z, Williford CE et al (1999) Improved weather and seasonal climate forecasts from multimodel superensemble​. Science 285(5433):1548–1550. https://doi.org/10.1126/science.285.5433.1548

  39. Kug J-S, Kang I-S, Lee J-Y, Jhun J-G (2004) A statistical approach to Indian Ocean sea surface temperature prediction using a dynamical ENSO prediction. Geophys Res Lett 31(9):09212. https://doi.org/10.1029/2003GL019209

  40. Kug J-S, Kang I-S, Choi D-H (2007a) Seasonal climate predictability with tier-one and tier-two prediction systems. Clim Dyn 31(4):403–426. https://doi.org/10.1007/s00382-007-0264-7

  41. Kug J-S, Lee J-Y, Kang J (2007b) Global sea surface temperature prediction using a multi-model ensemble. Mon Weather Rev 135:3239–3247. https://doi.org/10.1175/MWR3458.1

  42. Kug J-S, Lee J-Y, Kang I-S (2008a) Systematic error correction of dynamical seasonal prediction of sea surface temperature using a stepwise pattern project method. Mon Weather Rev 136(9):3501–3512. https://doi.org/10.1175/2008mwr2272.1

  43. Kug J-S, Lee J-Y, Kang I-S, Wang B, Park C-K (2008b) Optimal multi-model ensemble method in seasonal climate prediction. Asia Pac J Atmos Sci 44(3):259–267

  44. Kug J-S, Jin F-F, An S-I (2009) Two types of El Niño events: cold tongue El Niño and warm pool El Niño. J Clim 22:1499–1515. https://doi.org/10.1175/2008JCLI2624.1

  45. Kumar A, Hu Z-Z, Jha B, Peng P (2017) Estimating ENSO predictability based on multi-model hindcasts. Clim Dyn 48(1–2):39–51. https://doi.org/10.1007/s00382-016-3060-4

  46. Larkin NK, Harrison DE (2005a) On the definition of El Niño and associated seasonal average US weather anomalies. Geophys Res Lett 32(13):13705. https://doi.org/10.1029/2005GL022738

  47. Larkin NK, Harrison DE (2005b) Global seasonal temperature and precipitation anomalies during El Niño autumn and winter. Geophys Res Lett 32(16):L16705. https://doi.org/10.1029/2005GL022860

  48. Latif M, Anderson D, Barnett T, Cane M, Kleeman R, Leetmaa A et al (1998) A review of the predictability and prediction of ENSO. J Geophys Res 103(C7):14375–14393. https://doi.org/10.1029/97JC03413

  49. Lee RWK, Tam CY, Sohn SJ, Ahn J-B (2017) Predictability of two types of El Niño and their climate impacts in boreal spring to summer in coupled models. Clim Dyn 51(11–12):4555–4571. https://doi.org/10.1007/s00382-017-4039-5

  50. Liu Y, Ren H-L (2017) Improving ENSO prediction in CFSv2 with an analogue-based correction method. Int J Climatol 37(15):5035–5046. https://doi.org/10.1002/joc.5142

  51. Luo J-J, Masson S, Behera S, Shingu S, Yamagata T (2005) Seasonal climate predictability in a coupled OAGCM using a different approach for ensemble forecasts. J Clim 18:4474–4497. https://doi.org/10.1175/JCLI3526.1

  52. Luo J-J, Masson S, Behera S, Yamagata T (2008) Extended ENSO predictions using a fully coupled ocean–atmosphere model. J Clim 21:84–93. https://doi.org/10.1175/2007JCLI1412.1

  53. McPhaden MJ (2003) Tropical Pacific Ocean heat content variations and ENSO persistence barriers. Geophys Res Lett 30(19):1480. https://doi.org/10.1029/2003GL016872

  54. Min Y-M, Kryjov VN, Oh SM (2014) Assessment of APCC multimodel ensemble prediction in seasonal climate forecasting: retrospective (1983–2003) and real-time forecasts (2008–2013). J Geophys Res 119(21):12132–12150. https://doi.org/10.1002/2014JD022230

  55. Palmer TN, Alessandri A, Andersen U, Cantelaube P, Davey M, Délécluse P et al (2004) Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER). Bull Am Meteor Soc 85(6):853–872. https://doi.org/10.1175/BAMS-85-6-853

  56. Palmer TN, Branković Č, Richardson DS (2000) A probability and decision-model analysis of PROVOST seasonal multi-model ensemble integrations. Quart J R Met Soc 126(567):2013–2033. https://doi.org/10.1002/qj.49712656703

  57. Rasmusson EM, Carpenter TH (1982) Variations in tropical sea surface temperature and surface wind fields associated with the Southern Oscillation/El Niño. Mon Weather Rev 110:354–384. https://doi.org/10.1175/1520-0493(1982)110%3c0354:VITSST%3e2.0.CO;2

  58. Ren H-L, Jin F-F (2011) Niño indices for two types of ENSO. Geophys Res Lett 38(4):L04704. https://doi.org/10.1029/2010GL046031

  59. Ren H-L, Jin F-F (2013) Recharge oscillator mechanisms in two types of ENSO. J Clim 26:6506–6523. https://doi.org/10.1175/JCLI-D-12-00601.1

  60. Ren H-L, Liu Y, Jin F-F, Yan Y-P, Liu X-W (2014) Application of the analogue-based correction of errors method in ENSO prediction. Atmos Oceanic Sci Lett 7(2):157–161. https://doi.org/10.3878/j.issn.1674-2834.13.0080

  61. Ren H-L, Jin F-F, Tian B, Scaife AA (2016) Distinct persistence barriers in two types of ENSO. Geophys Res Lett 43(20):10973–10979. https://doi.org/10.1002/2016GL071015

  62. Ren H-L, Scaife AA, Dunstone N, Tian B, Liu Y, Ineson S et al (2019a) Seasonal predictability of winter ENSO types in operational dynamical model predictions. Clim Dyn 52(7–8):3869–3890. https://doi.org/10.1007/s00382-018-4366-1

  63. Ren H-L, Zuo J, Deng Y (2019b) Statistical predictability of Niño indices for two types of ENSO. Clim Dyn 52(9–10):5361–5382. https://doi.org/10.1007/s00382-018-4453-3

  64. Saha S, Nadiga S, Thiaw C, Wang J, Wang W, Zhang Q et al (2006) The NCEP climate forecast system. J Clim 19:3483–3517. https://doi.org/10.1175/JCLI3812.1

  65. Saha S, Moorthi S, Wu X, Wang J, Nadiga S, Tripp P et al (2014) The NCEP climate forecast system version 2. J Clim 27:2185–2208. https://doi.org/10.1175/JCLI-D-12-00823.1

  66. Sailor DJ, Li X (1999) A semiempirical downscaling approach for predicting regional temperature impacts associated with climatic change. J Clim 12:103–114. https://doi.org/10.1175/1520-0442(1999)012%3c0103:ASDAFP%3e2.0.CO;2

  67. Smith TM, Reynolds RW (2004) Improved extended reconstruction of SST 1854–1997. J Clim 17:2466–2477. https://doi.org/10.1175/1520-0442(2004)017%3c2466:IEROS%3e2.0.CO;2

  68. Smith TM, Reynolds RW, Peterson TC, Lawrimore J (2008) Improvements to NOAA's historical merged land–ocean surface temperature analysis (1880–2006). J Clim 21:2283–2296. https://doi.org/10.1175/2007JCLI2100.1

  69. Tian B, Ren HL, Jin FF, Stuecker MF (2019) Diagnosing the representation and causes of the ENSO persistence barrier in CMIP5 simulations. Clim Dyn 53(3–4):2147–2160. https://doi.org/10.1007/s00382-019-04810-4

  70. Tippett MK, Barnston AG (2008) Skill of multimodel ENSO probability forecasts. Mon Weather Rev 136:3933. https://doi.org/10.1175/2008MWR2431.1

  71. Vecchi GA, Wittenberg AT (2010) El Niño and our future climate: where do we stand? Wiley Interdiscip Rev: Clim Chang 1(2):260–270. https://doi.org/10.1002/wcc.33

  72. Vecchi GA, Delworth T, Gudgel R, Kapnick S, Rosati A, Wittenberg AT et al (2014) On the seasonal forecasting of regional tropical cyclone activity. J Clim 27(21):7994–8016. https://doi.org/10.1175/JCLI-D-14-00158.1

  73. Von Storch H, Zorita E, Cubasch U (1993) Downscaling of global climate change estimates to regional scales: an application to Iberian rainfall in wintertime. J Clim 6:1161–1171. https://doi.org/10.1175/1520-0442(1993)006%3c1161:DOGCCE%3e2.0.CO;2

  74. Wang R, Ren H-L (2017) The linkage of two ENSO types/modes with the interdecadal changes of ENSO around the year 2000. Atmos Oceanic Sci Lett 10(2):168–174. https://doi.org/10.1080/16742834.2016.1258952

  75. Wang X, Wang C (2014) Different impacts of various El Niño events on the Indian Ocean Dipole. Clim Dyn 42(3–4):991–1005. https://doi.org/10.1007/s00382-013-1711-2

  76. Wang B, Lee J-Y, Kang I-S, Shukla J, Park C-K, Kumar A et al (2009) Advance and prospectus of seasonal prediction: assessment of the APCC/CliPAS 14-model ensemble retrospective seasonal prediction (1984–2004). Clim Dyn 33(1):93–117. https://doi.org/10.1007/s00382-008-0460-0

  77. Ward MN, Navarra A (1997) Pattern analysis of SST-forced variability in ensemble GCM simulations: examples over Europe and the tropical Pacific. J Clim 10:2210–2220. https://doi.org/10.1175/1520-0442(1997)010%3c2210:PAOSFV%3e2.0.CO;2

  78. Webster PJ, Yang S (1992) Monsoon and ENSO: selectively interactive systems. Quart J R Met Soc 118(507):877–926. https://doi.org/10.1002/qj.49711850705

  79. Weigel AP, Liniger MA, Appenzeller C (2008) Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts? Quart J R Met Soc 134(630):241–260. https://doi.org/10.1002/qj.210

  80. Weng HY, Ashok K, Behera SK, Rao SA, Yamagata T (2007) Impacts of recent El Niño Modoki on dry/wet conditions in the Pacific Rim during boreal summer. Clim Dyn 29(2–3):113–129. https://doi.org/10.1007/s00382-007-0234-0

  81. Xiang B-Q, Wang B, Li T (2013) A new paradigm for the predominance of standing central Pacific warming after the late 1990s. Clim Dyn 41(2):327–340. https://doi.org/10.1007/s00382-012-1427-8

  82. Yang S, Jiang X (2014) Prediction of eastern and central Pacific ENSO events and their impacts on East Asian climate by the NCEP Climate Forecast System. J Clim 27:4451–4472. https://doi.org/10.1175/JCLI-D-13-00471.1

  83. Yeh S-W, Kug J-S, Dewitte B, Kwon MH, Kirtman BP, Jin F-F (2009) El Niño in a changing climate. Nature 461:511–514. https://doi.org/10.1038/nature08316

  84. Yeh S-W, Kug J-S, An S-I (2014) Recent progress on two types of El Niño: observations, dynamics, and future changes. Asia Pac J Atmos Sci 50(2):69–81. https://doi.org/10.1007/s13143-014-0028-3

  85. Yu J-Y, Kim ST (2010) Identification of Central-Pacific and Eastern-Pacific types of ENSO in CMIP3 models. Geophys Res Lett 37(15):L15705. https://doi.org/10.1029/2010GL044082

  86. Zebiak SE, Cane MA (1987) A model El Niño-Southern Oscillation. Mon Weather Rev 115:2262–2278. https://doi.org/10.1175/1520-0493(1987)115%3c2262:AMENO%3e2.0.CO;2

  87. Zhai P-M, Yu R, Guo Y-J, Li Q-X, Ren X-J, Wang Y-Q et al (2016) The strong El Niño of 2015/16 and its dominant impacts on global and China’s climate. J Meteorol Res 30(3):283–297. https://doi.org/10.1007/s13351-016-6101-3

  88. Zhang R, Sumi A, Kimoto M (1996) Impact of El Niño on the East Asia monsoon: a diagnostic study of the '86/87 and '91/92 events. J Meteor Soc Jpn 74(1):49–62. https://doi.org/10.2151/jmsj1965.74.1_49

  89. Zhang R-H, Zebiak SE, Kleeman R, Keenlyside N (2003) A new intermediate coupled model for El Niño simulation and prediction. Geophys Res Lett. https://doi.org/10.1029/2003GL018010

  90. Zhang W, Jin F-F, Li J, Ren H-L (2011) Contrasting impacts of two-type El Niño over the western north Pacific during boreal autumn. J Meteor Soc Japan 89(5):563–569. https://doi.org/10.2151/jmsj.2011-510

  91. Zhang P, Jin F-F, Ren H-L, Li J, Zhao J-X (2012) Differences in teleconnection over the North Pacific and rainfall shift over the USA associated with two types of El Niño during boreal autumn. J Meteor Soc Jpn 90(4):535–552. https://doi.org/10.2151/jmsj.2012-407

  92. Zhang W, Wang L, Xiang B, Qi L, He J (2015) Impacts of two types of La Niña on the NAO during boreal winter. Clim Dyn 44(5–6):1351–1366. https://doi.org/10.1007/s00382-014-2155-z

  93. Zheng F, Zhu J, Zhang R-H, Zhou G-Q (2006) Ensemble hindcasts of SST anomalies in the tropical Pacific using an intermediate coupled model. Geophys Res Lett 33(19):L19604. https://doi.org/10.1029/2006GL026994

  94. Zhu J, Huang B, Marx L, Kinter JL III, Balmaseda MA, Zhang R-H, Hu Z-Z (2012) Ensemble ENSO hindcasts initialized from multiple ocean analyses. Geophys Res Lett 39(9):L09602. https://doi.org/10.1029/2012GL051503

  95. Zhu J, Huang B, Cash B, Kinter JL, Manganello J, Barimalala R et al (2015) ENSO prediction in project minerva: sensitivity to atmospheric horizontal resolution and ensemble size. J Clim 28:2080–2095. https://doi.org/10.1175/JCLI-D-14-00302.1

  96. Zhu J, Kumar A, Huang B, Balmaseda MA, Hu Z-Z, Marx L, Kinter JL III (2016) The role of off-equatorial surface temperature anomalies in the 2014 El Niño prediction. Sci Rep 6:19677. https://doi.org/10.1038/srep19677

  97. Zhu J, Kumar A, Wang W, Hu Z-Z, Huang B, Balmaseda MA (2017) Importance of convective parameterization in ENSO predictions. Geophys Res Lett 44(12):6334–6342. https://doi.org/10.1002/2017GL073669

  98. Zorita E, Von Storch H (1999) The analog method as a simple statistical downscaling technique: comparison with more complicated methods. J Clim 12:2474–2489. https://doi.org/10.1175/1520-0442(1999)012%3c2474:TAMAAS%3e2.0.CO;2

  99. Zorita E, Hughes JP, Lettenmaier DP, Von Storch H (1995) Stochastic downscaling of regional circulation patterns for climate model diagnosis and estimation of local precipitation. J Clim 8:1023–1043. https://doi.org/10.1175/1520-0442(1995)008%3c1023:SCORCP%3e2.0.CO;2

Acknowledgements

The ERSST_v3b data used in this work are provided by the NOAA Earth System Research Laboratory's Physical Sciences Division from their website at https://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.v3.html. The NMME project and its dataset are available on NOAA's Climate Test Bed website (https://www.nws.noaa.gov/ost/CTB/nmme.htm). This work was jointly supported by the National Key Research and Development Program on Monitoring, Early Warning and Prevention of Major Natural Disaster (2017YFC1502302, 2018YFC1506005), the China National Science Foundation projects (41975094, 41606019), the State Oceanic Administration International Cooperation Program (GASI-IPOVAI-03), and the China Meteorological Administration Special Public Welfare Research Fund (GYHY201506013). The authors express their appreciation to Prof. Jong-Seong Kug for his kind technical support and to the reviewers for their valuable comments and suggestions.

Author information

Correspondence to Hong-Li Ren.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wang, L., Ren, H., Zhu, J. et al. Improving prediction of two ENSO types using a multi-model ensemble based on stepwise pattern projection model. Clim Dyn (2020). https://doi.org/10.1007/s00382-020-05160-2

Keywords

  • ENSO prediction
  • Multi-model ensemble
  • Two types of ENSO
  • Error correction