
Abstract

The AMMI/GGE model can be used to describe a two-way table of genotype–environment means. When the genotype–environment means are independent and homoscedastic, ordinary least squares (OLS) gives optimal estimates of the model. In plant breeding, the assumption of independence and homoscedasticity of the genotype–environment means is frequently violated, however, such that generalized least squares (GLS) estimation is more appropriate. This paper introduces three different GLS algorithms that use a weighting matrix to take the correlation between the genotype–environment means as well as heteroscedasticity into account. To investigate the effectiveness of the GLS estimation, the proposed algorithms were implemented using three different weighting matrices, including (i) an identity matrix (OLS estimation), (ii) an approximation of the complete inverse covariance matrix of the genotype–environment means, and (iii) the complete inverse covariance matrix of the genotype–environment means. Using simulated data modeled on real experiments, the different weighting methods were compared in terms of the mean-squared error of the genotype–environment means, interaction effects, and singular vectors. The results show that weighted estimation generally outperformed unweighted estimation in terms of the mean-squared error. Furthermore, the effectiveness of the weighted estimation increased when the heterogeneity of the variances of the genotype–environment means increased.



Corresponding author

Correspondence to H. P. Piepho.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (docx 1749 KB)

Supplementary material 2 (zip 10 KB)

Appendix

The criss-cross algorithm describes the \(I\times J\) matrix \({{\varvec{H}}}=({{\varvec{h}}}_1\ldots {{\varvec{h}}}_J)\) by the model \({{\varvec{H}}}={{{\varvec{GF}}}}'+{{\varvec{E}}}\), where \({{\varvec{E}}}\) is a random error. The matrices \({{\varvec{G}}}\) and \({{\varvec{F}}}\) both consist of n columns, so that the product \({{{\varvec{GF}}}}'\) represents a rank-n approximation of \({{\varvec{H}}}\). The algorithm exploits the fact that \({{{\varvec{GF}}}}'\) is linear in \({{\varvec{F}}}'\) for a given matrix \({{\varvec{G}}}\) and vice versa. Using \({{\varvec{vec}}}({{\varvec{H}}})=({{\varvec{h}}}_1^{\prime } \ldots {{\varvec{h}}}_J^{\prime })^{{\prime }}={{\varvec{h}}}\) (Searle et al. 1992) and \({{\varvec{vec}}}({{{\varvec{GF}}}}')=({{\varvec{I}}}_J\otimes {{\varvec{G}}}){{\varvec{vec}}}({{\varvec{F}}'}) =({{\varvec{F}}}\otimes {{\varvec{I}}}_I){{\varvec{vec}}}({{\varvec{G}}})\), the model can be written as

$$\begin{aligned} {{\varvec{h}}}={{\varvec{X}}}_{{\varvec{f}}} {{\varvec{f}}} +{\varvec{\varepsilon }} = {{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}} +{\varvec{\varepsilon }} \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{f}}} = ({{\varvec{I}}}_J \otimes {{\varvec{G}}})\), \({{\varvec{f}}}={{\varvec{vec}}}({{{\varvec{F}}}}')\), \({{\varvec{X}}}_{{\varvec{g}}} =({{\varvec{F}}}\otimes {{\varvec{I}}}_I)\), \({{\varvec{g}}}={{\varvec{vec}}}({{\varvec{G}}})\), and \({\varvec{\varepsilon }} ={{\varvec{vec}}}({{\varvec{E}}})\). These two models for \({{\varvec{h}}}\) can be used alternatingly to estimate \({{\varvec{G}}}\) and \({{\varvec{F}}}\) by minimizing \({\varvec{\varepsilon }}^{\prime }{{\varvec{M}}}{\varvec{\varepsilon }}\), where \({{\varvec{M}}}\) is a weighting matrix. For this purpose, \({{\varvec{G}}}\) may be initialized by a nonzero matrix to compute an estimate of \({{\varvec{F}}}\), which in turn is used to update \({{\varvec{G}}}\). In this way, the GLS estimates in the zth iteration are

$$\begin{aligned} {{\varvec{f}}}^{{\textit{z}}}= & {} ({{\varvec{X}}}_{{\varvec{f}}}^{{\prime }} {{\varvec{MX}}}_{{\varvec{f}}})^{-} {{\varvec{X}}}_{{\varvec{f}}}^{{\prime }} {{\varvec{Mh}}}\\ {{\varvec{g}}}^{{\textit{z}}}= & {} ({{\varvec{X}}}_{{\varvec{g}}}^{{\prime }} {{\varvec{MX}}}_{{\varvec{g}}})^{-} {{\varvec{X}}}_{{\varvec{g}}}^{{\prime }} {{\varvec{Mh}}} \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{f}}} =({{\varvec{I}}}_J \otimes {{\varvec{G}}}^{{\textit{z}}-{1}})\) and \({{\varvec{X}}}_{{\varvec{g}}} =({{\varvec{F}}}^{{\textit{z}}}\otimes {{\varvec{I}}}_I)\). One could equivalently start the algorithm by initializing \({{\varvec{F}}}\). To fit the AMMI/GGE model by the criss-cross algorithm, all effects of the model are estimated iteratively. In particular, the interaction of the AMMI/GGE model is reparameterized, and the effects of the reparameterized model are estimated iteratively. In matrix form, the reparameterized model for the observed genotype–environment means using n multiplicative terms is

$$\begin{aligned} \hat{{\varvec{\varTheta }}} =\mu \mathbf{1}_I \mathbf{1}_J^{\prime } +{{\varvec{g}}}{} \mathbf{1}_J^{\prime } +\mathbf{1}_I {{\varvec{e}}}'+{{\varvec{AB}}}'+{{\varvec{E}}}, \end{aligned}$$

where \({{\varvec{A}}}\) and \({{\varvec{B}}}\) consist of n columns and the product \({{\varvec{AB}}}'\) models the interaction effects \({{\varvec{W}}}(N)\) in (2). Using the \(vec\left( \cdot \right) \) operator, the equation can be written as

$$\begin{aligned} \hat{{\varvec{\theta }}} =\mu {{\varvec{X}}}_\mu +{{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}} +{{\varvec{X}}}_{{\varvec{e}}} {{\varvec{e}}} +{{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}+{\varvec{\varepsilon }}, \end{aligned}$$
(4)

where \(\hat{{\varvec{\theta }}} =vec(\hat{{\varvec{\varTheta }}}(n))\), \({{\varvec{X}}}_\mu =\mathbf{1}_{IJ}, {{\varvec{X}}}_{{\varvec{g}}} =(\mathbf{1}_J \otimes {{\varvec{I}}}_I)\), \({{\varvec{X}}}_{{\varvec{e}}} =({{\varvec{I}}}_J \otimes \mathbf{1}_I)\), \({{\varvec{X}}}_{{\varvec{b}}} = ({{\varvec{I}}}_J \otimes {{\varvec{A}}})\), \({{\varvec{b}}}=vec({{\varvec{B}}}')\), and \({\varvec{\varepsilon }} =vec({{\varvec{E}}})\). The iteration can be started by initializing \({{\varvec{A}}}\) in (4) to estimate \({{\varvec{b}}}\) by minimizing \({\varvec{\varepsilon }}^{\prime }{{\varvec{M}}}{\varvec{\varepsilon }}\), where \({{\varvec{M}}}\) is a weighting matrix. In the zth iteration, the system of equations resulting from the derivatives of \({\varvec{\varepsilon }}^{\prime }{{\varvec{M}}}{\varvec{\varepsilon }}\) is

$$\begin{aligned} \left( \begin{array}{llll} {{\varvec{X}}}_\mu ^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ \end{array}\right) \left( \begin{array}{l} \mu ^z\\ {{\varvec{e}}}^{\textit{z}}\\ {{\varvec{g}}}^{\textit{z}}\\ {{\varvec{b}}}^{\textit{z}}\\ \end{array}\right) =\left( \begin{array}{l} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\\ \end{array}\right) \hat{\varvec{\theta }} \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{b}}} =({{\varvec{I}}}_J \otimes {{\varvec{A}}}^{z-1})\). To estimate \({{\varvec{a}}}=vec({{\varvec{A}}})\), the term \({{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}\) in (4) is replaced by \({{\varvec{X}}}_{{\varvec{a}}} {{\varvec{a}}}=({{\varvec{B}}}^{{\textit{z}}}\otimes {{\varvec{I}}}_I){{\varvec{a}}}\). In this case, the system of equations is

$$\begin{aligned} \left( \begin{array}{llll} {{\varvec{X}}}_\mu ^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{a}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{a}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{a}}}\\ {{\varvec{X}}}_{{\varvec{a}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{a}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{a}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{a}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{a}}}\\ \end{array}\right) \left( \begin{array}{l} \mu ^z\\ {{\varvec{e}}}^{\textit{z}}\\ {{\varvec{g}}}^{\textit{z}}\\ {{\varvec{a}}}^{\textit{z}}\\ \end{array}\right) =\left( \begin{array}{l} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{a}}}^{\prime }{{\varvec{M}}}\\ \end{array}\right) \hat{\varvec{\theta }} . \end{aligned}$$

In this way, the model effects are estimated alternatingly until the change in the estimated genotype–environment means between two successive iterations falls below a chosen threshold (for details, see Figures 1, 2 and 3). The two equation systems can be solved in different ways. Here, we use three different algorithms, denoted Algorithms 1–3.
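To make the alternating scheme concrete, the generic criss-cross update for a rank-n approximation \({{\varvec{H}}}\approx {{\varvec{GF}}}'\) can be sketched in Python/NumPy as follows. This is our own illustrative sketch, not the authors' code: `M` is the weighting matrix for \(vec({{\varvec{H}}})\) (the identity matrix gives OLS), the start value of `G` is random, and a fixed iteration count stands in for a convergence check.

```python
import numpy as np

def criss_cross(H, n, M, n_iter=200):
    """Rank-n approximation H ~ G F' by alternating (weighted) least
    squares.  H is I x J; M weights vec(H) (identity -> OLS).
    Sketch only: fixed iteration count, no convergence check."""
    I, J = H.shape
    h = H.reshape(-1, order="F")                 # vec(H), column-stacking
    rng = np.random.default_rng(0)
    G = rng.standard_normal((I, n))              # nonzero start value
    for _ in range(n_iter):
        # F-step: h = (I_J kron G) vec(F') + eps
        Xf = np.kron(np.eye(J), G)
        f = np.linalg.lstsq(Xf.T @ M @ Xf, Xf.T @ M @ h, rcond=None)[0]
        Ft = f.reshape(n, J, order="F")          # F'
        # G-step: h = (F kron I_I) vec(G) + eps
        Xg = np.kron(Ft.T, np.eye(I))
        g = np.linalg.lstsq(Xg.T @ M @ Xg, Xg.T @ M @ h, rcond=None)[0]
        G = g.reshape(I, n, order="F")
    return G, Ft.T                               # G is I x n, F is J x n
```

With `M` equal to the identity, the fitted product \({{\varvec{GF}}}'\) converges to the ordinary truncated-SVD approximation of \({{\varvec{H}}}\).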

Algorithm 1

In Algorithm 1, the system of equations resulting from the derivatives of \({\varvec{\varepsilon }}'{{\varvec{M}}}{\varvec{\varepsilon }}\) with respect to the model effects is inverted using a generalized inverse. For a given matrix \({{\varvec{A}}}\), the estimated effects of the AMMI model in the zth iteration are

$$\begin{aligned} \left( \begin{array}{l} \mu ^z\\ {{\varvec{e}}}^{\textit{z}}\\ {{\varvec{g}}}^{\textit{z}}\\ {{\varvec{b}}}^{\textit{z}}\\ \end{array}\right) =\left( \begin{array}{llll} {{\varvec{X}}}_\mu ^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ {{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{e}}} &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{g}}} &{} {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{{\varvec{b}}}\\ \end{array}\right) ^- \left( \begin{array}{l} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{e}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{g}}}^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_{{\varvec{b}}}^{\prime }{{\varvec{M}}}\\ \end{array}\right) \hat{\varvec{\theta }} \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{b}}} =({{\varvec{I}}}_J \otimes {{\varvec{A}}}^{z-1})\) and \((\cdot )^{-}\) denotes a generalized inverse. The estimates of \(\mu , {{\varvec{e}}}\), and \({{\varvec{b}}}\) are reparameterized by \(\tilde{\mu }=\mu +\frac{1}{J}\mathbf{1}_J^{\prime } {{\varvec{e}}}+\frac{1}{I}\mathbf{1}_I^{\prime } {{\varvec{g}}}\), \(\tilde{{{\varvec{e}}}} ={{\varvec{e}}}-\frac{1}{J}\mathbf{1}_J \mathbf{1}_J^{\prime } {{\varvec{e}}}\), and \(\tilde{{{\varvec{B}}}}={{\varvec{B}}}-\frac{1}{J}\mathbf{1}_J \mathbf{1}_J^{\prime } {{\varvec{B}}}\). For a given estimate of \(\tilde{{{\varvec{B}}}}\), the effects \({{\varvec{g}}}\) and \({{\varvec{a}}}\) are estimated by

$$\begin{aligned} \left( \begin{array}{l} \mu ^z\\ {{\varvec{e}}}^{{\varvec{z}}}\\ {{\varvec{g}}}^{{\varvec{z}}}\\ {{\varvec{a}}}^{{\varvec{z}}}\\ \end{array}\right) =\left( \begin{array}{llll} {{\varvec{X}}}_\mu ^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{e}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{g}} &{} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{a}}\\ {{\varvec{X}}}_e^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_e^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{e}} &{} {{\varvec{X}}}_e^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{g}} &{} {{\varvec{X}}}_e^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{a}}\\ {{\varvec{X}}}_g^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_g^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{e}} &{} {{\varvec{X}}}_g^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{g}} &{} {{\varvec{X}}}_g^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{a}}\\ {{\varvec{X}}}_a^{\prime } {{\varvec{M}}}\,{{\varvec{X}}}_\mu &{} {{\varvec{X}}}_a^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{e}} &{} {{\varvec{X}}}_a^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{g}} &{} {{\varvec{X}}}_a^{\prime }{{\varvec{M}}}\,{{\varvec{X}}}_{\varvec{a}}\\ \end{array}\right) ^- \left( \begin{array}{l} {{\varvec{X}}}_\mu ^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_e^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_g^{\prime }{{\varvec{M}}}\\ {{\varvec{X}}}_a^{\prime }{{\varvec{M}}}\\ \end{array}\right) \hat{\varvec{\theta }} \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{a}}} =(\tilde{{{\varvec{B}}}}^{{{\varvec{z}}}}\otimes {{\varvec{I}}}_I)\). The estimates of \({{\varvec{g}}}\) and \({{\varvec{a}}}\) are reparameterized by \(\tilde{{{\varvec{g}}}}={{\varvec{g}}} -\frac{1}{I}\mathbf{1}_I \mathbf{1}_I^{\prime } {{\varvec{g}}}\) and \(\tilde{{{\varvec{A}}}}={{\varvec{A}}}-\frac{1}{I}\mathbf{1}_I \mathbf{1}_I^{\prime } {{\varvec{A}}}\). The estimated genotype–environment means are then obtained by \(\hat{{\varvec{\theta }}} (n) =\tilde{\mu }{{\varvec{X}}}_\mu +{{\varvec{X}}}_{{\varvec{g}}} \tilde{{{\varvec{g}}}} +{{\varvec{X}}}_{{\varvec{e}}} \tilde{{{\varvec{e}}}}+{{\varvec{X}}}_{{\varvec{b}}} \tilde{{{\varvec{b}}}}\). In the case of the GGE model, the main genotype effects are not estimated and the estimates of \({{\varvec{B}}}\) are not reparameterized.
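One half-step of Algorithm 1 (solving the normal equations for fixed \({{\varvec{A}}}\) and then recentering) can be sketched in Python/NumPy. The function name and return layout are ours, and `np.linalg.pinv` plays the role of the generalized inverse; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def algorithm1_halfstep(theta_hat, M, A, I, J):
    """For fixed A, solve the weighted normal equations of model (4)
    with a generalized inverse (pinv) and apply the sum-to-zero
    reparameterization to mu, e, and B.  theta_hat is the vec of the
    observed genotype-environment means.  Sketch only."""
    n = A.shape[1]
    X = np.hstack([
        np.ones((I * J, 1)),                     # X_mu = 1_IJ
        np.kron(np.eye(J), np.ones((I, 1))),     # X_e  = I_J kron 1_I
        np.kron(np.ones((J, 1)), np.eye(I)),     # X_g  = 1_J kron I_I
        np.kron(np.eye(J), A),                   # X_b  = I_J kron A
    ])
    beta = np.linalg.pinv(X.T @ M @ X) @ X.T @ M @ theta_hat
    mu = beta[0]
    e = beta[1:1 + J]
    g = beta[1 + J:1 + J + I]
    b = beta[1 + J + I:]
    B = b.reshape(n, J, order="F").T             # b = vec(B')
    # sum-to-zero reparameterization (g is centered in the other half-step)
    mu_t = mu + e.mean() + g.mean()
    e_t = e - e.mean()
    B_t = B - B.mean(axis=0)
    fitted = X @ beta                            # fitted means, vec form
    return mu_t, e_t, g, B_t, fitted
```

When the observed means lie exactly in the column space of the stacked design matrix, the fitted means reproduce them exactly even though the normal-equation matrix is rank deficient.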

Algorithm 2

In Algorithm 2, the two systems of equations are solved using the Jacobi iterative method (Kotz et al. 2006), and the sum-to-zero constraints are implemented using the method of Lagrange multipliers. In the Jacobi iteration, the solution of an equation system of the form

$$\begin{aligned} \left( \begin{array}{ll} {{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1 &{} {{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_2 \\ {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_1 &{} {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2 \\ \end{array}\right) \left( \begin{array}{l} {{\varvec{b}}}_1\\ {{\varvec{b}}}_2\\ \end{array}\right) =\left( \begin{array}{l} {{\varvec{X}}}_1^{\prime }\\ {{\varvec{X}}}_2^{\prime }\\ \end{array}\right) {{\varvec{y}}}, \end{aligned}$$

where \({{\varvec{y}}}\) is a vector of observations and \({{\varvec{X}}}_1 \) and \({{\varvec{X}}}_2 \) are design matrices for the parameter vectors \({{\varvec{b}}}_1 \) and \({{\varvec{b}}}_2 \), is obtained by iteratively computing

$$\begin{aligned} \left( \begin{array}{l} {{\varvec{b}}}_1^z\\ {{\varvec{b}}}_2^z\\ \end{array}\right) =\left( \begin{array}{ll} ({{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1)^{-1} &{} \mathbf{0} \\ \mathbf{0} &{} ({{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2)^{-1}\\ \end{array}\right) \left( \left( \begin{array}{l} {{\varvec{X}}}_1^{\prime }\\ {{\varvec{X}}}_2^{\prime }\\ \end{array}\right) {{\varvec{y}}}-\left( \begin{array}{ll} \mathbf{0} &{}{{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_2 \\ {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_1 &{} \mathbf{0} \\ \end{array}\right) \left( \begin{array}{l} {{\varvec{b}}}_1^{z-1}\\ {{\varvec{b}}}_2^{z-1}\\ \end{array}\right) \right) . \end{aligned}$$
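This two-block Jacobi scheme can be illustrated for an unweighted system (a sketch in Python/NumPy with our own names; it assumes the two blocks are weakly enough coupled for the iteration to converge and omits any convergence check).

```python
import numpy as np

def jacobi_two_block(X1, X2, y, n_iter=200):
    """Jacobi iteration for the normal equations of y = X1 b1 + X2 b2:
    each block is updated from the other block's previous value."""
    A11 = np.linalg.inv(X1.T @ X1)
    A22 = np.linalg.inv(X2.T @ X2)
    b1 = np.zeros(X1.shape[1])
    b2 = np.zeros(X2.shape[1])
    for _ in range(n_iter):
        b1_new = A11 @ X1.T @ (y - X2 @ b2)   # uses b2 from iteration z-1
        b2_new = A22 @ X2.T @ (y - X1 @ b1)   # uses b1 from iteration z-1
        b1, b2 = b1_new, b2_new
    return b1, b2
```

For tall, weakly correlated design blocks the iterates converge to the ordinary least-squares solution of the combined system.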

When constraints on the parameters need to be imposed, Lagrange multipliers are included in the objective function to be minimized. In the case of the AMMI/GGE model, the objective function becomes \({\varvec{\varepsilon }}'{{\varvec{M}}}{\varvec{\varepsilon }} +{{\varvec{l}}}'({{\varvec{P}}}'{{\varvec{b}}}-{{\varvec{c}}})\), where \({{\varvec{l}}}\) is a vector of Lagrange multipliers, \({{\varvec{b}}}\) collects the model effects, and \({{\varvec{P}}}'\) is the matrix containing the coefficients of the desired linear constraints \({{\varvec{P}}}'{{\varvec{b}}}={{\varvec{c}}}\). When the Lagrange multipliers are included in the objective function, the system of equations takes the form

$$\begin{aligned} \left( \begin{array}{lll} {{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1 &{} {{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_2 &{} {{\varvec{P}}}_{1} \\ {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_1 &{} {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2 &{} {{\varvec{P}}}_{2} \\ {{\varvec{P}}}_{1}^{\prime } &{} {{\varvec{P}}}_{2}^{\prime } &{} \mathbf{0} \\ \end{array}\right) \left( \begin{array}{l} {{\varvec{b}}}_1\\ {{\varvec{b}}}_2\\ {{\varvec{l}}} \\ \end{array}\right) =\left( \begin{array}{l} {{\varvec{X}}}_1^{\prime } {{\varvec{y}}} \\ {{\varvec{X}}}_2^{\prime } {{\varvec{y}}} \\ {{\varvec{c}}} \\ \end{array}\right) , \end{aligned}$$

where \(({{\varvec{P}}}_1^{\prime } {{\varvec{P}}}_2^{\prime }) \left( \begin{array}{l} {{\varvec{b}}}_1 \\ {{\varvec{b}}}_2 \\ \end{array}\right) ={{\varvec{c}}}\) represents the desired constraints. Using the Jacobi iteration to solve that system, one may write

$$\begin{aligned} \left( {{\begin{array}{l} {\varvec{b}_1^z } \\ {\varvec{b}_2^z } \\ {{\varvec{l}}}^{z} \\ \end{array} }} \right) =\left( {{\begin{array}{lll} ( {\varvec{X}_1^{\prime } \varvec{X}_1})^{-1} &{} \mathbf{0} &{} \mathbf{0} \\ \mathbf{0} &{} ( {\varvec{X}_2^{\prime } \varvec{X}_2 } )^{-1} &{} \mathbf{0} \\ \mathbf{0} &{} \mathbf{0} &{} \mathbf{0} \\ \end{array} }} \right) \left( \left( {{\begin{array}{l} \varvec{X}_1^{\prime } \varvec{y} \\ \varvec{X}_2^{\prime } \varvec{y} \\ {{\varvec{c}}} \\ \end{array} }} \right) -\left( {{\begin{array}{lll} \mathbf{0} &{} \varvec{X}_2^{\prime } \varvec{X}_1 &{} \varvec{P}_1^{\prime } \\ \varvec{X}_1^{\prime } \varvec{X}_2 &{} \mathbf{0} &{} \varvec{P}_2^{\prime } \\ \varvec{P}_1 &{} \varvec{P}_2 &{} \mathbf{0} \\ \end{array} }}\right) \left( {{\begin{array}{l} \varvec{b}_1^{z-1} \\ \varvec{b}_2^{z-1} \\ {{\varvec{l}}}^{z-1} \\ \end{array} }} \right) \right) . \end{aligned}$$

The Lagrange multipliers cannot be updated from the equation above, because the corresponding diagonal block of the system is zero. Instead, \({{\varvec{l}}}^{z-1}\) can be found by solving the constraint \(({{\varvec{P}}}_1^{\prime } {{\varvec{P}}}_2^{\prime }) \left( \begin{array}{l} {{\varvec{b}}}_1^z \\ {{{\varvec{b}}}_2^z } \\ \end{array}\right) ={{\varvec{c}}}\) for \({{\varvec{l}}}^{z-1}\) and plugging the solution back into \(\left( \begin{array}{l} {{{\varvec{b}}}_1^z } \\ {{{\varvec{b}}}_2^z } \\ \end{array}\right) \). With \({{\varvec{c}}}=\mathbf{0}\), which represents sum-to-zero constraints, this yields

$$\begin{aligned} \left( {{\begin{array}{l} {{{\varvec{b}}}_1^z } \\ {{{\varvec{b}}}_2^z } \\ \end{array} }} \right) =({{\varvec{I}}}-{{\varvec{D}}}^{-1}{{\varvec{P}}} ({{\varvec{P}}}' {{\varvec{D}}}^{-1}{{\varvec{P}}})^{-1}{{\varvec{P}}}'){{\varvec{D}}}^{-1} \left( \left( \begin{array}{l} {{\varvec{X}}}_1^{\prime }\\ {{\varvec{X}}}_2^{\prime }\\ \end{array}\right) {{\varvec{y}}}-\left( \begin{array}{ll} \mathbf{0} &{}{{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_2 \\ {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_1 &{} \mathbf{0} \\ \end{array}\right) \left( \begin{array}{l} {{\varvec{b}}}_1^{z-1}\\ {{\varvec{b}}}_2^{z-1}\\ \end{array}\right) \right) , \end{aligned}$$

where \({{\varvec{D}}}^{-1}=\left( {{\begin{array}{ll} ({{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1 )^{-1} &{} \mathbf{0} \\ \mathbf{0} &{} ( {{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2 )^{-1} \\ \end{array} }} \right) \) and \({{\varvec{P}}}=\left( {{\begin{array}{l} {{{\varvec{P}}}_1} \\ {{{\varvec{P}}}_2} \\ \end{array} }} \right) \). When \({{\varvec{P}}}\) can be partitioned as \({{\varvec{P}}}=\left( {{\begin{array}{ll} {{{\varvec{Q}}}_1 } &{} \mathbf{0} \\ \mathbf{0} &{} {{{\varvec{Q}}}_2 } \\ \end{array} }} \right) \), which is the case for the constraints in AMMI/GGE models, the estimates in the zth iteration are

$$\begin{aligned} \left( {{\begin{array}{l} {{{\varvec{b}}}_1^z } \\ {{{\varvec{b}}}_2^z } \\ \end{array} }} \right) =\left( {{\begin{array}{l} ({{{\varvec{I}}}-{{\varvec{R}}}_1} )({{{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1})^{-1}{{\varvec{X}}}_1^{\prime }({{{\varvec{y}}}-{{\varvec{X}}}_2 {{\varvec{b}}}_2^{z-1}}) \\ ({{{\varvec{I}}}-{{\varvec{R}}}_2} )({{{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2})^{-1}{{\varvec{X}}}_2^{\prime }({{{\varvec{y}}}-{{\varvec{X}}}_1 {{\varvec{b}}}_1^{z-1}}) \\ \end{array} }} \right) , \end{aligned}$$

where \({{\varvec{R}}}_1 =({{{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1 } )^{-1} {{\varvec{Q}}}_1 ({{{\varvec{Q}}}_1^{\prime } ({{{\varvec{X}}}_1^{\prime } {{\varvec{X}}}_1 } )^{-1}{{\varvec{Q}}}_1 } )^{-1}{{\varvec{Q}}}_1^{\prime } \), and \({{\varvec{R}}}_2=({{{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2})^{-1} {{\varvec{Q}}}_2 ({{\varvec{Q}}}_2^{\prime }({{{\varvec{X}}}_2^{\prime } {{\varvec{X}}}_2})^{-1}{{\varvec{Q}}}_2)^{-1}{{\varvec{Q}}}_2^{\prime }\). These estimates can be used for weighted estimation of the AMMI/GGE model by the criss-cross algorithm. Starting the algorithm by initializing \({{\varvec{A}}}\), the estimates in the zth iteration are

$$\begin{aligned} \mu ^{z}= & {} ( {{{\varvec{X}}}_\mu ^{\prime } {{\varvec{MX}}}_\mu })^{-1} {{\varvec{X}}}_\mu ^{\prime } {{\varvec{M}}}( {\hat{{\varvec{\theta }}} -{{\varvec{X}}}_{{\varvec{e}}} {{\varvec{e}}}^{z-1}-{{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}}^{z-1} -{{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}^{z-1}} )\\ {{\varvec{e}}}^{z}= & {} ( {{{\varvec{I}}}-( {{{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{MX}}}_{{\varvec{e}}} })^{-1}{{\varvec{Q}}}_{{\varvec{e}}} ( {{{\varvec{Q}}}_{{\varvec{e}}}^{\prime } ( {{{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{MX}}}_{{\varvec{e}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{e}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{e}}}^{\prime } } )( {{{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{MX}}}_{{\varvec{e}}} } )^{-1}{{\varvec{X}}}_{{\varvec{e}}}^{\prime } {{\varvec{M}}}( {\hat{{\varvec{\theta }}} -{{\varvec{X}}}_\mu \mu ^{z-1}-{{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}}^{z-1}-{{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}^{z-1}} )\\ {{\varvec{g}}}^{z}= & {} ( {{{\varvec{I}}}-( {{{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{MX}}}_{{\varvec{g}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{g}}} ( {{{\varvec{Q}}}_{{\varvec{g}}}^{\prime } ( {{{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{MX}}}_{{\varvec{g}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{g}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{g}}}^{\prime } } )( {{{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{MX}}}_{{\varvec{g}}} } )^{-1}{{\varvec{X}}}_{{\varvec{g}}}^{\prime } {{\varvec{M}}} ( {\hat{{\varvec{\theta }}} -{{\varvec{X}}}_\mu \mu ^{z-1} -{{\varvec{X}}}_{{\varvec{e}}} {{\varvec{e}}}^{z-1}-{{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}^{z-1}} )\\ {{\varvec{b}}}^{z}= & {} ( {{{\varvec{I}}}-( {{{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{MX}}}_{{\varvec{b}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{b}}} ( {{{\varvec{Q}}}_{{\varvec{b}}}^{\prime } ( {{{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{MX}}}_{{\varvec{b}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{b}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{b}}}^{\prime } } )( {{{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{MX}}}_{{\varvec{b}}} } )^{-1}{{\varvec{X}}}_{{\varvec{b}}}^{\prime } {{\varvec{M}}} ( {\hat{{\varvec{\theta }}} -{{\varvec{X}}}_\mu \mu ^{z-1} -{{\varvec{X}}}_{{\varvec{e}}} {{\varvec{e}}}^{z-1}-{{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}}^{z-1}} )\\ {{\varvec{a}}}^{z}= & {} ( {{{\varvec{I}}}-( {{{\varvec{X}}}_{{\varvec{a}}}^{\prime } {{\varvec{MX}}}_{{\varvec{a}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{a}}} ( {{{\varvec{Q}}}_{{\varvec{a}}}^{\prime } ( {{{\varvec{X}}}_{{\varvec{a}}}^{\prime } {{\varvec{MX}}}_{{\varvec{a}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{a}}} } )^{-1}{{\varvec{Q}}}_{{\varvec{a}}}^{\prime } } )( {{{\varvec{X}}}_{{\varvec{a}}}^{\prime } {{\varvec{MX}}}_{{\varvec{a}}} } )^{-1}{{\varvec{X}}}_{{\varvec{a}}}^{\prime } {{\varvec{M}}} ( {\hat{{\varvec{\theta }}} -{{\varvec{X}}}_\mu \mu ^{z-1} -{{\varvec{X}}}_{{\varvec{e}}} {{\varvec{e}}}^{z-1}-{{\varvec{X}}}_{{\varvec{g}}} {{\varvec{g}}}^{z-1}} ) \end{aligned}$$

where \({{\varvec{X}}}_{{\varvec{a}}} =({{{\varvec{B}}}^{z}\otimes {{\varvec{I}}}_I}), {{\varvec{X}}}_{{\varvec{b}}} =({{{\varvec{I}}}_J \otimes {{\varvec{A}}}^{z-1}}), {{\varvec{Q}}}_{{\varvec{e}}} =\mathbf{1}_J, {{\varvec{Q}}}_{{\varvec{g}}} =\mathbf{1}_I, {{\varvec{Q}}}_{{\varvec{b}}} =({\mathbf{1}_J \otimes {{\varvec{I}}}_n })\), and \({{\varvec{Q}}}_{{\varvec{a}}} =({{{\varvec{I}}}_n \otimes \mathbf{1}_I } )\).
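Each of the constrained updates above has the same "project, then solve" shape. A minimal sketch of a single block update (Python/NumPy, function and argument names ours) with `X` the block's design matrix, `M` the weighting matrix, `Q` the block's constraint coefficients, and `r` the current working residual:

```python
import numpy as np

def constrained_block_update(X, M, Q, r):
    """One block update of the constrained Jacobi scheme:
    b = (I - R) (X'MX)^{-1} X'M r, with
    R = (X'MX)^{-1} Q (Q' (X'MX)^{-1} Q)^{-1} Q'.
    By construction Q'b = 0, i.e. the sum-to-zero constraint holds."""
    C = np.linalg.inv(X.T @ M @ X)
    R = C @ Q @ np.linalg.inv(Q.T @ C @ Q) @ Q.T
    return (np.eye(X.shape[1]) - R) @ C @ X.T @ M @ r
```

For example, with `X` and `M` both the identity and `Q` a column of ones, the update simply centers the residual `r`.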

The convergence of this iterative algorithm can be accelerated by using the effects already estimated in the zth iteration to estimate the subsequent effects (a Gauss–Seidel-type update), e.g., using \(\mu ^{z}\) to estimate \({{\varvec{e}}}^{z}\) (Figure 2).

Algorithm 3

Algorithm 3 is also implemented using the Jacobi iterative method. The difference from Algorithm 2 is that the weighted SVD proposed by Rodrigues et al. (2014) is used to estimate the multiplicative interaction. In matrix form, the weighted SVD (Rodrigues et al. 2014) is computed by

$$\begin{aligned} {{\varvec{W}}}_{\mathrm{SVD}}^z =\mathrm{SVD}[{{{\varvec{M}}}\odot {{\varvec{Z}}} +({\mathbf{11}^{{\prime }}-{{\varvec{M}}}})\odot {{\varvec{W}}}_{\mathrm{SVD}}^{z-1}}], \end{aligned}$$

where \({{\varvec{Z}}}\) is the data matrix, \({{\varvec{M}}}\) is a matrix containing weights that are smaller than or equal to one, \({{\varvec{W}}}_{\mathrm{SVD}}^{z-1} \) is the matrix of interaction effects estimated in the \(({z-1})\)th iteration using n multiplicative terms, \(\odot \) is the Hadamard (elementwise) product of matrices, and SVD\([\cdot ]\) denotes the rank-n approximation obtained from the SVD of a matrix. Using the \(vec(\cdot )\) operator, the argument of the SVD can be written as

$$\begin{aligned} vec({{{\varvec{M}}}\odot {{\varvec{Z}}}+({\mathbf{11}^{{\prime }} -{{\varvec{M}}}})\odot {{\varvec{W}}}_{\mathrm{SVD}}^{z-1}}) ={{\varvec{D}}}vec({{\varvec{Z}}})+({{{\varvec{I}}}-{{\varvec{D}}}} )vec({{{\varvec{W}}}_{\mathrm{SVD}}^{z-1}}), \end{aligned}$$

where \({{\varvec{D}}}\) is a diagonal matrix containing the elements of \({{\varvec{M}}}\) on the diagonal. In this representation, a non-diagonal \({{\varvec{D}}}\) may also be used to estimate \({{\varvec{W}}}_{\mathrm{SVD}}\) by the weighted SVD proposed by Rodrigues et al. (2014). The iterative estimation of \({{\varvec{W}}}_{\mathrm{SVD}}\) can be incorporated in the Jacobi iteration by replacing the estimates of \({{\varvec{a}}}^{z}\) and \({{\varvec{b}}}^{z}\) in Algorithm 2 by \(vec({{{\varvec{W}}}_{\mathrm{SVD}}^z })\) and by replacing \({{\varvec{X}}}_{{\varvec{b}}} {{\varvec{b}}}^{z-1}\) by \(vec({{{\varvec{W}}}_{\mathrm{SVD}}^{z-1} })\). In the Jacobi iteration proposed here, we used \({{\varvec{Z}}}=\hat{{\varvec{\varTheta }}} -\mu ^{z} \mathbf{1}_I \mathbf{1}_J^{\prime } - {{\varvec{g}}} ^{{{\varvec{z}}}} \mathbf{1}_J^{\prime } -\mathbf{1}_I {{\varvec{e}}^{\prime }}^{z}\) to compute the weighted SVD. Because of the weights, the matrix subjected to the SVD need not be row- and column-centered; therefore, the left and right singular vectors of \({{\varvec{W}}}_{\mathrm{SVD}}^z\) are centered to establish the constraints of the AMMI/GGE model (Figure 3).
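The weighted-SVD step can be sketched as follows (Python/NumPy, our own names; an illustrative sketch of the impute-and-truncate iteration in the style of Rodrigues et al. 2014, not their code). `W` holds the elementwise weights in [0, 1], and the rank-n truncation of the SVD plays the role of SVD[·].

```python
import numpy as np

def weighted_svd(Z, W, n, n_iter=200):
    """Weighted rank-n approximation: impute low-weight cells with the
    current fit, take a truncated SVD, and repeat.
    Z : data matrix; W : elementwise weights in [0, 1]; n : rank.
    Sketch only; fixed iteration count, no convergence check."""
    W_fit = np.zeros_like(Z)
    for _ in range(n_iter):
        filled = W * Z + (1.0 - W) * W_fit       # M . Z + (11' - M) . W_fit
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        W_fit = (U[:, :n] * s[:n]) @ Vt[:n]      # rank-n truncation
    return W_fit
```

With all weights equal to one, a single iteration reduces to the ordinary truncated SVD of `Z`.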


Cite this article

Hadasch, S., Forkman, J., Malik, W.A. et al. Weighted Estimation of AMMI and GGE Models. JABES 23, 255–275 (2018). https://doi.org/10.1007/s13253-018-0323-z
