1 Introduction

Using finite element techniques, the equation of motion of an n-degree-of-freedom damped linear system in free vibration can be written in the form

$$ M_a \ddot{q}(t)+ D_a \dot{q}(t)+K_a q(t)=0. $$
(1)

Here, $q(t)$ is the displacement vector and $M_a$, $D_a$ and $K_a$ are the analytical mass, damping and stiffness matrices, respectively. In practical applications, the matrix $M_a$ is usually symmetric positive definite and $K_a$ is symmetric positive semi-definite. The damping matrix $D_a$ is hard to determine in practice; however, for the sake of computational convenience and other practical considerations, it is very often assumed to be symmetric. If a fundamental solution to (1) is represented by

$$q(t)=xe^{\lambda t}, $$

then the scalar λ and the vector x must solve the quadratic eigenvalue problem (QEP)

$$ \bigl(\lambda^2 M_a +\lambda D_a+K_a \bigr)x=0. $$
(2)

Complex numbers λ and nonzero complex vectors x for which this relation holds are, respectively, the eigenvalues and eigenvectors of the system. It is known that Eq. (2) has 2n finite eigenvalues over the complex field because the leading matrix coefficient $M_a$ is nonsingular. Note that the dynamical behaviour of the differential system (1) can usually be interpreted via the eigenvalues and eigenvectors of Eq. (2). Because of this connection, considerable effort has been devoted to the QEP in the literature. Readers are referred to the treatise by Tisseur and Meerbergen [1] for a good survey of many applications, mathematical properties, and a variety of numerical techniques for the QEP.
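For readers who wish to experiment numerically, the QEP (2) can be solved in MATLAB with the built-in function polyeig; the following minimal sketch, in which the system matrices are randomly generated stand-ins rather than data from this paper, illustrates that a system with nonsingular mass matrix has 2n finite eigenvalues.

% Solve the QEP (lambda^2*M + lambda*D + K)x = 0 for a small test system.
% The matrices below are random stand-ins, not an actual structural model.
n = 4;
M = diag(1 + rand(n,1));           % symmetric positive definite mass matrix
D = rand(n); D = D + D';           % symmetric damping matrix
K = rand(n); K = K*K';             % symmetric positive semi-definite stiffness matrix
[V, e] = polyeig(K, D, M);         % polyeig solves (K + lambda*D + lambda^2*M)x = 0
disp(numel(e))                     % prints 2n = 8: all eigenvalues are finite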

Model updating is a quadratic inverse eigenvalue problem (QIEP) concerning the modification of an existing but inaccurate model using measured modal data. The model updating process requires that the updated model reproduce a given set of measured data, replacing the corresponding quantities of the original analytical model, while preserving the symmetry of the original model. The problem of updating the damping and stiffness matrices simultaneously can be formulated mathematically as follows.

Problem 1

Let $\varLambda=\operatorname{diag}\{\lambda_1,\ldots,\lambda_p\}\in\mathbf{C}^{p\times p}$ and $X=[x_1,\ldots,x_p]\in\mathbf{C}^{n\times p}$ be the measured eigenvalue and eigenvector matrices, where $\operatorname{rank}(X)=p$, $p<n$, and both $\varLambda$ and $X$ are closed under complex conjugation in the sense that $\lambda_{2j}=\bar{\lambda}_{2j-1}\in\mathbf{C}$, $x_{2j}=\bar{x}_{2j-1}\in\mathbf{C}^n$ for $j=1,\ldots,l$, and $\lambda_k\in\mathbf{R}$, $x_k\in\mathbf{R}^n$ for $k=2l+1,\ldots,p$. Find real-valued symmetric matrices $D$ and $K$ such that

$$ M_a X\varLambda^{2}+ D X\varLambda+KX=0. $$
(3)

A quick count shows that (3) imposes $np$ equations on $n(n+1)$ unknowns (each of the symmetric matrices $D$ and $K$ contributes $n(n+1)/2$ independent entries); since $p<n$, the matrices $D$ and $K$ in Eq. (3) cannot be uniquely determined. On the other hand, the analytical matrices $D_a$ and $K_a$ are already good approximations of the true damping and stiffness matrices. The strategy for obtaining an improved model is therefore to find $D$ and $K$ that satisfy (3) and deviate as little as possible from $D_a$ and $K_a$. Thus, we should further solve the following optimal approximation problem.

Problem 2

Let $\mathbf{S}_{\mathbf{E}}$ be the solution set of Problem 1. Find \((\hat{D}, \hat{K}) \in \mathbf{S}_{\mathbf{E}}\) such that

$$ \bigl\|\hat{D}-D_a\bigr\|^2+\bigl\|\hat{K}-K_a\bigr\|^2=\min_{(D,K) \in \mathbf{S}_{\mathbf{E}}} \bigl(\|D-D_a\|^2+\|K-K_a\|^2 \bigr). $$
(4)

In this paper we provide a gradient based iterative (GI) algorithm to solve Problems 1 and 2. The proposed iterative method is developed from an optimization point of view and contains, as special cases, the well-known Jacobi iteration, the Gauss-Seidel iteration and some recently reported iterative algorithms based on the hierarchical identification principle [24]. The convergence analysis indicates that the iterative solutions generated by the GI algorithm always converge to the unique minimum Frobenius norm symmetric solution of Problem 2 when a suitable initial symmetric matrix pair is chosen. The merits of the proposed algorithm include: (i) it can be constructed easily without any factorizations of the known matrices; (ii) only matrix multiplication is required during the iteration; (iii) convergence of the algorithm is guaranteed provided the convergence factor μ is suitably chosen; and (iv) compared with the finite iterative method proposed by Yuan and Liu [5], the GI algorithm is simpler and easier to implement, and seems to have enough generality that, with suitable modifications, it can be applied to other types of structural dynamics model updating problems as well.

The model updating problem is a practical industrial problem that arises in vibration industries, including aerospace, automobile, manufacturing, and others. In these industries a theoretical finite element model described by (1) often needs to be updated using a few measured frequencies and mode shapes (eigenvalues and eigenvectors) from a real-life structure. The reason for doing so is that the theoretical model of a structure is constructed on the basis of highly idealized engineering blueprints and designs that may not truly represent all the physical aspects of an actual structure. In fact, an analytical (finite element) model will be erroneous owing to inevitable difficulties in modeling joints, boundary conditions and damping. When field dynamic tests are performed to validate the theoretical model, their results, commonly natural frequencies and mode shapes, inevitably do not coincide well with the results expected from the theoretical model. In this situation a vibration engineer needs to update the theoretical model so that inaccurate modeling assumptions are corrected in the original analytical model and the updated model may be considered a better dynamic representation of the structure. This model can then be used with greater confidence for the analysis of the structure under different boundary conditions.

In the past 30 years, structural dynamics model updating problems have received considerable attention. A significant number of model updating techniques for updating the mass and stiffness matrices of undamped systems (i.e., $D_a=0$) using measured response data have been discussed by Baruch [6], Baruch and Bar-Itzhack [7], Berman [8], Berman and Nagy [9], Wei [10–12], Yang and Chen [13], and Yuan [14, 15], among others. For an account of the earlier methods, we refer readers to the authoritative book by Friswell and Mottershead [16], which gives a comprehensive introduction to the basic theory of finite element model updating. For damped structural systems, the theory and computation have been considered by Friswell et al. [17], Pilkey [18], Kuo et al. [19], Chu et al. [20] and Yuan [21, 22], among others. Although these existing methods for updating damped structural systems are direct methods, their explicit solutions are difficult to obtain by matrix computation techniques, which restricts their usefulness in real applications. We notice that iterative methods for structural dynamics model updating have received little attention in recent years. Iterative algorithms are not only widely applied in system identification [23, 24], but have also been developed for solving linear matrix equations [4, 25]. In this paper we offer a simple yet effective iterative method to solve damped structural model updating problems. We believe that our method is new in the field and that its simple operations and easy implementation make it practical for large-scale applications.

The rest of the paper is outlined as follows. In Sect. 2, an efficient gradient based iterative method is presented to solve Problems 1 and 2 and the convergence properties are discussed. In Sect. 3, two numerical examples are used to test the effectiveness of the proposed algorithm. Concluding remarks are given in Sect. 4.

Throughout this paper, we shall adopt the following notation. $\mathbf{C}^{m\times n}$ and $\mathbf{R}^{m\times n}$ denote the sets of all $m\times n$ complex and real matrices, respectively. $\mathbf{SR}^{n\times n}$ denotes the set of all symmetric matrices in $\mathbf{R}^{n\times n}$. $A^\top$, $A^+$, $\operatorname{tr}(A)$ and $R(A)$ stand for the transpose, the Moore-Penrose generalized inverse, the trace and the column space of the matrix $A$, respectively. $\lambda_{\max}(M^\top M)$ denotes the maximum eigenvalue of $M^\top M$. $I_n$ represents the identity matrix of order $n$, and $\bar{\alpha}$ denotes the conjugate of the complex number $\alpha$. For $A,B\in\mathbf{R}^{m\times n}$, an inner product in $\mathbf{R}^{m\times n}$ is defined by $(A,B)=\operatorname{tr}(B^\top A)$; then $\mathbf{R}^{m\times n}$ is a Hilbert space. The matrix norm $\|\cdot\|$ induced by this inner product is the Frobenius norm. Given two matrices $A=[a_{ij}]\in\mathbf{R}^{m\times n}$ and $B\in\mathbf{R}^{p\times q}$, the Kronecker product of $A$ and $B$ is defined by $A\otimes B=[a_{ij}B]\in\mathbf{R}^{mp\times nq}$. Also, for an $m\times n$ matrix $A=[a_1,a_2,\ldots,a_n]$, where $a_i$, $i=1,\ldots,n$, is the $i$-th column vector of $A$, the stretching function $\operatorname{vec}(A)$ is defined as $\operatorname{vec}(A)=[a_1^\top, a_2^\top, \ldots, a_n^\top]^\top$.
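As a small numerical illustration of this notation (our own check, not part of the development), the well-known identity $\operatorname{vec}(AQB)=(B^\top\otimes A)\operatorname{vec}(Q)$, which underlies the Kronecker manipulations of Sect. 2, can be verified in MATLAB as follows.

% Verify vec(A*Q*B) == kron(B', A)*vec(Q) for arbitrary stand-in matrices.
A = rand(3,4); Q = rand(4,2); B = rand(2,5);
lhs = reshape(A*Q*B, [], 1);       % vec(A*Q*B): stack the columns
rhs = kron(B.', A)*reshape(Q, [], 1);
disp(norm(lhs - rhs))              % zero up to rounding error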

2 The solution of Problem 1 and Problem 2

Define a complex matrix $T_p$ as

$$ T_p=\operatorname{diag}\biggl(\,\underbrace{\frac{1}{\sqrt{2}}\left[\begin{array}{c@{\quad}c} 1 & -i \\ 1 & i \end{array}\right],\ \ldots,\ \frac{1}{\sqrt{2}}\left[\begin{array}{c@{\quad}c} 1 & -i \\ 1 & i \end{array}\right]}_{l\ \mathrm{blocks}},\ I_{p-2l}\biggr)\in\mathbf{C}^{p\times p}, $$
(5)

where $i=\sqrt{-1}$. It is easy to verify that $T_p$ is a unitary matrix, that is, $\bar{T}_p^\top T_p=I_p$. Using this transformation matrix, we have

$$ \tilde{\varLambda}=\bar{T}_p^\top\varLambda T_p=\operatorname{diag}\biggl(\left[\begin{array}{c@{\quad}c} \zeta_1 & \eta_1 \\ -\eta_1 & \zeta_1 \end{array}\right],\ \ldots,\ \left[\begin{array}{c@{\quad}c} \zeta_{2l-1} & \eta_{2l-1} \\ -\eta_{2l-1} & \zeta_{2l-1} \end{array}\right],\ \lambda_{2l+1},\ \ldots,\ \lambda_p\biggr)\in\mathbf{R}^{p\times p}, $$
(6)
$$ \tilde{X}=XT_p=[\sqrt{2}\,y_1,\ \sqrt{2}\,z_1,\ \ldots,\ \sqrt{2}\,y_{2l-1},\ \sqrt{2}\,z_{2l-1},\ x_{2l+1},\ \ldots,\ x_p]\in\mathbf{R}^{n\times p}, $$
(7)

where $\zeta_j$ and $\eta_j$ are respectively the real part and the imaginary part of the complex number $\lambda_j$, and $y_j$ and $z_j$ are respectively the real part and the imaginary part of the complex vector $x_j$ for $j=1,3,\ldots,2l-1$.
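The following MATLAB sketch, assuming the block-diagonal form of $T_p$ reconstructed in (5) and using made-up modal data for one conjugate pair and two real eigenpairs, checks that $T_p$ is unitary and that (6) and (7) indeed produce real matrices.

% Build T_p of (5) for p = 4, l = 1 and form the real matrices of (6)-(7).
% The eigendata below are illustrative stand-ins, not measured values.
p = 4; l = 1; n = 6;
blk = (1/sqrt(2))*[1, -1i; 1, 1i];         % 2x2 block for one conjugate pair
Tp  = blkdiag(blk, eye(p - 2*l));
lam = -1 + 2i;                             % sample complex eigenvalue
Lambda = diag([lam, conj(lam), -3, -5]);
X = [randn(n,1) + 1i*randn(n,1), zeros(n,1), randn(n,2)];
X(:,2) = conj(X(:,1));                     % closure under complex conjugation
disp(norm(Tp'*Tp - eye(p)))                % T_p is unitary
Lt = Tp'*Lambda*Tp;                        % Eq. (6): real block-diagonal matrix
Xt = X*Tp;                                 % Eq. (7): real eigenvector matrix
disp(norm(imag(Lt)) + norm(imag(Xt)))      % both imaginary parts vanish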

It follows from (6) and (7) that Eq. (3) can be equivalently written as

$$ M_a \tilde{X}\tilde{\varLambda}^2+ D \tilde{X}\tilde{\varLambda}+K\tilde{X}=0. $$

For a given symmetric matrix pair $(D_a, K_a)$, we have

$$ (D-D_a)\tilde{X}\tilde{\varLambda}+(K-K_a)\tilde{X}=-\bigl(M_a \tilde{X}\tilde{\varLambda}^2+ D_a \tilde{X}\tilde{\varLambda}+K_a\tilde{X}\bigr). $$

Let

$$ \tilde{D}=D-D_a, \qquad \tilde{K}=K-K_a, \qquad F=-\bigl(M_a \tilde{X}\tilde{\varLambda}^2+ D_a \tilde{X}\tilde{\varLambda}+K_a\tilde{X}\bigr); $$

then solving Problem 1 and Problem 2 is equivalent to finding the minimum Frobenius norm symmetric solution of the matrix equation

$$ \tilde{D} \tilde{X}\tilde{\varLambda}+\tilde{K} \tilde{X}=F. $$
(8)

We should point out that Eq. (8) is consistent whenever Problem 1 is solvable. In fact, let $(D, K)\in\mathbf{S}_{\mathbf{E}}$ be any solution of Problem 1. It is easy to check that $(D-D_a, K-K_a)$ is then a particular solution of (8). Therefore, once the minimum Frobenius norm solution \((\tilde{D}^{*}, \tilde{K}^{*})\) of (8) is obtained, the solution of the matrix optimal approximation Problem 2 can be computed. In this case, the solution $(\hat{D}, \hat{K})$ of Problem 2 can be expressed as

$$ \hat{D}=D_a+ \tilde{D}^*, \qquad \hat{K}=K_a+ \tilde{K}^*. $$
(9)

Lemma 1

[2, 3, 26]

If the linear equation system $Mx=b$, where $M\in\mathbf{R}^{m\times n}$, $b\in\mathbf{R}^m$, has a unique solution $x^*$, then the gradient based iterative algorithm

$$\left \{ \begin{array}{l} x_k=x_{k-1}+\mu M^\top(b-Mx_{k-1}), \\\noalign{\vspace*{3pt}} 0< \mu< \dfrac{2}{\lambda_{\max}(M^\top M)}, \end{array} \right . $$

yields $\lim_{k\rightarrow\infty} x_k=x^*$.
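A direct MATLAB transcription of Lemma 1 (with a random full column rank $M$ and a consistent right-hand side as stand-ins) reads as follows.

% Gradient based iteration of Lemma 1 for a consistent system M*x = b
% with a unique solution; M and x_true are illustrative stand-ins.
m = 8; n = 5;
M = randn(m, n);                   % full column rank with probability one
x_true = randn(n, 1);
b = M*x_true;                      % consistent right-hand side
mu = 1/max(eig(M'*M));             % any 0 < mu < 2/lambda_max(M'*M) works
x = zeros(n, 1);
for k = 1:5000
    x = x + mu*M'*(b - M*x);       % the update of Lemma 1
end
disp(norm(x - x_true))             % x_k has converged to the unique solution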

Lemma 2

[27]

Suppose that the consistent linear equation $Mx=b$, where $M\in\mathbf{R}^{m\times n}$, $b\in\mathbf{R}^m$, has a solution $x^*\in R(M^\top)$; then $x^*$ is the unique minimum Frobenius norm solution of the linear equation.

Lemma 3

Eq. (8) has a symmetric solution pair \((\tilde{D}, \tilde{K})\) if and only if the matrix equations

$$ \begin{aligned}[c] &\tilde{D} \tilde{X}\tilde{\varLambda}+\tilde{K} \tilde{X}=F, \\ &\tilde{\varLambda}^\top\tilde{X}^\top\tilde{D}+\tilde{X}^\top\tilde {K}=F^\top, \end{aligned} $$
(10)

are consistent.

Proof

If Eq. (8) has a symmetric solution pair \((\tilde{D}^{*}, \tilde{K}^{*})\), then \(\tilde{D}^{*} \tilde{X}\tilde{\varLambda}+\tilde{K}^{*} \tilde{X}=F\) and \((\tilde{D}^{*} \tilde{X}\tilde{\varLambda}+\tilde{K}^{*} \tilde{X})^{\top}= \tilde{\varLambda}^{\top}\tilde{X}^{\top}\tilde{D}^{*}+\tilde{X}^{\top}\tilde{K}^{*}=F^{\top}\). That is to say, \((\tilde{D}^{*}, \tilde{K}^{*})\) is a solution of (10).

Conversely, if the matrix equations of (10) have a solution, say, \(\tilde{D}=U\), \(\tilde{K}=V\), let \(\tilde{D}^{*}=\frac{1}{2}(U+U^{\top})\), \(\tilde{K}^{*}=\frac{1}{2}(V+V^{\top})\); then \(\tilde{D}^{*}\) and \(\tilde{K}^{*}\) are symmetric matrices, and

$$\tilde{D}^* \tilde{X}\tilde{\varLambda}+\tilde{K}^* \tilde{X} =\frac{1}{2}\bigl(U\tilde{X}\tilde{\varLambda}+V\tilde{X}\bigr) +\frac{1}{2}\bigl(U^\top\tilde{X}\tilde{\varLambda}+V^\top\tilde{X}\bigr) =\frac{1}{2}F+\frac{1}{2}\bigl(\tilde{\varLambda}^\top\tilde{X}^\top U+\tilde{X}^\top V\bigr)^\top =\frac{1}{2}F+\frac{1}{2}F=F. $$
Hence, \((\tilde{D}^{*}, \tilde{K}^{*})\) is a symmetric solution pair of (8).

Using the Kronecker product and the stretching function, we know that the equations of (10) are equivalent to

$$\left [ \begin{array}{c@{\quad}c} \tilde{\varLambda}^\top\tilde{X}^\top\otimes I_n & \tilde{X}^\top \otimes I_n \\ I_n\otimes\tilde{\varLambda}^\top\tilde{X}^\top& I_n\otimes\tilde {X}^\top \end{array} \right ]\left [ \begin{array}{c} {\mathrm{vec}}(\tilde{D}) \\ {\mathrm{vec}}(\tilde{K}) \end{array} \right ]=\left [ \begin{array}{c} {\mathrm{vec}}(F) \\ {\mathrm{vec}}(F^\top) \end{array} \right ]. $$

Let

$$M=\left [ \begin{array}{c@{\quad}c} \tilde{\varLambda}^\top\tilde{X}^\top\otimes I_n & \tilde{X}^\top \otimes I_n \\ I_n\otimes\tilde{\varLambda}^\top\tilde{X}^\top& I_n\otimes\tilde {X}^\top \end{array} \right ]. $$
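As a small numerical sanity check (our own, with random stand-in matrices), one can confirm that this $M$ indeed stacks the two equations of (10):

% Check that M*[vec(Dt); vec(Kt)] stacks vec of the two equations in (10).
n = 4; p = 2;
Xt = randn(n, p); Lt = randn(p);
Dt = randn(n); Dt = Dt + Dt'; Kt = randn(n); Kt = Kt + Kt';
M = [kron((Xt*Lt)', eye(n)), kron(Xt', eye(n)); ...
     kron(eye(n), Lt'*Xt'),  kron(eye(n), Xt')];
lhs = M*[Dt(:); Kt(:)];
rhs = [reshape(Dt*Xt*Lt + Kt*Xt, [], 1); reshape(Lt'*Xt'*Dt + Xt'*Kt, [], 1)];
disp(norm(lhs - rhs))              % zero up to rounding error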

According to Lemma 1, the gradient based iterative algorithm for the equations of (10) reads as follows:

$$\left[\begin{array}{c} {\mathrm{vec}}(\tilde{D}_s) \\ {\mathrm{vec}}(\tilde{K}_s) \end{array}\right] =\left[\begin{array}{c} {\mathrm{vec}}(\tilde{D}_{s-1}) \\ {\mathrm{vec}}(\tilde{K}_{s-1}) \end{array}\right] +\mu M^\top\left(\left[\begin{array}{c} {\mathrm{vec}}(F) \\ {\mathrm{vec}}(F^\top) \end{array}\right] -M\left[\begin{array}{c} {\mathrm{vec}}(\tilde{D}_{s-1}) \\ {\mathrm{vec}}(\tilde{K}_{s-1}) \end{array}\right]\right). $$
(11)

After some algebraic manipulation, this yields

$$ \begin{aligned}[b] \tilde{D}_s={}&\tilde{D}_{s-1}+\mu\bigl[\bigl(F-\tilde{D}_{s-1}\tilde{X}\tilde{\varLambda}-\tilde{K}_{s-1}\tilde{X}\bigr)\tilde{\varLambda}^\top\tilde{X}^\top \\ &{}+\tilde{X}\tilde{\varLambda}\bigl(F-\tilde{D}_{s-1}\tilde{X}\tilde{\varLambda}-\tilde{K}_{s-1}\tilde{X}\bigr)^\top\bigr], \end{aligned} $$
(12)
$$ \begin{aligned}[b] \tilde{K}_s={}&\tilde{K}_{s-1}+\mu\bigl[\bigl(F-\tilde{D}_{s-1}\tilde{X}\tilde{\varLambda}-\tilde{K}_{s-1}\tilde{X}\bigr)\tilde{X}^\top \\ &{}+\tilde{X}\bigl(F-\tilde{D}_{s-1}\tilde{X}\tilde{\varLambda}-\tilde{K}_{s-1}\tilde{X}\bigr)^\top\bigr]. \end{aligned} $$
(13)

From (12) and (13) we can easily see that if the initial matrices \(\tilde{D}_{0}\), \(\tilde{K}_{0} \in \mathbf{SR}^{n \times n}\), then \(\tilde{D}_{s} \in\mathbf{SR}^{n \times n}\) and \(\tilde{K}_{s} \in\mathbf{SR}^{n \times n}\) for s=1,2,… . □

Theorem 1

Suppose that Eq. (8) has a unique symmetric solution \((\tilde{D}^{*}, \tilde{K}^{*})\). If we choose the convergence factor $\mu$ such that

$$ 0<\mu<\mu_0:=\frac{1}{\lambda_{\max}\bigl(\tilde{\varLambda}^\top\tilde{X}^\top\tilde{X}\tilde{\varLambda}\bigr)+\lambda_{\max}\bigl(\tilde{X}^\top\tilde{X}\bigr)}, $$
(14)

then the sequences \(\{\tilde{D}_{s}\}\) and \(\{\tilde{K}_{s}\}\) generated by (12) and (13) satisfy

$$ \lim_{s\rightarrow\infty} \tilde{D}_s =\tilde{D}^*, \qquad \lim _{s\rightarrow\infty} \tilde{K}_s =\tilde{K}^* $$
(15)

for an arbitrary initial symmetric matrix pair \(( \tilde{D}_{0}, \tilde{K}_{0})\) with \(\tilde{D}_{0}, \tilde{K}_{0} \in \mathbf{SR}^{n \times n}\).

Proof

Define the error matrices \(\tilde{D}_{s}^{*}\) and \(\tilde {K}_{s}^{*}\) as

$$\tilde{D}_s^*= \tilde{D}_s-\tilde{D}^*, \qquad \tilde{K}_s^*= \tilde {K}_s-\tilde{K}^*. $$

Using (12), (13) and (10), we have

$$ \tilde{D}_s^*=\tilde{D}_{s-1}^*-\mu\bigl[\bigl(\tilde{D}_{s-1}^*\tilde{X}\tilde{\varLambda}+\tilde{K}_{s-1}^*\tilde{X}\bigr)\tilde{\varLambda}^\top\tilde{X}^\top+\tilde{X}\tilde{\varLambda}\bigl(\tilde{D}_{s-1}^*\tilde{X}\tilde{\varLambda}+\tilde{K}_{s-1}^*\tilde{X}\bigr)^\top\bigr], $$
(16)
$$ \tilde{K}_s^*=\tilde{K}_{s-1}^*-\mu\bigl[\bigl(\tilde{D}_{s-1}^*\tilde{X}\tilde{\varLambda}+\tilde{K}_{s-1}^*\tilde{X}\bigr)\tilde{X}^\top+\tilde{X}\bigl(\tilde{D}_{s-1}^*\tilde{X}\tilde{\varLambda}+\tilde{K}_{s-1}^*\tilde{X}\bigr)^\top\bigr]. $$
(17)

Let

$$P_{s-1}=\tilde{D}_{s-1}^* \tilde{X}\tilde{\varLambda}, \qquad Q_{s-1}=\tilde {K}_{s-1}^* \tilde{X}. $$

By (16) and noting the symmetry of \(\tilde{D}_{s}^{*}\), $s=0,1,\ldots$, we obtain

$$ \begin{aligned}[b] \bigl\|\tilde{D}_s^*\bigr\|^2 =& \bigl\|\tilde{D}_{s-1}^*\bigr\|^2-4\mu\|P_{s-1}\|^2-4\mu \operatorname{tr} \bigl(P_{s-1}^\top Q_{s-1}\bigr) \\ &{} +\mu^2\bigl\|(P_{s-1}+Q_{s-1})\tilde{\varLambda}^\top\tilde {X}^\top \\ &{}+\tilde{X}\tilde{\varLambda}\bigl(P_{s-1}^\top+Q_{s-1}^\top\bigr)\bigr\|^2. \end{aligned} $$
(18)

Observe that

$$\bigl\|(P_{s-1}+Q_{s-1})\tilde{\varLambda}^\top\tilde{X}^\top+\tilde{X}\tilde{\varLambda}(P_{s-1}+Q_{s-1})^\top\bigr\|^2 \leq4\bigl\|(P_{s-1}+Q_{s-1})\tilde{\varLambda}^\top\tilde{X}^\top\bigr\|^2 \leq4\lambda_{\max}\bigl(\tilde{\varLambda}^\top\tilde{X}^\top\tilde{X}\tilde{\varLambda}\bigr)\|P_{s-1}+Q_{s-1}\|^2. $$
Thus, it follows from (18) that

$$ \begin{aligned}[b] \bigl\|\tilde{D}_s^*\bigr\|^2\leq{}& \bigl\|\tilde{D}_{s-1}^*\bigr\|^2-4\mu\|P_{s-1}\|^2-4\mu\operatorname{tr}\bigl(P_{s-1}^\top Q_{s-1}\bigr) \\ &{}+4\mu^2\lambda_{\max}\bigl(\tilde{\varLambda}^\top\tilde{X}^\top\tilde{X}\tilde{\varLambda}\bigr)\|P_{s-1}+Q_{s-1}\|^2. \end{aligned} $$
(19)

Similarly, by (17) we can obtain

$$ \begin{aligned}[b] \bigl\|\tilde{K}_s^*\bigr\|^2\leq& \bigl\|\tilde{K}_{s-1}^*\bigr\|^2-4\mu\|Q_{s-1}\| ^2-4\mu\mbox{tr} \bigl(P_{s-1}^\top Q_{s-1}\bigr) \\ &{} + 4 \mu^2\lambda_{\max}\bigl(\tilde{X}^\top\tilde{X}\bigr)\| P_{s-1}+Q_{s-1}\|^2. \end{aligned} $$
(20)

Note that

$$\|P_{s-1}+Q_{s-1}\|^2=\|P_{s-1}\|^2+2\operatorname{tr}\bigl(P_{s-1}^\top Q_{s-1}\bigr)+\|Q_{s-1}\|^2. $$

Therefore, from (19) and (20) we have

$$ \begin{aligned}[b] \bigl\|\tilde{D}_s^*\bigr\|^2+\bigl\|\tilde{K}_s^*\bigr\|^2\leq{}& \bigl\|\tilde{D}_{s-1}^*\bigr\|^2+\bigl\|\tilde{K}_{s-1}^*\bigr\|^2 \\ &{}-4\mu\bigl[1-\mu\bigl(\lambda_{\max}\bigl(\tilde{\varLambda}^\top\tilde{X}^\top\tilde{X}\tilde{\varLambda}\bigr)+\lambda_{\max}\bigl(\tilde{X}^\top\tilde{X}\bigr)\bigr)\bigr]\|P_{s-1}+Q_{s-1}\|^2. \end{aligned} $$
(21)

If the convergence factor μ is chosen to satisfy 0<μ<μ 0, then the inequality of (21) implies that

$$4\mu\bigl(1-\mu\mu_0^{-1}\bigr)\sum_{s=0}^{\infty}\|P_s+Q_s\|^2\leq\bigl\|\tilde{D}_0^*\bigr\|^2+\bigl\|\tilde{K}_0^*\bigr\|^2<\infty, $$

or

$$\sum_{s=0}^{\infty}\|P_{s}+Q_{s} \|^2<\infty, $$

it follows that

$$P_{s}+Q_{s}\rightarrow0, \quad \mbox{as } s\rightarrow \infty, $$

or equivalently,

$$\tilde{D}_{s}^* \tilde{X}\tilde{\varLambda}+\tilde{K}_{s}^* \tilde {X}\rightarrow0, \quad \mbox{as } s\rightarrow\infty. $$

Under the condition that the solution to Eq. (8) is unique, the linear map $(\tilde{D}, \tilde{K})\mapsto\tilde{D}\tilde{X}\tilde{\varLambda}+\tilde{K}\tilde{X}$ is injective, and hence bounded below, on the finite-dimensional space $\mathbf{SR}^{n\times n}\times\mathbf{SR}^{n\times n}$; we can therefore conclude that \(\tilde{D}_{s}^{*} \rightarrow0\) and \(\tilde{K}_{s}^{*}\rightarrow0\) as s→∞. This proves Theorem 1. □

Now, assume that $J\in\mathbf{R}^{n\times p}$ is an arbitrary matrix; then we have

$$\left[ \begin{array}{c@{\quad}c} \tilde{\varLambda}^\top\tilde{X}^\top\otimes I_n & \tilde{X}^\top\otimes I_n \\ I_n\otimes\tilde{\varLambda}^\top\tilde{X}^\top & I_n\otimes\tilde{X}^\top \end{array} \right]^\top \left[ \begin{array}{c} {\mathrm{vec}}(J) \\ {\mathrm{vec}}(J^\top) \end{array} \right] =\left[ \begin{array}{c} {\mathrm{vec}}(J\tilde{\varLambda}^\top\tilde{X}^\top+\tilde{X}\tilde{\varLambda}J^\top) \\ {\mathrm{vec}}(J\tilde{X}^\top+\tilde{X}J^\top) \end{array} \right]. $$

It is obvious that if we choose

$$ \begin{aligned}[c] &\tilde{D}_0=J\tilde{\varLambda}^\top\tilde{X}^\top+ \tilde{X}\tilde {\varLambda}J^\top, \\ &\tilde{K}_0=J \tilde{X}^\top+\tilde{X}J^\top, \end{aligned} $$
(22)

then all \(\tilde{D}_{s}\) and \(\tilde{K}_{s}\) generated by (12) and (13) satisfy

$$\left [ \begin{array}{c} {\mathrm{vec}}(\tilde{D}_s) \\ {\mathrm{vec}}(\tilde{K}_s) \end{array} \right ] \in R\left ( \left [ \begin{array}{c@{\quad}c} \tilde{\varLambda}^\top\tilde{X}^\top\otimes I_n & \tilde{X}^\top \otimes I_n \\ I_n\otimes\tilde{\varLambda}^\top\tilde{X}^\top& I_n\otimes\tilde {X}^\top \end{array} \right ]^\top \right ). $$

It follows from Lemma 2 and Theorem 1 that if we choose the initial symmetric matrix pair by (22), then the iterative solution pair \((\tilde{D}_{s}, \tilde{K}_{s})\) obtained by the gradient iterative algorithm (12), (13) and (14) converges to the unique minimum Frobenius norm symmetric solution pair \((\tilde{D}^{*}, \tilde{K}^{*})\). Summarizing the above discussion, we have proved the following result.

Theorem 2

Suppose that the condition (14) is satisfied. If we choose the initial symmetric matrices by (22), where J is an arbitrary matrix, or, in particular, \(\tilde{D}_{0}=0\) and \(\tilde{K}_{0}=0\), then the iterative solution pair \((\tilde{D}_{s}, \tilde{K}_{s})\) obtained by the gradient iterative algorithm (12) and (13) converges to the unique minimum Frobenius norm symmetric solution pair \((\tilde{D}^{*}, \tilde{K}^{*})\) of Eq. (8), and the unique solution of Problem 2 is achieved and given by (9).
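For concreteness, the following is a minimal MATLAB sketch of the overall procedure of Theorem 2; the function name gi_update, its argument list and the safety factor 0.99 inside the bound (14) are our own choices, and the denominator of the relative residual uses the analytical matrices, as in Example 1 below.

function [Dhat, Khat] = gi_update(Ma, Da, Ka, Lt, Xt, tol, maxit)
% GI algorithm (12)-(13) with the zero initial matrices of Theorem 2 and a
% convergence factor chosen inside the bound (14). Lt and Xt are the real
% matrices of (6) and (7). The interface is illustrative, not from the paper.
F  = -(Ma*Xt*Lt^2 + Da*Xt*Lt + Ka*Xt);           % right-hand side of Eq. (8)
n  = size(Ma, 1);
Dt = zeros(n); Kt = zeros(n);                    % D0 = 0, K0 = 0, see Theorem 2
A  = Xt*Lt;                                      % shorthand for Xt*Lambda_tilde
mu = 0.99/(max(eig(A'*A)) + max(eig(Xt'*Xt)));   % safely inside (14)
den = norm(Ma*Xt*Lt^2,'fro') + norm(Da*Xt*Lt,'fro') + norm(Ka*Xt,'fro');
for s = 1:maxit
    R  = F - Dt*A - Kt*Xt;                       % residual of Eq. (8)
    Dt = Dt + mu*(R*A' + A*R');                  % update (12); preserves symmetry
    Kt = Kt + mu*(R*Xt' + Xt*R');                % update (13); preserves symmetry
    if norm(F - Dt*A - Kt*Xt, 'fro')/den <= tol
        break                                    % stopping rule delta_s <= tol
    end
end
Dhat = Da + Dt; Khat = Ka + Kt;                  % updated model, Eq. (9)
end

A call such as [Dhat, Khat] = gi_update(Ma, Da, Ka, Lt, Xt, 1.0e-005, 10000) then returns the updated damping and stiffness matrices of (9).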

3 Numerical examples

In this section, we give two numerical examples to illustrate our results. The tests were performed using MATLAB 6.5. The iteration is stopped when the relative residual satisfies \(\delta_{s}=\frac{\|F-\tilde{D}_{s}\tilde{X}\tilde{\varLambda}-\tilde{K}_{s}\tilde{X}\|}{\|M_{a}\tilde{X}\tilde{\varLambda}^{2}\|+\|\hat{D}\tilde{X}\tilde{\varLambda}\|+\|\hat{K}\tilde{X}\|}\leq1.0\mathrm{e}{-}005\).

Example 1

[28]

Consider an analytical five-degree-of-freedom system with mass, stiffness and damping matrices given by $M_a=\operatorname{diag}\{1,2,5,4,3\}$, $K_a=[k_{aij}]_{5\times5}$ and $D_a=[d_{aij}]_{5\times5}$, where $K_a$ and $D_a$ are real-valued symmetric tridiagonal matrices with $k_{a11}=100$, $k_{a12}=-20$, $k_{a22}=120$, $k_{a23}=-35$, $k_{a33}=80$, $k_{a34}=-12$, $k_{a44}=95$, $k_{a45}=-40$, $k_{a55}=124$; $d_{a11}=11$, $d_{a12}=-2$, $d_{a22}=14$, $d_{a23}=-3.5$, $d_{a33}=13$, $d_{a34}=-1.2$, $d_{a44}=13.5$, $d_{a45}=-4$, $d_{a55}=15.4$.

The model used to simulate the consistent experimental data is given by $M=M_a$, $D=D_a$ and $K=[k_{ij}]\in\mathbf{R}^{5\times5}$, where $K$ is a symmetric tridiagonal matrix with $k_{11}=100$, $k_{12}=-20$, $k_{22}=120$, $k_{23}=-35$, $k_{33}=70$, $k_{34}=-12$, $k_{44}=95$, $k_{45}=-40$, $k_{55}=124$. Note that the difference between $K_a$ and $K$ lies in the (3,3) element. The eigensolution of the experimental model is used to create the experimental modal data. It is assumed that the measured eigenvalue and eigenvector matrices $\varLambda$ and $X$ are given by

$$\varLambda=\mbox{diag} \{ -1.116+3.057i, -1.116-3.057i \} $$

and

$$X= \small{ \begin{aligned} \left [ \begin{array}{c@{\quad}c} -0.03277 - 0.065568i & -0.03277 + 0.065568i\\ -0.14847 - 0.29217i & -0.14847 + 0.29217i\\ -0.4105 - 0.78255i & -0.4105 + 0.78255i\\ -0.11822 - 0.27484i & -0.11822 + 0.27484i\\ -0.048925 - 0.12011i & -0.048925 + 0.12011i \end{array} \right ]. \end{aligned}} $$
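In MATLAB, this data set can be assembled and the iteration run with the hypothetical gi_update function sketched in Sect. 2 (which computes its convergence factor from (14) rather than taking the value μ = 1/23 used below):

% Data of Example 1; gi_update is the illustrative function from Sect. 2.
Ma = diag([1 2 5 4 3]);
Ka = diag([100 120 80 95 124]) + diag([-20 -35 -12 -40], 1) ...
     + diag([-20 -35 -12 -40], -1);
Da = diag([11 14 13 13.5 15.4]) + diag([-2 -3.5 -1.2 -4], 1) ...
     + diag([-2 -3.5 -1.2 -4], -1);
lam = -1.116 + 3.057i;
x1  = [-0.03277-0.065568i; -0.14847-0.29217i; -0.4105-0.78255i; ...
       -0.11822-0.27484i; -0.048925-0.12011i];
Tp  = (1/sqrt(2))*[1, -1i; 1, 1i];          % p = 2, l = 1 in (5)
Lt  = real(Tp'*diag([lam, conj(lam)])*Tp);  % Eq. (6); real() strips rounding residue
Xt  = real([x1, conj(x1)]*Tp);              % Eq. (7)
[Dhat, Khat] = gi_update(Ma, Da, Ka, Lt, Xt, 1.0e-005, 10000);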

Taking \(\tilde{D}_{0}=0\), \(\tilde{K}_{0}=0\) and \(\mu=\frac{1}{23}\), we apply the GI algorithm (12), (13) to compute \((\tilde{D}_{s}, \tilde{K}_{s})\). The relative residual δ s versus the iteration number s is shown in Fig. 1. From Fig. 1, it is clear that the relative residual δ s decreases and approaches zero as the iteration number s increases, which indicates that the proposed algorithm is effective and convergent. After 100 iteration steps, we obtain the minimum Frobenius norm solution \(( {\tilde{D}}^{*}, { \tilde{K} }^{*})\) of Eq. (8) as follows:

$${\tilde{D}}^*=\tilde{D}_{100}= \left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -0.0031 & -0.0097 & 0.0205 & -0.0464 & -0.0250\\[2pt] -0.0097 & -0.0241 & 0.0773 & -0.1902 & -0.1045\\[2pt] 0.0205 & 0.0773 & -0.1006 & 0.1941 & 0.1003\\ [2pt] -0.0464 & -0.1902 & 0.1941 & -0.3241 & -0.1602\\[2pt] -0.0250 & -0.1045 & 0.1003 & -0.1602 & -0.0779 \end{array} \right ], $$
$${\tilde{K}}^*=\tilde{K}_{100}=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 0.0412 & 0.1841 & -0.1482 & 0.1702 & 0.0740\\[2pt] 0.1841 & 0.8219 & -0.6594 & 0.7603 & 0.3306\\[2pt] -0.1482 & -0.6594 & -9.4593 & -0.6281 & -0.2755\\[2pt] 0.1702 & 0.7603 & -0.6281 & 0.7000 & 0.3040\\[2pt] 0.0740 & 0.3306 & -0.2755 & 0.3040 & 0.1320 \end{array} \right ], $$

with corresponding relative residual

$$\begin{aligned} \delta_{100}&=\frac{\|F-\tilde{D}_{100}\tilde{X}\tilde{\varLambda}-\tilde {K}_{100}\tilde{X}\|}{\|M_a\tilde{X}\tilde{\varLambda}^2\| +\| D_a \tilde{X}\tilde{\varLambda}\|+\|K\tilde{X}\|} \\[3pt] &= 9.9227\mathrm{e}{-}006. \end{aligned} $$
Fig. 1: The relative residual δ s versus the iteration number s for the GI algorithm

Therefore, by (9), the updated damping and stiffness matrices are given by

$$\hat{D}=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 10.9969 & -2.0097 & 0.0205 & -0.0464 & -0.0250\\[2pt] -2.0097 & 13.9759 & -3.4227 & -0.1902 & -0.1045\\[2pt] 0.0205 & -3.4227 & 12.8994 & -1.0059 & 0.1003\\[2pt] -0.0464 & -0.1902 & -1.0059 & 13.1759 & -4.1602\\ [2pt] -0.0250 & -0.1045 & 0.1003 & -4.1602 & 15.3221 \end{array} \right ], $$
$$\hat{K}=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 100.0412 & -19.8159 & -0.1482 & 0.1702 & 0.0740\\ -19.8159 & 120.8219 & -35.6594 & 0.7603 & 0.3306\\ -0.1482 & -35.6594 & 70.5407 & -12.6281 & -0.2755\\ 0.1702 & 0.7603 & -12.6281 & 95.7000 & -39.6960\\ 0.0740 & 0.3306 & -0.2755 & -39.6960 & 124.1320\\ \end{array} \right ]. $$

Inspection shows that although all the elements of the damping and stiffness matrices have been adjusted, the algorithm has concentrated the major change in the proper location.

Example 2

Consider a model updating problem. The original model is the statically condensed oil rig model $(M_a, D_a, K_a)$ represented by the triplet BCSSTRUC1 in the Harwell-Boeing collection [29]. In this model, $M_a, K_a\in\mathbf{R}^{66\times66}$ are symmetric and positive definite, and $D_a=1.55I_{66}$. There are 132 eigenpairs.

The measured data for the experiment are simulated by reducing the (1,1) entry of the stiffness matrix from $K_a(1,1)=1990.33$ to $K(1,1)=1600$; that is, the difference between $K_a$ and $K$ lies in the (1,1) element. Assume that the measured eigenvalues are $\lambda_1=-34.62+574.48i$, $\lambda_2=-34.62-574.48i$, $\lambda_3=-12.865+465.35i$ and $\lambda_4=-12.865-465.35i$, and that the corresponding eigenvectors are the same as those of the experimental model $(M_a, D_a, K)$. Applying Theorem 2 and taking \(\tilde{D}_{0}=0\), \(\tilde{K}_{0}=0\) and $\mu=1.5106\mathrm{e}{-}006$, after 191 iteration steps we obtain the minimum Frobenius norm solution \(( \tilde{D}_{191}, \tilde{K}_{191})\) of Eq. (8), at which point the stopping criterion is met, i.e., the relative residual satisfies $\delta_{191}\leq1.0\mathrm{e}{-}005$.
Observe that the prescribed eigenvalues and eigenvectors have been embedded in the new model \(M_{a}\tilde{X}\tilde{\varLambda}^{2}+\hat{D} \tilde{X}\tilde{\varLambda}+\hat{K} \tilde{X}=0 \), where \(\hat{D}=D_{a}+ \tilde{D}_{191}\), \(\hat{K}=K_{a}+ \tilde{K}_{191}\).

4 Concluding remarks

A gradient based iterative algorithm has been developed to incorporate measured experimental modal data into an analytical finite element model with nonproportional damping, such that the adjusted finite element model more closely matches the experimental results. The approach is demonstrated by two numerical examples and reasonable results are produced.