Parameter identification in elastoplastic material models by Small Punch Tests and inverse analysis with model reduction

Small Punch Tests (SPT) are at present frequently employed for diagnostic analyses of metallic structural components and are considered in codes of practice, because the damage generated by miniature specimen extraction is small ("quasi-non-destructive" tests). This paper describes the following contribution to the state of the art of SPT practice: the assessment of material parameters through inverse analysis, made faster and more economical by model reduction through a Proper Orthogonal Decomposition (POD) procedure. The methodology is developed to assess material parameters entering diverse constitutive models, thus resulting in a more flexible identification framework, capable of addressing different kinds of material behaviour. Real experimental data are used throughout, and comparisons of computed stress-strain curves with those measured by typical tensile tests show excellent agreement.


Introduction
At present, a recurrent and meaningful problem in structural mechanics is the assessment of the safety margin, namely of the amplification factor which, applied to some typical external action (loading), may generate collapse. For trustworthiness of results, such a computational problem must be formulated by system discretization (usually by a finite element method) and by suitable constitutive models of materials, with reliable values attributed to the parameters contained in these models. Some of those parameters may be unknown, particularly because they change over time due to damaging phenomena. Therefore, parameter identification is necessary for structural diagnosis, in order to make computational "direct analyses" reliably apt to assess present safety factors in structural components or in full structures.
The experiments providing data for diagnostic inverse analysis have to cause as little structural damage as possible. Fully non-destructive tests, like those based on X-ray diffraction or ultrasonic emission, are apt to estimate only elastic properties, defects and residual stresses, e.g. [33,40]. Estimation of the inelastic properties of materials, specifically of the metals in structural components considered here, requires tests implying inelastic deformations.
Among the experiments most frequently performed on metallic structures in present industrial engineering practice, the following ones are considered herein: (a) classic uniaxial Tension Tests (TT), on specimens extracted from the investigated structure with obvious damage, or "destructiveness" in the present popular jargon [3]; (b) Small Punch Tests (SPT), in laboratory, on miniaturized disc-shaped specimens, "quasi-non-destructive" as a consequence, e.g. [1,20,27,30,37,42]; (c) Instrumented Indentation Tests (IIT), with historical roots in hardness tests, but at present with equipment apt to provide in digital form either penetration versus load, or imprint shape, or both, see e.g. [9,13,34]. At present, IIT techniques are adopted also at micro- and nano-scale and are considered "quasi-non-destructive" like the SPT. Even though SPT is more destructive than instrumented indentation, it offers certain advantages. Within the indentation test, the state of stress, particularly in the zone of inelastic deformation, is dominantly compressive (see e.g. [2,13]). In SPT a significant portion of material is subjected to tensile stress, which eventually leads to the formation of a major crack. This offers, in comparison with the indentation test, a more flexible framework and the possibility to include more complex constitutive models, like those which combine plasticity with damage or those taking into account a yield plateau, as will be shown in Sect. 3.
Significant applications of SPT technology for engineering practice have been recently reviewed, within an industrial framework, in e.g. [5,19].
The study presented in what follows concerns material parameter estimation based on experimental SPT techniques. Differently from previous studies, the assessment of parameters is here achieved through an "inverse analysis" made faster by performing test simulations through a "model reduction" technique based on Proper Orthogonal Decomposition. In this paper the material behaviour considered is limited to elastic-plastic metallic materials, without rate dependency (no creep, no dynamics), though some of the methodological results can and will be extended to such behaviour. Inverse analyses are performed on data collected from a typical SPT instrument, now widely available in laboratories, as briefly described in Sect. 2.
The contents of this paper are specifically intended as possible contributions to the improvement of SPT methodology with respect to the state-of-the-art practice, with the aim of extending a procedure previously applied successfully to other methodologies (see e.g. [4,8,11,13,25,32]). Further investigations, presented in [10], concern the employment of inverse analysis for the estimation of residual stresses based on measurements taken during the extraction of specimens for SPT, which thus might replace ad hoc tests (like Hole Drilling tests) with remarkable practical gains.

Experimental procedures and results
Reference is made in this study to a frequently adopted and now traditional SPT equipment, schematically described by Fig. 1 and by the following specifications, according to [22]. The miniature specimen is a disc with diameter d1 = 8.00 mm and thickness h = 0.50 mm. The periphery of the specimen is fixed by two dies, which embed it during the test by means of screws subjected to a suitable tightening torque. The cylindrical holes in the axially-symmetric lower and upper dies exhibit diameter d2 = 4.00 mm. Both dies will be regarded as rigid in view of their massive size with respect to the specimen. Testing without clamping the disc may reduce radial tension in it and hence might be of practical interest [39]. The cylindrical punch with vertical axis and hemispherical end (radius r = 1.25 mm), shown in Fig. 1, is subjected during the experiment to slow speed control (say 0.20 mm/min) and to a pre-established data sampling rate (say 20 samples/s). In test simulations this punch is assumed to be rigid. Finally, the push rod, shown in Fig. 1, provides, through suitable instrumentation apt to digitalization of measurements, vertical displacements of the central point of the specimen lower surface, together with correlated samples of the simultaneous punch displacements. The test results consist of a relationship between the displacement u of the lower mid-surface point and the force F exerted by the punch on the specimen (as its reaction to the imposed rod displacement). Usually, a decrease of the load F to 20% of its maximum is regarded as the end of the test. Figure 2a visualizes, as an example, the above F(u) diagram achieved in laboratory by means of the SPT equipment with the features specified in what precedes.
Since the results are protected by an industrial non-disclosure agreement, curves with real values cannot be shown. Therefore the quantities on abscissa and ordinate, namely displacement and force for the curve in Fig. 2a and strain and stress for the curve in Fig. 2b, are normalised by reference values (u*, F*, ε* and σ*), in order to take dimensionless form.
The material tested, concerning the structural components of a hydroelectric power plant, from a metallurgical standpoint can be specified as 1CrMoV steel. On the same steel (with specimen extraction nearby) a classic uniaxial tensile test led to the diagram of "engineering stress" σ (i.e. load divided by the original area of the specimen transversal cross section) versus "engineering strain" ε (i.e. elongation divided by the original length). The diagram is visualized in Fig. 2b, also in dimensionless form, and will be considered later (Sect. 4) for meaningful comparisons. The cylindrical specimen, tested by a usual tension test, exhibits 8.89 mm diameter and 35.56 mm length of the monitored cylindrical part. Procedures for parameter assessment employed at present in engineering practice on the basis of SPT are described in [22], where semi-empirical formulae are suggested in order to extrapolate stress-strain relations from the force-displacement curves measured during the test. Clearly, such relations are difficult to interpret in terms of the governing parameters entering an appropriate constitutive model.
The alternative and advantageous approach presented herein is based on the employment of an inverse analysis procedure apt to establish the transition from quantities measured in the test to the required material parameters. Such an approach is formulated as a minimization problem, where a suitably selected objective function is designed to quantify the difference between experimentally measured values and values provided by test simulations. The objective function is subsequently minimized with respect to the sought material parameters, on which the function depends through a test simulation. The procedure so designed turns out to be flexible, since more complex effects (i.e. creep, damage, fracture etc.) can be selectively included in the characterization. In recent studies, the inverse analysis approach has been employed for material characterization based on SPT. In order to mitigate the computing effort of the repeated FEM simulations required by minimization algorithms, either approximate solutions by interpolation between the parameters are sought [39], or simplified finite element models are used, still resulting in lengthy calibration procedures, as pointed out in [42].
In the procedure presented here, the computational burden related to test simulations is made consistent with routine industrial applications by employing a model reduction technique based on Proper Orthogonal Decomposition, trained once and for all. Simulations within the designed procedure are performed through a "reduced order" model, providing the results in real time, thus making the characterization procedure more economical. In the preliminary training phase, the approach can be an alternative to the employment of Artificial Neural Networks (ANN), as applied in [1]. However, in previous studies (see e.g. [4]) it was pointed out that gradient-based optimization algorithms, combined with reduced order models, result in more flexible procedures, since there are no over-fitting problems of the kind which may arise when ANN are applied to solve the inverse problem (see e.g. [25]).
The main features of the proposed identification method are outlined in the following section.

Inverse analysis procedure centred on Proper Orthogonal Decomposition
In this section the computational procedure adopted herein for parameter identification is summarized, with reference to a numerical application concerning the SPT experimental data visualized in Fig. 2a.
Compared to the present practice, the main novelty consists of the application of inverse analysis and "model reduction" by Proper Orthogonal Decomposition (POD), rooted in diverse scientific areas (see, e.g., [7,14]) and already investigated for applications to mechanical model calibration for diverse engineering purposes (e.g. [35,38]). The inverse analysis procedure is outlined below stage by stage, while some mathematical details are presented in "Appendix A".

Stage 1 Selection of the constitutive model and of parameters to estimate in it
The tensile test results of Fig. 2b suggest the adoption of an elastic-plastic-hardening material model, for possible structural analyses apt to assess the present safety margin of the structural components under investigation. The model selected here is the classical isotropic Huber-Hencky-von Mises model (HHM) with isotropic hardening (see, e.g., [31]), implemented in the commercial code Abaqus [17] and described, as for the plasticity part, by Eqs. (1)-(3). The symbols involved (with the usual representation of tensors by matrices) have the following meanings: σ and ε are the stress and strain tensors, the latter decomposed into elastic εe and plastic εp contributions; ε̄p is the equivalent plastic strain; S represents the deviatoric stress tensor; σ̄ is the effective strength parameter, which is defined by Eq. (2) as a uniaxial function of the equivalent plastic strain and governs the yield function through the hardening rule, Eq. (3). For the calibration of the above model the parameters E, σY and H are chosen as unknowns to be identified, whereas the Poisson ratio is assumed a priori known as ν = 0.3, a value irrelevant to practical purposes. Since stresses and deformations do not fluctuate in SP tests, differences between kinematic, isotropic or mixed hardening are here immaterial.
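Eqs. (1)-(3) were not recoverable from the extracted text. A sketch of the standard form of such a model is given below, under the assumption of a power-type hardening law (consistent with a dimensionless hardening exponent H, though the exact expression adopted by the authors may differ):

```latex
% Additive strain decomposition (elastic + plastic), cf. Eq. (1):
\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}^{e} + \boldsymbol{\varepsilon}^{p}
% Effective strength as a uniaxial function of the equivalent plastic
% strain (assumed power-law form), cf. Eq. (2):
\bar{\sigma}\left(\bar{\varepsilon}^{p}\right)
  = \sigma_{Y}\left(1 + \frac{E\,\bar{\varepsilon}^{p}}{\sigma_{Y}}\right)^{H}
% Von Mises yield function with isotropic hardening, cf. Eq. (3):
f\left(\boldsymbol{\sigma},\bar{\varepsilon}^{p}\right)
  = \sqrt{\tfrac{3}{2}\,\mathbf{S}:\mathbf{S}}
  - \bar{\sigma}\left(\bar{\varepsilon}^{p}\right) \le 0
```

In this hypothetical form, H = 0 recovers perfect plasticity, while the quantity E ε̄p/σY makes the hardening argument dimensionless, which is compatible with the dimensionless range later quoted for H.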
A modification of the constitutive model can be considered in order to account either for a yield plateau, quantified by a strain parameter εpl, or for a softening description. Such a modification is fairly simple to implement within the proposed procedure, since it merely consists of adding constitutive parameters to be identified through inverse analysis (IA).
The governing equations with a plateau read as Eqs. (4)-(5). A further modification of the model can account for a softening description, with an ad hoc strain parameter εcr, through Eqs. (6)-(7).

Stage 2 Design of the experiment

The adopted SPT instruments have been specified in Sect. 2 and the measurements resulting from a test are visualized in Fig. 2a. Possible experimental improvements confined to the present equipment, mentioned in Sect. 1, will be discussed in subsequent sections.
Stage 3 Test simulations

A traditional finite element method, implemented in the Abaqus commercial code [17], has been employed with the following features: mesh with axial symmetry, shown in Fig. 3; quadrilateral FEs, each one with four nodes; 650 degrees of freedom; boundary conditions shown in Fig. 3; large strain assumption; interfaces with Coulomb friction, a priori known, assuming as parameter f = 0.15 with no dilatancy (therefore no normality). Clearly, axial symmetry is expected to be lost in the final stage of SPTs (and of TTs as well), as shown in Fig. 4. Therefore, a three-dimensional FE mesh is adopted for comparative test simulations and is shown in Fig. 5. The density of DOFs and the boundary conditions are practically the same as in the preceding FE model. The softening stage with large strains up to fracture implies loss of axial symmetry, but preserves symmetry with respect to a plane through the vertical axis, with main orientation depending on unpredictable local perturbations in the specimen. Such residual symmetry can be exploited, as visualized in Fig. 5, in order to reduce computing time. The material characterization presented in what follows does not include fracture in the specimen; in order to save computing time, a 2D model is adopted.

Stage 4 Selection of the search domain

The lower and upper bounds assumed for the sought parameters are gathered in Table 1, together with the mean values of their pairs. These mean values are here assumed as "reference values" of the sought parameters. The lower and upper bounds in the table should be provided by an expert on the analysed material; however, such bounds should be fairly wide, in order not to interfere with the optimisation algorithm. Indeed, in the examples treated in this paper in none of the cases did the algorithm converge to the parameter bounds, thus attesting the robustness of the procedure and evidencing that the identification would work even for looser bounds.
Note that the values in Table 1 are normalized, since they are related to an industrial research subjected to a non-disclosure agreement. As a more quantitative indication of the robustness of the procedure and of the class of materials to which the tested one belongs, the bounds of the search domain are selected here as quite wide, of the following orders: Young's modulus between 190 and 220 GPa; yield limit between 500 and 800 MPa; hardening parameter between 0.05 and 0.10.

Stage 5 Sensitivity analyses
Traditional sensitivity (Sij) primarily quantifies the influence of the sought parameters pj on the measurable quantities ui by the following formula (see, e.g., [29]), here reconstructed in the usual dimensionless form:

Sij = (p̄j / ûi) ∂ui/∂pj, evaluated at p = p̄,

where p̄j is a "reference value" (usually, like here, the centre of the conjectured interval) of the unknown parameter pj; vector p̄ gathers all these values; ûi is the i-th measurable quantity provided by the test simulation with p̄ in the input. Clearly, the partial derivatives have to be approximated by finite differences: here forward differences, with a 1% step ahead of p̄j for each pj. By attributing to ûi the value corresponding to the maximum load Fi in Fig. 2a, the sensitivity turns out to exhibit the values gathered in Table 2.
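The forward-difference scheme just described can be sketched as follows. The function `model` and the parameter values below are hypothetical stand-ins for the FE test simulation and the reference values of Table 1; they are not taken from the paper.

```python
# Sketch of Stage 5: dimensionless sensitivities estimated by forward
# finite differences with a 1% step, as described in the text.

def model(p):
    """Toy stand-in for the FE test simulation: maps parameters
    p = (E, sigma_Y, H) to one scalar measurable quantity."""
    E, sigma_y, H = p
    return sigma_y * (1.0 + E / sigma_y) ** H   # illustrative only

def sensitivities(model, p_ref, step=0.01):
    """S_j = (p_ref_j / u_ref) * du/dp_j, forward difference, 1% step."""
    u_ref = model(p_ref)
    S = []
    for j in range(len(p_ref)):
        p_pert = list(p_ref)
        p_pert[j] *= (1.0 + step)               # 1% step ahead of p_ref_j
        du = model(p_pert) - u_ref
        dp = p_ref[j] * step
        S.append(p_ref[j] / u_ref * du / dp)
    return S

# hypothetical reference values (centres of the intervals), MPa units
p_ref = [205e3, 650.0, 0.075]
S = sensitivities(model, p_ref)
```

Even on this toy response the sensitivity to the yield-limit-like parameter exceeds that to the modulus-like one, qualitatively mirroring the trend reported in the text.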
An additional sensitivity analysis (already proposed in [11]) may contribute useful orientation in the design of the testing procedure. First the lower bound p′j and, second, the upper bound p″j are attributed to parameter pj; by considering reference values for the other parameters, some representative measurable quantity ui is computed first assuming p′j and later p″j. The difference between the two resulting values of ui should turn out to be some orders of magnitude larger than the standard deviation of the experimental errors. However, statistical approaches are not considered in the present study.
By using as input for the parameters the reference values and the lower and upper bounds in Table 1, test simulations with a 3D mesh have led to the SPT pseudo-experimental data visualized by the plots in Fig. 6. The computing times on a computer with an Intel® Core™ i7-2600 CPU @ 3.4 GHz and 16 GB of RAM turned out to lie between 170 and 263 s. Figure 6 evidences a significantly smaller response curve range when Young's modulus is varied. This is partially related to the fact that Young's modulus has the smallest considered range, and partially due to the smaller sensitivity to this parameter. It is evidenced in the literature that also with the indentation test (see e.g. [4]) the sensitivity of measurable quantities to Young's modulus is significantly smaller than to the yield limit. Still, this sensitivity turned out to be good enough, since all the parameters were successfully identified.
Stage 6 Formulation of the "discrepancy function"

Obviously, the feature which characterizes the solution of the parameter search is the minimum of the discrepancy ω between the measured quantities (vector u) and their counterparts (vector û) as functions of the parameters p, the vector of unknown variables within the search domain Ω. Formally, the discrepancy is the quadratic form of Eq. (9),

ω(p) = [u − û(p)]ᵀ C⁻¹ [u − û(p)],

to be minimized over p ∈ Ω (Eq. (10)). The quadratic form, Eq. (9), of the differences is based on the inverse of the covariance matrix C of the experimental data. Such a matrix (symmetric, positive definite), with a central role in stochastic procedures, plays in deterministic approaches a role which can be interpreted as the attribution of "more weight" to the more accurate measurements. In the absence of accuracy quantifications, here (and frequently in practical applications) the matrix C is assumed to be the identity matrix.
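A minimal sketch of the discrepancy evaluation of Stage 6, with C taken as the identity matrix as in the text; all numerical values are purely illustrative, and the simulated responses would in practice come from the (reduced) test simulation.

```python
# Discrepancy of Stage 6 with C = identity: the quadratic form
# collapses to a plain sum of squared differences.

def discrepancy(u_exp, u_sim):
    """omega(p) = (u_exp - u_sim)^T (u_exp - u_sim)."""
    return sum((ue - us) ** 2 for ue, us in zip(u_exp, u_sim))

u_exp = [0.10, 0.35, 0.80, 1.00]   # normalised measured forces (illustrative)
u_sim = [0.12, 0.33, 0.78, 1.01]   # simulated counterparts for a trial p
omega = discrepancy(u_exp, u_sim)
```

With a non-trivial covariance matrix, each squared difference would instead be weighted by the corresponding entries of C⁻¹, giving more weight to the more accurate measurements.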
Stage 7 Selection of the minimization algorithm

The minimization problem (10) is here solved by means of a "Trust Region Algorithm" (TRA). The main features of this algorithm are outlined in what follows, while for a detailed description the reader may refer to [15,16]. The iterative sequence within TRA is initialized from a "guess" vector of parameters. Each iteration rests on quadratic programming in a two-variable space, spanned by the gradient and the Newton direction, with the objective function approximated by a quadratic form. The computations involve first derivatives only, since the Hessian matrix required to build the quadratic form is approximated through the Jacobian. The iterative sequence is terminated when a predefined tolerance criterion is satisfied on minimal changes either of the objective function or of the parameters between two iterations. TRA provides a robust and stable procedure for the minimization of an objective function which does not exhibit a large number of local minima. Such an algorithm turned out to be effective in the present context, and it was therefore preferred to computationally more "expensive" alternatives based on Genetic Algorithms [18] or Artificial Neural Networks [26], frequently employed for the minimization of non-convex or non-smooth objective functions.

Stage 8 Grid generation over the search domain

With m nodes along each of the n parameter axes, the grid over the search domain contains N = m^n nodes, reduced to (m − 1)^n if nodes on the boundary of the domain are not considered. For the present identification of n = 3 parameters, m = 6 is assumed, so that N = 125. New methods of grid generation are being investigated, see e.g. [12], in order, first, to decide the number N taking account of the admissible computational effort and, second, to distribute the node locations according to some practical criteria.
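The trust-region idea of Stage 7 can be sketched, in heavily simplified one-parameter form, as a Gauss-Newton iteration whose step is clipped to a trust radius. The toy linear model and all numbers below are assumptions for illustration; this is not the algorithm of [15,16], which works in the two-dimensional subspace spanned by the gradient and the Newton direction.

```python
# One-parameter Gauss-Newton with trust-region-style step clipping:
# first derivatives only, Hessian approximated through the Jacobian,
# termination on minimal parameter change, as in the text.

def residuals(p, t_data, u_data):
    return [u - p * t for t, u in zip(t_data, u_data)]   # toy model u = p*t

def tra_fit(p0, t_data, u_data, radius=1.0, tol=1e-10, max_iter=50):
    p = p0
    for _ in range(max_iter):
        r = residuals(p, t_data, u_data)
        J = [-t for t in t_data]                  # d r_i / d p
        g = sum(Ji * ri for Ji, ri in zip(J, r))  # gradient of 0.5*||r||^2
        h = sum(Ji * Ji for Ji in J)              # Gauss-Newton Hessian approx.
        step = -g / h
        step = max(-radius, min(radius, step))    # clip to trust radius
        if abs(step) < tol:                       # tolerance criterion
            break
        p += step
    return p

t_data = [0.0, 0.5, 1.0, 1.5, 2.0]
u_data = [0.0, 1.0, 2.0, 3.0, 4.0]                # generated with p = 2
p_hat = tra_fit(p0=0.0, t_data=t_data, u_data=u_data)
```

Starting from p0 = 0, the unclipped Gauss-Newton step would reach the optimum at once; the trust radius forces two smaller steps instead, illustrating how the radius bounds each update.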
Stage 9 Model reduction procedure The computational procedure called ''Proper Orthogonal Decomposition'' (POD) is innovative and substantially advantageous for the engineering applications considered herein. The POD procedure has been adopted already in researches and described with details in [7]. It consists of the following stages.
1. Each one of the N nodes in the parameter search domain Ω is assumed as input of a "direct analysis" which simulates the test and leads to a pseudo-experimental data vector ui.

2. By exploiting the "correlation" of the N computed vectors, a "new basis" is calculated for them, and its axes with negligible components of the vectors ui, i = 1, …, N, are dropped. In such a way a "model reduction" is performed by projection of the system responses onto a subspace with significantly reduced dimensionality. Each new vector u of measurable quantities is now approximated by its "amplitude" vector a in the new basis, through a matrix Φ generated by means of the eigenvalue computation of a symmetric matrix of order N.
3. Starting from any new parameter vector p, the corresponding vector a and the related vector u of measurable quantities can now be computed with controllable accuracy and with much smaller computational effort compared to FEM simulations. Such computation is carried out by means of "Radial Basis Function" (RBF) interpolation among the responses ui previously computed, by test simulations, at the parameter nodes pi.
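The three steps of Stage 9 can be sketched as follows, assuming a synthetic snapshot set, a Gaussian RBF kernel and an arbitrary truncation threshold; none of these choices is taken from the paper.

```python
# POD + RBF surrogate sketch: N snapshots u_i (columns of U) are compressed
# via the eigen-decomposition of D = U^T U; the response at a new parameter
# vector p is obtained by RBF interpolation of the truncated amplitudes.
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(0.0, 1.0, size=(30, 2))           # N = 30 nodes, n = 2 params
U = np.array([[np.sin(3 * p[0]) + p[1] * t for t in np.linspace(0, 1, 50)]
              for p in P]).T                       # M x N snapshot matrix

# Step 2: eigen-decomposition of D = U^T U, keep dominant modes only
D = U.T @ U
lam, V = np.linalg.eigh(D)
keep = lam > 1e-8 * lam.max()                      # truncation threshold
Phi = U @ V[:, keep] / np.sqrt(lam[keep])          # truncated orthonormal basis
A = Phi.T @ U                                      # snapshot amplitudes

# Step 3: Gaussian RBF interpolation of amplitudes over parameter space
def rbf_matrix(X, Y, c=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / c**2)

W = np.linalg.solve(rbf_matrix(P, P), A.T)         # interpolation weights

def surrogate(p):
    """Approximate response u(p) without a new FE run."""
    g = rbf_matrix(np.atleast_2d(p), P)
    return Phi @ (g @ W).ravel()

u_new = surrogate(np.array([0.4, 0.6]))
```

At the training nodes the surrogate reproduces the snapshots up to the truncation and the conditioning of the RBF system; between nodes it interpolates at negligible cost compared to a full simulation.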
The TRA + POD + RBF method briefly outlined above (and described in more detail in "Appendix A") is applied numerically here below to the identification of the parameters E, σY and H in the material model chosen in Stage 1, starting from M = 20 pairs of experimental data provided by the SPT and visualized in Fig. 2a. The above outlined model reduction procedure makes the parameter identification fast and economical, apt to be performed routinely on a small computer, possibly in situ on a structural component in service.
Stage 10 Accuracy checks

The last stage of the method proposed herein consists of checks on the estimation accuracy. Pseudo-experimental data are randomly perturbed according to the expected experimental "noise" (quantified by a probability density distribution). The consequent perturbations in the estimated parameters are quantified, by their average and standard deviation, as achieved by means of the above devised inverse analysis procedure. It should be noted that this last stage is not applied in the present paper, because real experimental data are used.
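The perturbation study of Stage 10 can be sketched as follows, with a deliberately trivial one-parameter "estimator" standing in for the full inverse analysis; the noise level and data are assumptions.

```python
# Stage 10 sketch: pseudo-experimental data are perturbed with synthetic
# Gaussian noise and the scatter of the re-identified parameter is
# summarised by its mean and standard deviation.
import random
import statistics

def identify(u_data):
    """Hypothetical estimator: here simply the mean of the measurements."""
    return sum(u_data) / len(u_data)

random.seed(42)
u_clean = [0.2, 0.5, 0.8, 1.0]        # illustrative noise-free data
noise_sd = 0.01                       # assumed noise standard deviation

estimates = []
for _ in range(500):
    u_noisy = [u + random.gauss(0.0, noise_sd) for u in u_clean]
    estimates.append(identify(u_noisy))

mean_p = statistics.mean(estimates)   # average of the estimates
sd_p = statistics.stdev(estimates)    # their standard deviation
```

In the real procedure each `identify` call would be a full TRA + POD + RBF inverse analysis, which is precisely why the cheap reduced model makes such repeated identifications affordable.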

Comparative remarks on inverse analysis procedures
In Sects. 2 and 3, respectively, experimental and computational procedures have been outlined and applied to a representative particular problem. On the basis of the SPT data visualized in Fig. 2a and specified in Sect. 2, quantitative comparisons are performed in this section by means of the computational simulations outlined in what follows.
(A) The POD + RBF + TRA procedure is applied based on the 2D mesh FE discretization shown in Fig. 3. The TRA step sequences with two diverse initializations are visualized in Fig. 7. The two different initializations concern randomly selected parameter values. In general terms, multi-start optimisation strategies should be considered for the global optimum search (see e.g. [2,24]); however, for the present applications, the fact that both initializations converged to the same results suggests that the problem is well posed, as it is not dependent on the initialization values.

(B) Same as in (A), but using TRA without POD and RBF interpolations, namely with a test simulation by FEM performed at each step of the discrepancy function minimization, in order to test the computational advantages achievable by recourse to POD.

(C) Since the correlation between the SPT and TT techniques is here worth investigating, the TT experimental data visualized in Fig. 2b have been employed as input of an inverse analysis by TRA alone, with the same initialization as in (B) and with FE test simulations based on the 2D mesh shown in Fig. 8a.
The computational analyses specified in what precedes have led to the results which are outlined, compared and commented in what follows, in order to motivate the improvements proposed here in the SPT methodology.
1. Table 3 compares the parameter estimates and the computing times of procedures (A), (B) and (C).

2. Figure 9 shows the results obtained with the methodology proposed here and by the empirical formulae listed in [20] (values normalized to those given by the uniaxial tension test), the best estimates being given by the discussed methodology.

3. In the final stages of both experiments (SPT and TT), after the maximum of the "loading" and before the ductile fracture process, there are losses of symmetry and a need for large strain modelling. These circumstances, clearly, imply higher computing times and costs, as shown in Table 3 by the comparison between computing times (B) and (C).
The following circumstances are worth noting for practical applications: the "softening" of the overall force acting on the specimen (both in SPT and TT) is primarily due to specimen stretching; the corresponding deformations start from local geometry perturbations, not predictable due to the instability of the overall response; symmetries like those adopted here in the 3D meshes for both SPT and TT simulations (Figs. 5 and 8b, respectively) can be exploited in order to reduce computing times for routine simulations of tests.

4. In the present test simulations by a commercial code (Abaqus here) the final stage is modelled merely as large strain elastoplastic behaviour. Fracture might be modelled as material detachment and crack opening growth, or estimated, in a simplified way, by Eq. (7).

5. The final stage of SP tests, with localized shrinking before fracture in the specimen, clearly would require for material characterization more experimental data than those achievable with the instruments employed so far in practice and considered in what precedes. Full-field measurements of displacements are at present possible (e.g. by "laser scanning" or "Digital Image Correlation" equipment) and have been investigated e.g. in [21]. It is worth noting that, in the presented simulations, the force-displacement curve provides enough information for the identification procedure and the measurement of the deformed shape of the SPT sample is not needed; it would, however, be desirable in the case of partial fracturing of the specimen (typically in the last stage of the test).

6. The resulting inverse problem is centred on the minimization of the objective function, therefore providing parameter values in a deterministic sense. Alternative stochastic approaches, like Kalman filters (see e.g. [23]), starting from the uncertainty within the measurements, would lead to the covariance matrix of the resulting estimates. Such approaches are not considered here.
However, the achieved parameter values are verified, in an overall qualitative sense, by comparing the (nominal) stress-(nominal) strain curves alternatively obtained by the TT experiment and by the simulation of the same uniaxial test employing the constitutive parameters assessed by the inverse analysis on SPT. The comparison is visualized in Fig. 10 and evidences an excellent agreement.

7. The results assessed on the basis of the presented identification procedure evidence the possibility to identify the complete stress-strain curve of the tested material. The material considered here exhibits a behaviour in plasticity that can be captured well by the exponential hardening model governed by the three parameters of Eqs. (1)-(3). The actual values of the assessed parameters cannot be presented, since they are protected by an industrial non-disclosure agreement, but the normalized curves show extremely good agreement between the experimentally measured and the numerically simulated stress-strain curves (see Fig. 10). This corroborates the conclusion that the proposed identification procedure can be employed also to assess the material properties of other ductile metals. Furthermore, on the experimentally measured curve (see Fig. 6) the segments corresponding to small and large plastic deformations are well distinguishable. It is thus possible to use SPT, in combination with an inverse analysis procedure, to quantify the behaviour of materials that exhibit a yield plateau, like mild steel. Within the proposed procedure, this modification is straightforward and consists in the use of an appropriate constitutive model, e.g. the one specified by Eqs. (4)-(5) or, in the presence of a softening material behaviour, the one described by Eqs. (6)-(7).

Closing remarks
With the limitations specified in the Introduction (Sect. 1), the following improvements to the present Small Punch Test (SPT) methodology have been envisaged in what precedes and motivated by experimental and computational analyses and by the good results obtained (in comparison with the empirical formulae adopted in present practice).

1. A model reduction procedure, specifically Proper Orthogonal Decomposition (POD), followed by discrepancy function minimization as developed herein, makes much more economical the inverse analyses apt to routinely estimate parameters in popular inelastic constitutive models for metallic materials in structural components.

2. For the identification of the parameters governing the softening phase of the material's inelastic behaviour, occurring after the ductility stage, three-dimensional simulations of the test become desirable or necessary in practical applications. Such simulations can be carried out while still preserving the POD procedure proposed here and by exploiting the symmetries, consistently with the role of unknown local perturbations. The final softening phase of SPT implies large strains, ductility exhaustion and fracture modelling in the specimen.
Closely connected with the present subject, further contributions to improvements of the state-of-the-art SPT methodology may concern the calibration of material models with time dependence (creep primarily, and dynamics as well); these are subjects of present investigations, related particularly to metallic components in service.

A problem arising in engineering applications of inverse analysis concerns the consequences of experimental errors on the parameter estimates. Within the deterministic methodology considered in this study, by exploiting the computational efficiency acquired through POD, an answer to the above problem can be achieved as follows: many parameter identifications are performed starting from randomly perturbed experimental data (such randomness quantifies the measurement accuracy); the consequent "perturbations" in the estimates are quantified by the mean value and standard deviation of their probability density distribution. In the industrial context considered in this study, namely plant components of structural steels, the experimental data generated by SPT as input of inverse analyses turn out to be affected by remarkable uncertainties, rooted in the metallurgical background, besides those expected as usual in measurements by the SPT instruments. Such circumstances make stochastic approaches highly desirable for parameter identification in engineering practice (see e.g. [28]). A timely approach would lead to Kalman filter applications, namely an evolutive transition from the experimental data and their covariance matrix to the parameter estimates and their covariance matrix. Such a practically useful prospect will be the subject of subsequent research.

A: On Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) interpolation
The Proper Orthogonal Decomposition (POD) procedure adopted here is rooted originally in mathematics oriented to economics. Details related to the present purposes can be found in a broad literature, particularly in [7,8,36,41]. By assuming the parameters p_i at each node of a grid generated on the search domain, a test simulation through direct analysis, usually by FEM, leads to the vector u_i containing the pseudo-experimental data (the ''snapshot'' corresponding to p_i). Let an M × N matrix U gather all such snapshots as columns.
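A minimal sketch of this snapshot-collection stage is given below; the `simulate` function, the two-parameter toy response and the grid bounds are illustrative stand-ins for the FEM direct analysis and the actual search domain.

```python
import numpy as np
from itertools import product

# Hypothetical stand-in for the FEM test simulation: it maps a parameter
# vector p to M measurable quantities (e.g. the punch force sampled at
# M displacement levels in an SPT).
M = 100
t = np.linspace(0.0, 1.0, M)

def simulate(p):
    # Toy response: "stiffness" p[0] and "hardening" p[1].
    return p[0] * t + p[1] * t**2

# Grid of N parameter nodes covering the rectangular search domain.
p1_vals = np.linspace(1.0, 3.0, 5)
p2_vals = np.linspace(0.1, 0.5, 5)
p_nodes = np.array(list(product(p1_vals, p2_vals)))  # N x 2, N = 25

# M x N snapshot matrix U: one column ("snapshot") per grid node.
U = np.column_stack([simulate(p) for p in p_nodes])
print(U.shape)  # (100, 25)
```

The N simulations are the only expensive computations; they are performed once-for-all in the offline stage.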
The responses u_i of the tested system to the same given external actions, but with diverse parameters p_i internal to the search domain, turn out to be correlated, namely ''almost parallel'' in their space. Such correlation is clearly physically motivated in the present context and can easily be checked in matrix U.
Within the M-dimensional space of the snapshots, a new reference axis is singled out by maximizing, with respect to all directions, a Euclidean norm of the projections onto it of all N snapshots u_i. Then another axis is found by a similar maximization over the set of all directions orthogonal to the one singled out above. A sequence of such optimizations leads to a new reference system, or new basis, analytically described by an orthonormal matrix Φ of order M such that:

U = Φ A    (11)

where the M × N matrix A gathers as columns the vectors (called ''amplitudes'' a_i in the POD jargon) which describe the snapshots u_i in the new basis. The above-mentioned correlation among test responses with different parameters within their search domain motivates large differences among the amplitude components. Therefore a meaningful simplification is achieved by the removal of the axes with negligible components in the new basis. Operative details can be found in [14]; here only the main features are mentioned, namely: the above truncation is based on the computation of the eigenvalues λ_i (i = 1...N) of the matrix D = U^T U (of order N, symmetric, positive definite or semidefinite) and on the preservation of the axes corresponding to the λ_i larger by orders of magnitude than the smallest eigenvalues.
Thus a truncated basis (a matrix Φ̂ of order M × K, with K ≪ M) is generated for approximations of the test responses u_i through their dependence on reduced amplitudes â_i, namely:

u_i ≈ Φ̂ â_i    (12)

Approximation errors implied by the above truncation can be evaluated by comparing the sum of the eigenvalues λ_i (i = 1...K) related to the K preserved directions (or ''modes'') to the sum of all the original ones. The truncated basis Φ̂ exhibits the mathematical features of the original basis Φ expressed by Eq. (11); therefore the reduced amplitude â_i of any snapshot u_i can be computed, with the approximation as in Eq. (12), as:

â_i = Φ̂^T u_i    (13)

The ''model reduction'' procedure outlined above concerns the set of the N parameter vectors p_i pre-selected as grid nodes in the search domain. Such reduction can be done once-for-all, in view of repeated practical applications of inverse analyses. The minimization of the discrepancy function ω(p), Eqs. (9) and (10) in Sect. 3, requires a high number of test simulations, as underlined earlier, both if a GA is employed and if an algorithm of mathematical programming, such as the TRA, is adopted. Such a practical difficulty can be overcome by means of the computational provisions summarized in what follows.
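The truncation just described can be sketched with NumPy on synthetic, strongly correlated snapshots; the relative eigenvalue threshold (here 1e-8) plays the role of the ''orders of magnitude'' criterion and is an assumed choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, strongly correlated snapshots (M x N): every column is a
# combination of only two underlying "modes", t and t**2.
M, N = 100, 25
t = np.linspace(0.0, 1.0, M)
U = np.column_stack([
    (1.0 + 0.1 * rng.standard_normal()) * t
    + 0.1 * rng.standard_normal() * t**2
    for _ in range(N)
])

# Eigenvalues of D = U^T U (order N, symmetric positive semidefinite).
D = U.T @ U
lam, V = np.linalg.eigh(D)      # ascending order
lam, V = lam[::-1], V[:, ::-1]  # largest eigenvalues first

# Keep the K modes dominating by orders of magnitude (assumed threshold).
K = int(np.sum(lam > 1e-8 * lam[0]))

# Truncated orthonormal basis (M x K) and reduced amplitudes (K x N).
Phi_hat = U @ V[:, :K] / np.sqrt(lam[:K])
A_hat = Phi_hat.T @ U

# Truncation-error measure: preserved fraction of the eigenvalue sum.
energy = lam[:K].sum() / lam.sum()
print(K)  # 2
```

Here only two eigenvalues are non-negligible, so the 100-component snapshots are represented, practically without loss, by two amplitudes each.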
For each parameter grid node p_i (i = 1...N) a Radial Basis Function (RBF) g_i(p) is considered, Eq. (14), centred on p_i and depending only on the distance ‖p − p_i‖ through a ''smoothing coefficient'' r to be calibrated once-for-all (see e.g. [6]); for the present purposes r = 0.05 is assumed. Each component â^k_j (k = 1...K; j = 1...N) of the reduced amplitude vector â_j corresponding to the node parameters p_j, defined by Eq. (13), is expressed as a linear combination of the values acquired there by the RBFs of Eq. (14):

â^k_j = Σ_i b^k_i g_i(p_j),   i.e.   Â = B G    (15)

The above Eq. (15) consists of K × N linear equations in the K × N unknowns b^k_i, gathered in the matrix B; the matrix G contains the known values g_i(p_j) of all the RBFs at all N parameter nodes p_j of the grid over the search domain.
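Equation (15) can be solved once-for-all as a linear system. The sketch below assumes Gaussian RBFs (the specific radial function used in practice may differ) and synthetic stand-ins for the grid nodes and amplitudes.

```python
import numpy as np

# Grid nodes (N x 2) and reduced amplitudes (K x N) are assumed to come
# from the POD stage; synthetic stand-ins are used here.
rng = np.random.default_rng(1)
p_nodes = rng.uniform(0.0, 1.0, size=(25, 2))  # N = 25 nodes
A_hat = rng.standard_normal((3, 25))           # K = 3 amplitudes

r = 0.05  # smoothing coefficient, calibrated once-for-all

def g(p):
    # Gaussian RBFs centred at the grid nodes (an assumed radial form).
    d2 = np.sum((p_nodes - p)**2, axis=1)
    return np.exp(-d2 / (2.0 * r**2))

# G[i, j] = g_i(p_j): all N RBFs evaluated at all N grid nodes.
G = np.column_stack([g(p_j) for p_j in p_nodes])  # N x N

# Solve A_hat = B @ G once-for-all for the K x N coefficient matrix B.
B = np.linalg.solve(G.T, A_hat.T).T

# Interpolation is exact at the grid nodes.
print(np.allclose(B @ G, A_hat))  # True
```

Since G depends only on the pre-selected grid, B can be stored and reused for every subsequent inverse analysis.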
The simple solution of Eq. (15) provides the coefficients b^k_i for the linear combination which leads from any new parameter vector p, outside the grid nodes, to the relevant reduced amplitudes â of the snapshot u. This vector u quantifies the pseudo-experimental data resulting from the test simulation based on the parameters contained in that vector p, namely:

u(p) = Φ̂ â = Φ̂ B g(p)    (16)

In this final formula, the vector g gathers the N values g_i(p) (i = 1...N) acquired by the RBF centred on p at all the grid nodes, Eq. (14). Once the matrix B is available, since it is provided by the once-for-all solution of Eq. (15), any direct analysis (i.e. any test simulation leading to the measurable quantities) can be carried out by Eq. (16), rather than by FEM or by other methods. Consequently, the computing times become shorter by several orders of magnitude, at comparable accuracy. Clearly, the practical benefits for parameter identifications by means of either the TRA or a GA are significant. The above circumstance implies substantial computational advantages also when an ANN is adopted for fast inverse analyses, since the ANN input may consist of the amplitude vector â, which represents the snapshot u with a much smaller number of components.
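Putting the pieces together, the once-for-all offline stage (snapshots, POD basis, RBF coefficients) and the online evaluation in the style of Eq. (16) can be sketched end-to-end; a SciPy trust-region least-squares solver stands in for the TRA, and a one-parameter toy model replaces the FEM, so all names and numerical choices below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy one-parameter model in place of the FEM direct analysis.
M = 80
t = np.linspace(0.0, 1.0, M)

def simulate(p):
    return p * t + 0.3 * p**2 * t**2

# Offline stage (once-for-all): snapshots, POD basis, RBF coefficients.
p_nodes = np.linspace(0.5, 3.0, 15)  # N = 15 grid nodes
U = np.column_stack([simulate(p) for p in p_nodes])
lam, V = np.linalg.eigh(U.T @ U)
lam, V = lam[::-1], V[:, ::-1]
K = int(np.sum(lam > 1e-10 * lam[0]))
Phi_hat = U @ V[:, :K] / np.sqrt(lam[:K])
A_hat = Phi_hat.T @ U

r = 0.25  # smoothing coefficient (assumed value for this toy grid)
def g(p):
    # Gaussian RBFs centred at the grid nodes (an assumed radial form).
    return np.exp(-(p_nodes - p)**2 / (2.0 * r**2))
G = np.column_stack([g(pj) for pj in p_nodes])
B = np.linalg.solve(G.T, A_hat.T).T

# Online stage, as in Eq. (16): a test simulation without any FEM call.
def rom(p):
    return Phi_hat @ (B @ g(p))

# Inverse analysis: trust-region least squares on the discrepancy.
u_meas = simulate(1.7)  # "experimental" data for p_true = 1.7
res = least_squares(lambda p: rom(p[0]) - u_meas,
                    x0=[1.0], bounds=([0.5], [3.0]), method='trf')
print(res.x[0])
```

Every residual evaluation inside the minimization costs only a few matrix-vector products, which is what makes the repeated identifications of the error-propagation study affordable.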