1 Introduction

Routes of Application

At least two routes of direct application are currently enabled by Meta-Modelling, namely decision making and the evaluation of description models. While the calculation of multi-objective weighted criteria resulting in a single algebraic value applies to decision making, multi-parameter exploration of the values of one selected criterion is used to evaluate the mathematical model that was used to generate the Meta-Model.

Visual exploration and Dimensionality Reduction

More sophisticated usage of Meta-Modelling deals with visual exploration and data manipulation such as dimensionality reduction. Tools for viewing multidimensional data (Asimov 2011) are well known from the literature. Visual exploration of high-dimensional scalar functions (Gerber 2010) today focuses on a steepest-gradient representation on a global support, also called the Morse-Smale complex. The scalar function represents the value of the criterion as a function of the different parameters. As a result, at least one trace of steepest gradient is visualized connecting an optimum with a minimum of the scalar function. Typically, the global optimum performance of the system, which is represented by a specific point in the parameter space, can be traced back along different traces corresponding to the different minima. These traces can be followed visually through the high-dimensional parameter space, revealing the technical parameters or physical reasons for any deviation from the optimum performance.

Analytical methods for dimensionality reduction, e.g. the well-known Buckingham Π-theorem (Buckingham 1914), have been applied for 100 years to determine the dimensionality as well as the possible dimensionless groups of parameters. Buckingham’s ideas can be transferred to data models. As a result, methods for estimating the dimension of data models (Schulz 1978), for dimensionality reduction of data models, as well as for identification of suitable data representations (Belkin 2003) have been developed.

Value chain and discrete to continuous support

The value chain of Meta-Modelling related to decision making enables the benefit rating of alternative decisions based on improvements resulting from iterative design optimization, including model prediction and experimental trial. One essential contribution of Meta-Modelling is to overcome the drawback of experimental trials, which generate only sparse data in a high-dimensional parameter space. Models from mathematical physics are intended to provide criterion data for parameter values dense enough for successful Meta-Modelling. Interpolation in Meta-Modelling changes the discrete support of the parameters (sampling data) to a continuous support. As a result of the continuous support, rigorous mathematical methods for data manipulation become applicable, generating virtual propositions.

Resolve the dichotomy of cybernetic and deterministic approaches

Meta-Modelling can be seen as a route to resolve the separation between cybernetic/empirical and deterministic/rigorous approaches, to bring them together, and to make use of the advantages of both. The rigorous methods involved in Meta-Modelling may introduce heuristic elements into empirical approaches, and the analysis of data, e.g. via sensitivity measures, may reveal the sound basis of empirical findings and give hints to reduce the dimension of Meta-Models, or at least to partially estimate the structure of the solution that is not obvious from the underlying experimental/numerical data or mathematical equations.

2 Meta-Modelling Methods

In order to gain better insight and improve the quality of the process, the procedure of conceptual design is applied. Conceptual design is defined as creating new innovative concepts from simulation data (Currie 2005). It allows creating and extracting specific rules that potentially explain complex processes, depending on industrial needs.

Before applying this concept, the developers validate their model by performing one single simulation run (see the application on sheet metal drilling below), fitting one model parameter to experimental evidence. This approach requires sound phenomenological insight. Instead of fitting the model parameter to experimental evidence, complex multi-physics numerical calculations can be used to fit the empirical parameters of the reduced model. This requires considerable scientific modelling effort in order to achieve good results that are comparable to real-life experimental investigation.

Once the model is validated and good results are achieved, conceptual design analysis becomes possible, either to understand the complexity of the process, to optimize it, or to detect dependencies. The conceptual design analysis is based on simulations that are performed for different parameter settings within the full design space. This allows for a complete overview of the solution properties that contribute to the design optimization processes (Auerbach et al. 2011). However, the challenge arises when either the number of parameters increases or the time required for each single simulation grows.

These drawbacks can be overcome by the development of fast approximation models, which are called metamodels. These metamodels mimic the real behavior of the simulation model by considering only the input-output relationship in a simpler manner than the full simulation (Reinhard 2014). Although the metamodel is not as accurate as the simulation model, it is still possible to analyze the process with decreased time constraints, since the developer is looking for tendencies or patterns rather than exact values. This allows analyzing the simulation model much faster with controlled accuracy.

Meta-Modelling techniques rely on generating and selecting the appropriate model for different processes. They basically consist of three fundamental steps: (1) the creation and extraction of simulation data (sampling), (2) the mapping of the discrete sampling points into a continuous relationship (interpolation), and (3) visualization of and user interaction with this continuous mapping (exploration):

1. Sampling

2. Interpolation

3. Exploration.

2.1 Sampling

Sampling is concerned with the selection of discrete data sets that contain both input and output of a process in order to estimate or extract characteristics or dependencies. The procedure of efficiently sampling the parameter space is addressed by many Design of Experiments (DOE) techniques. A survey on DOE methods focusing on likelihood methods can be found in the contribution of Ferrari et al. (Ferrari 2013). The basic form is the Factorial Design (FD), where data is collected for all possible combinations of predefined sampling levels over the full parameter space (Box and Hunter 1978).

However, for a high-dimensional parameter space, the size of the FD data set increases exponentially with the number of parameters considered. This leads to the well-known term “curse of dimensionality” coined by Bellman (Bellman 1957): an unmanageable number of runs would have to be conducted to sample the parameter space adequately. When the simulation runs are time consuming, FD designs can be inefficient or even inappropriate for simulation models (Kleijnen 1957).

The suitable techniques for simulation DOE are those whose sampling points are spread over the entire design space. They are known as space-filling designs (Box and Hunter 1978); the two best-known methods are orthogonal arrays and the Latin Hypercube design.

The appropriate sample size depends not only on the dimension of the parameter space but also on the computational time for a simulation run. This is due to the fact that a complex nonlinear function requires more sampling points. A proper way to use these DOE techniques in simulation is to maximize the minimum Euclidean distance between the sampling points, so that the developer can guarantee that the sampling points are spread over all regions of the parameter space (Jurecka 2007).
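To make the maximin criterion concrete, the following sketch (Python/NumPy; the candidate-set size and the helper names are illustrative choices, not taken from the cited works) draws several random Latin Hypercube designs in the unit cube and keeps the one with the largest minimum pairwise Euclidean distance:

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng):
    """One random Latin Hypercube sample in the unit cube [0, 1]^d."""
    # Independent random permutation of the strata for each dimension,
    # plus a random jitter within each stratum.
    perms = np.argsort(rng.random((n_points, n_dims)), axis=0)
    return (perms + rng.random((n_points, n_dims))) / n_points

def min_pairwise_distance(points):
    """Smallest Euclidean distance between any two sampling points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min()

def maximin_lhs(n_points, n_dims, n_candidates=100, seed=0):
    """Among several random LHS candidates, keep the design that maximizes
    the minimum pairwise distance (space-filling criterion)."""
    rng = np.random.default_rng(seed)
    candidates = [latin_hypercube(n_points, n_dims, rng) for _ in range(n_candidates)]
    return max(candidates, key=min_pairwise_distance)

design = maximin_lhs(n_points=49, n_dims=2)   # e.g. 49 points in a 2D parameter space
print(design.shape, round(min_pairwise_distance(design), 3))
```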

2.2 Interpolation

The process in which the deterministic discrete points are transformed into a connected continuous function is called interpolation. One important aspect for Virtual Production Intelligence (VPI) systems is the availability of interpolating models that represent the process behavior (Reinhard 2013), which are the metamodels. In VPI, metamodeling techniques offer excellent possibilities for describing the process behavior of technical systems (Jurecka 2007; Chen 2001), since Meta-Modelling defines a procedure to analyze and simulate involved physical systems using fast mathematical models (Sacks 1989). These mathematical models create cheap numeric surrogates that describe the cause-effect relationship between setting parameters as input and product quality variables as output for manufacturing processes. Among the available Meta-Modelling techniques are Artificial Neural Networks (Haykin 2009), Linear Regression/Taylor Expansion (Montgomery et al. 2012), Kriging (Jones 1998; Sacks 1989; Lophaven 1989), and radial basis function networks (RBFNs). The RBFN is well known for its accuracy and its ability to generate multidimensional interpolations for complex nonlinear problems (Rippa 1999; Mongillo 2010; Orr 1996). A radial basis function interpolation, represented in Fig. 6.1 below, is similar to a three-layer feed-forward neural network. It consists of an input layer which is modeled as a vector of real numbers, a hidden layer that contains nonlinear basis functions, and an output layer which is a scalar function of the input vector.

Fig. 6.1

Architecture of a Radial Basis Function Network (RBFN): given the parameter values xi, the basis functions hi(x), and the scalar output f(x), solve for the weights wi

The output of the network f(x) is given by:

$$ f(x) = \sum\limits_{i = 1}^{n} {w_{i} h_{i} \left( x \right)} $$
(6.1)

where \( n \), \( h_{i} \), and \( w_{i} \) correspond to the number of sampling points of the training set, the ith basis function, and the ith weight, respectively. The RBF methodology was introduced in 1971 by Rolland Hardy, who originally presented the method for the multi-quadric (MQ) radial function (Hardy 1971). The method emerged from a cartography problem, where a bivariate interpolant of sparse and scattered data was needed to represent topography and produce contours. However, none of the existing interpolation methods (Fourier, polynomial, bivariate splines) were satisfactory because they were either too smooth or too oscillatory (Hardy 1990). Furthermore, the non-singularity of their interpolation matrices was not guaranteed. In fact, Haar’s theorem implies that, in two or higher dimensions, there exist sets of distinct nodes for which the interpolation matrix associated with node-independent basis functions is singular (McLeod 1998). In 1982, Richard Franke popularized the MQ method with his report on 32 of the most commonly used interpolation methods (Franke 1982). Franke also conjectured the unconditional non-singularity of the interpolation matrix associated with the multi-quadric radial function, which was later proved by Micchelli (Micchelli 1986). The multi-quadric function is used for the basis functions \( h_{i} \):

$$ h_{i} \left( x \right) = \sqrt {1 + \frac{{\left( {x - x_{i} } \right)^{\rm T} \left( {x - x_{i} } \right)}}{{r^{2} }}} $$
(6.2)

where \( x_{i} \) and \( r \) represent the ith sampling point and the width of the basis function, respectively. The shape parameter \( r \) controls the width of the basis function: the larger the parameter, the wider the function becomes. This is illustrated in Fig. 6.2 below.

Fig. 6.2

Multi-quadric function centered at xi = 0 with different widths r = 0.1, 1, 2, 8
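As a minimal numerical illustration of Eq. (6.2) and Fig. 6.2, the multi-quadric basis function can be evaluated for the widths shown in the figure (a sketch in Python/NumPy; the evaluation grid is an arbitrary choice):

```python
import numpy as np

def multiquadric(x, x_i, r):
    """Multi-quadric basis function h_i(x) of Eq. (6.2), centered at x_i with width r."""
    x, x_i = np.atleast_1d(np.asarray(x, float)), np.atleast_1d(np.asarray(x_i, float))
    return float(np.sqrt(1.0 + np.dot(x - x_i, x - x_i) / r**2))

x_grid = np.linspace(-2.0, 2.0, 5)
for r in (0.1, 1.0, 2.0, 8.0):                 # widths used in Fig. 6.2
    values = [round(multiquadric(x, 0.0, r), 2) for x in x_grid]
    print(f"r = {r}: {values}")
```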

The learning of the network is performed by applying the method of least squares with the aim of minimizing the sum squared error with respect to the weights \( {\text{w}}_{\text{i}} \) of the model (Orr 1996). Thus, the learning/training is done by minimizing the cost function

$$ C = \sum\limits_{i = 1}^{n} {\left( {y_{i} - f\left( {x_{i} } \right)} \right)^{2} } + \sum\limits_{i = 1}^{n} {\lambda \cdot w_{i}^{2} \to { \hbox{min} }} $$
(6.3)

where \( \lambda \) is the usual regularization parameter and \( y_{i} \) are the criterion values at the sampling points. Solving the minimization problem above yields

$$ w = \left( {{\rm H}^{\rm T} {\rm H} +\Lambda } \right)^{ - 1} {\rm H}^{\rm T} y $$
(6.4)

with

$$ {\rm H} = \left[ {\begin{array}{*{20}c} {h_{1} \left( {x_{1} } \right)} & {h_{2} \left( {x_{1} } \right)} & \ldots & {h_{n} \left( {x_{1} } \right)} \\ {h_{1} \left( {x_{2} } \right)} & {h_{2} \left( {x_{2} } \right)} & \ldots & {h_{n} \left( {x_{2} } \right)} \\ \vdots & \vdots & \ddots & \vdots \\ {h_{1} \left( {x_{n} } \right)} & {h_{2} \left( {x_{n} } \right)} & \ldots & {h_{n} \left( {x_{n} } \right)} \\ \end{array} } \right]\quad \quad\Lambda = \left[ {\begin{array}{*{20}c} \lambda & 0 & \ldots & 0 \\ 0 & \lambda & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \lambda \\ \end{array} } \right] $$
(6.5)

and

$$ y = \left( {y_{1} ,y_{2} , \ldots ,y_{n} } \right) $$
(6.6)

The chosen width of the radial basis function plays an important role in obtaining a good approximation. The following selection of the \( r \) value was proposed by Hardy (1971) and adopted for this study:

$$ r = 0.81 \cdot d,\quad d = \frac{1}{n}\sum\limits_{i = 1}^{n} {d_{i} } $$
(6.7)

and \( {\text{d}}_{\text{i}} \) is the distance between the ith data point and its nearest neighbor.
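Equations (6.1)–(6.7) translate directly into a small training routine. The sketch below (Python/NumPy) assembles the design matrix H, applies the regularized least-squares solution for the weights, and uses Hardy's width rule; the one-dimensional test function at the end is a synthetic example and not data from the applications discussed later.

```python
import numpy as np

def multiquadric_matrix(x_eval, x_train, r):
    """Design matrix with H[j, i] = h_i(x_j), Eqs. (6.2) and (6.5)."""
    diff = x_eval[:, None, :] - x_train[None, :, :]
    return np.sqrt(1.0 + (diff ** 2).sum(axis=-1) / r**2)

def hardy_width(x_train):
    """Hardy's rule r = 0.81 * d, Eq. (6.7), with d the mean nearest-neighbor distance."""
    dist = np.sqrt(((x_train[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return 0.81 * dist.min(axis=1).mean()

def train_rbfn(x_train, y_train, lam=1e-6):
    """Solve w = (H^T H + Lambda)^-1 H^T y, Eq. (6.4); lam is an illustrative value."""
    r = hardy_width(x_train)
    H = multiquadric_matrix(x_train, x_train, r)
    A = H.T @ H + lam * np.eye(len(x_train))
    return np.linalg.solve(A, H.T @ y_train), r

def predict_rbfn(x_eval, x_train, w, r):
    """Metamodel output f(x) = sum_i w_i h_i(x), Eq. (6.1)."""
    return multiquadric_matrix(x_eval, x_train, r) @ w

# Illustrative usage with a synthetic 1D criterion (assumption, not process data):
rng = np.random.default_rng(1)
x_train = rng.random((30, 1))
y_train = np.sin(6.0 * x_train[:, 0]) + 0.05 * rng.standard_normal(30)
w, r = train_rbfn(x_train, y_train)
x_new = np.linspace(0.0, 1.0, 5)[:, None]
print(np.round(predict_rbfn(x_new, x_train, w, r), 3))
```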

2.3 Exploration

Visualization is essential for analyzing huge sets of data and enables efficient decision making. Therefore, multi-dimensional exploration and visualization tools are needed. 2D contour plots or 3D cube plots can easily be generated by any conventional mathematical software. However, the visualization of high-dimensional simulation data remains a core field of interest. An innovative method was developed by Gebhardt (2013) for the Virtual Production Intelligence (VPI) in the second phase of the Cluster of Excellence “Integrative Production Technology for High-Wage Countries”. It relies on a hyperslice-based visualization approach that uses hyperslices in combination with direct volume rendering. The tool not only allows visualizing the metamodel together with the training points and the gradient trajectory, but also assures a fast navigation that helps in extracting rules from the metamodel, hence offering a user interface. The tool was developed for the virtual reality platform of RWTH Aachen known as the aixCAVE. Another interesting method, the Morse-Smale complex, can also be used. It captures the behavior of the gradient of a scalar function on a high-dimensional manifold (Gerber 2010) and thus can give a quick overview of high-dimensional relationships.
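A basic exploration view of the kind mentioned above is a 2D contour slice of a metamodel, obtained by fixing all but two parameters. The sketch below (Python/Matplotlib) treats the metamodel as a generic scalar function and uses a synthetic five-parameter criterion; it is a minimal illustration, not the hyperslice tool or the aixCAVE application described here.

```python
import numpy as np
import matplotlib.pyplot as plt

def contour_slice(metamodel, fixed, free=(0, 1), grid=80):
    """2D contour slice of a metamodel f: R^d -> R over two free parameters,
    with all remaining parameters held at the values given in `fixed`."""
    u = np.linspace(0.0, 1.0, grid)
    U, V = np.meshgrid(u, u)
    x_eval = np.tile(np.asarray(fixed, dtype=float), (grid * grid, 1))
    x_eval[:, free[0]] = U.ravel()
    x_eval[:, free[1]] = V.ravel()
    Z = np.apply_along_axis(metamodel, 1, x_eval).reshape(grid, grid)
    cs = plt.contourf(U, V, Z, levels=20)
    plt.colorbar(cs, label="criterion value")
    plt.xlabel(f"parameter {free[0]} (normalized)")
    plt.ylabel(f"parameter {free[1]} (normalized)")
    plt.show()

# Illustrative usage with a synthetic 5-parameter criterion (assumption):
f = lambda x: np.sin(4 * x[0]) * np.cos(3 * x[1]) + 0.2 * x[2:].sum()
contour_slice(f, fixed=[0.5, 0.5, 0.5, 0.5, 0.5])
```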

3 Applications

In this section, the metamodeling techniques are applied to different laser manufacturing processes. In the first two applications (laser sheet metal cutting and laser epoxy cutting), a data-driven metamodeling process was used: the models were treated as black boxes and a learning process was applied directly to the data. In the last two applications (drilling and glass ablation), a model-driven metamodeling process was applied.

The goal of this section is to highlight the importance of using the proper metamodeling technique in order to generate a specific metamodel for every process. The developer should realize that generating a metamodel is a demanding procedure that involves compromises between many criteria, and that the metamodel with the greatest accuracy is not necessarily the best choice. The proper metamodel is the one that best fits the developer's needs. These needs have to be prioritized according to characteristics or criteria defined by Franke (1982). The major criteria are accuracy, speed, storage, visual aspects, sensitivity to parameters, and ease of implementation.

3.1 Sheet Metal Cutting with Laser Radiation

The major quality criterion in laser cutting applications is the formation of adherent dross and ripple structures on the cutting kerf surface, accompanied by a set of properties like gas consumption, robustness with respect to the most sensitive parameters, nozzle standoff distance, and others. The ripples, measured by the cut surface roughness, are generated by the fluctuations of the melt flow during the process. One of the main research demands is to choose parameter settings for the beam-shaping optics that minimize the ripple height and the changes of the ripple structure on the cut surface. A simulation tool called QuCut reveals the occurrence of ripple formation at the cutting front and defines a measure for the roughness on the cutting kerf surface. QuCut was developed at Fraunhofer ILT and the department Nonlinear Dynamics of Laser Processing (NLD) at RWTH Aachen as a numerical simulation tool for CW laser cutting that takes into account spatially distributed laser radiation. The goal of this use case was to find the optimal parameters of certain laser optics that result in a minimal ripple structure (i.e. roughness). The 5 design parameters of the laser optics (i.e. the dimensions of the \( x \) vector in formulas (6.1)–(6.5)) investigated here are the beam quality, the astigmatism, the focal position, and the beam radii in x and y directions of the elliptical laser beam under consideration. The properties of the fractional factorial design are listed in Table 6.1.

Table 6.1 Process design domain

The selected criterion (i.e. the \( y \) vector in formulas (6.3)–(6.5)) was the surface roughness (Rz in µm) simulated at 7 mm depth of an 8 mm workpiece. The full data set comprised 24948 samples in total. In order to assess the quality of the mathematical interpolation, 5 different RBFN metamodels were generated from 5 randomly selected sample sets of size 1100, 3300, 5500, 11100 and 24948 data points from the total dataset. As shown in Fig. 6.3, these metamodels are denoted by Metamodel (A–E). Metamodel F, which is a 2D metamodel with finer sampling points, denoted by the blue points, is used as a reference for comparison.

Fig. 6.3

2D Contour Plots of different metamodels at M2 = 10, Astigmatism = 25 mm, Beam Radius y = 134 µm. The polynomial linear regression metamodel (F) on the right contains more sampling points and is shown here for evaluation of the metamodel quality (A–E)

A 2-fold cross-validation method was then used to assess the quality of the metamodeling, where 10 % of the training point sample was left out of the interpolation step at random and used for validation purposes. The Mean Absolute Error (MAE) of the criterion surface roughness and the coefficient of determination (R2) were then calculated and compared to each other. The results are listed in Table 6.2.

Table 6.2 Data of the metamodels
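The hold-out validation described above can be sketched generically as follows (Python/NumPy); the metamodel is treated as an abstract fit/predict pair, the 10 % split fraction is taken from the text, and the nearest-neighbour stand-in in the usage example is purely illustrative.

```python
import numpy as np

def holdout_scores(x, y, fit, predict, holdout=0.10, seed=0):
    """Leave out a random fraction of the sampling points, train on the rest,
    and report MAE and coefficient of determination R^2 on the held-out points."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = max(1, int(round(holdout * len(x))))
    val, train = idx[:n_val], idx[n_val:]
    model = fit(x[train], y[train])
    y_pred = predict(model, x[val])
    mae = np.abs(y[val] - y_pred).mean()
    ss_res = ((y[val] - y_pred) ** 2).sum()
    ss_tot = ((y[val] - y[val].mean()) ** 2).sum()
    return mae, 1.0 - ss_res / ss_tot

# Illustrative usage with a nearest-neighbour "metamodel" (assumption, not RBFN):
fit = lambda xt, yt: (xt, yt)
predict = lambda m, xq: m[1][np.abs(m[0][:, None] - xq[None, :]).argmin(axis=0)]
x = np.linspace(0, 1, 200); y = np.sin(6 * x)
print(holdout_scores(x, y, fit, predict))
```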

The results show that the quality of the metamodel depends on the number of sampling points; the quality improves when the number of training points is increased. As a visualization technique, contour plots were used, which in their entirety form the process map. The star-shaped marker, denoting the seed point of the investigation, represents the current cutting parameter settings, and the arrow trajectory shows how an improvement in the cut quality is achieved. The results show that in order to minimize the cutting surface roughness in the vicinity of the seed point, the beam radius in the feed direction x should be decreased and the focal position should be increased (Eppelt and Al Khawli 2014). In the special application case studied here, the minimum number of sampling points with an RBFN model is already a good choice for finding an optimized working point for the laser cutting process. These metamodels have different accuracy values, but having an overview of the generated tendency can support the developer in the decision-making step.
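The improvement direction indicated by the arrow trajectory can be reproduced on any metamodel by a few steps of steepest descent from the seed point. The sketch below (Python/NumPy, central finite-difference gradients, a synthetic roughness-like function) is a generic illustration of this idea and not the actual QuCut process map:

```python
import numpy as np

def gradient(f, x, h=1e-4):
    """Central finite-difference gradient of a scalar metamodel f at point x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def descend(f, seed, step=0.05, n_steps=25):
    """Trace of steepest-descent points starting at the seed parameter setting."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        x = path[-1]
        g = gradient(f, x)
        path.append(np.clip(x - step * g / (np.linalg.norm(g) + 1e-12), 0.0, 1.0))
    return np.array(path)

# Illustrative criterion: roughness-like surface over two normalized parameters:
roughness = lambda x: (x[0] - 0.3) ** 2 + 0.5 * (x[1] - 0.7) ** 2
trace = descend(roughness, seed=[0.8, 0.2])
print(trace[0], "->", trace[-1])   # moves toward the minimum at (0.3, 0.7)
```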

3.2 Laser Epoxy Cut

One of the challenges in cutting glass fiber reinforced plastic with a pulsed laser beam is to estimate the achievable cutting qualities. An important factor for process improvement is first to detect the physical cutting limits and then to minimize the damage thickness of the epoxy-glass material. EpoxyCut, a tool also developed at Fraunhofer ILT and the department Nonlinear Dynamics of Laser Processing (NLD) at RWTH Aachen, is a global reduced model that calculates the upper and lower cutting width, in addition to other criteria like the melting threshold, the time required to cut through, and the damage thickness. The goal of this test case was to generate a metamodel of the process in order to: (i) minimize the lower cutting width; (ii) detect the cutting limits; and (iii) efficiently generate an accurate metamodel while using the minimal number of simulation runs. The process parameters are the pulse duration, the laser power, the focal position, the beam diameter and the Rayleigh length. In order to better illustrate the idea of the smart sampling technique, the focal position, the beam diameter, and the Rayleigh length were fixed. In order to generate a fine metamodel, a 20-level full factorial design was selected; this leads to a training data set that contains 400 simulation runs in total, illustrated as small white round points in Fig. 6.4.

Fig. 6.4

EpoxyCut process map generated on a 20-level full factorial grid

The metamodel takes the discrete training data set as input and provides the operator with a continuous relationship between the pulse duration and the laser power (parameters) and the cutting width (quality) as output.

In order to address the first goal, which is to minimize the lower cutting width, the metamodel above provides a general overview of the 2D process map.

It can be clearly seen that one should decrease either the pulse duration or the laser power so that the cutting limits (the blue region marks the no-cut region) are not reached. The second goal was to include the cutting limits in the metamodel generation. When a global interpolation technique is applied, the mathematical value that represents the no-cut region (the user set it to 0 in this case) affects the global interpolation.

To demonstrate this, a Latin Hypercube Design with 49 training points was used (big white circles) with an RBFN interpolation. The results are shown in Fig. 6.5.

Fig. 6.5

An RBFN metamodel with a Latin Hypercube design (49 sampling points), shown on the left, is compared to the EpoxyCut reference, with the relative error in % shown on the right

From the results in Fig. 6.5, the developer can clearly see that a process that contains discontinuities, or feasible and non-feasible points (in this case cut and no cut), should first be separated into a feasible metamodel and a dichotomy metamodel.

The feasible metamodel improves the prediction accuracy in the cut region and the dichotomy metamodel states whether the prediction lies in a feasible or a non-feasible domain. This is one of the development fields that the authors of this paper are focusing on.
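A minimal version of this split could look as follows (Python/NumPy; the nearest-neighbour classifier, the function names, and the synthetic data are illustrative assumptions, not the authors' implementation): a dichotomy model first decides whether a query point lies in the cut region, and the feasible metamodel is evaluated only there.

```python
import numpy as np

def fit_dichotomy(x_train, feasible):
    """Store the training points and their cut / no-cut labels."""
    return np.asarray(x_train, float), np.asarray(feasible, bool)

def predict_dichotomy(model, x_query):
    """1-nearest-neighbour vote: is the query point in the feasible (cut) region?"""
    x_train, labels = model
    d = ((x_train[:, None, :] - x_query[None, :, :]) ** 2).sum(axis=-1)
    return labels[d.argmin(axis=0)]

def predict_combined(x_query, dichotomy, feasible_metamodel, no_cut_value=np.nan):
    """Evaluate the feasible metamodel only where the classifier predicts 'cut'."""
    x_query = np.atleast_2d(x_query)
    mask = predict_dichotomy(dichotomy, x_query)
    out = np.full(len(x_query), no_cut_value)
    if mask.any():
        out[mask] = np.apply_along_axis(feasible_metamodel, 1, x_query[mask])
    return out

# Illustrative usage with synthetic data (pulse duration, laser power), not EpoxyCut output:
rng = np.random.default_rng(2)
x = rng.random((400, 2))
feasible = x[:, 0] + x[:, 1] < 1.2                  # pretend cut region
width = lambda p: 10.0 + 40.0 * p[0] * p[1]         # pretend lower cutting width
dich = fit_dichotomy(x, feasible)
print(predict_combined([[0.2, 0.3], [0.9, 0.9]], dich, width))
```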

To address the third goal, which is to efficiently generate an accurate metamodel while using the minimal number of simulation runs, a smart sampling method is currently being developed at the department Nonlinear Dynamics of Laser Processing (NLD) at RWTH Aachen and will be published soon. The method is based on a classification technique with sequential approximation optimization, where training points are iteratively sampled based on defined statistical measures. The results are shown in Fig. 6.6.

Fig. 6.6

An RBFN metamodel with the smart sampling algorithm (also 49 sampling points), shown on the left, is compared to the EpoxyCut reference, with the relative error in % shown on the right
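Since the smart sampling algorithm itself is not yet published, no details are reproduced here. Purely as a generic illustration of sequential sampling (and explicitly not the NLD method), the loop below starts from a small space-filling seed and, in each iteration, simulates the candidate point that is farthest from all points sampled so far:

```python
import numpy as np

def sequential_fill(simulate, initial_points, candidates, n_add):
    """Generic sequential sampling: starting from an initial design, repeatedly add
    the candidate point farthest from all points sampled so far, then simulate it."""
    x = list(np.asarray(initial_points, float))
    y = [simulate(p) for p in x]
    cand = np.asarray(candidates, float)
    for _ in range(n_add):
        d = np.sqrt(((cand[:, None, :] - np.asarray(x)[None, :, :]) ** 2).sum(-1)).min(axis=1)
        new = cand[d.argmax()]
        x.append(new)
        y.append(simulate(new))      # expensive simulation run only for the new point
    return np.asarray(x), np.asarray(y)

# Illustrative usage with a cheap stand-in for the simulation (assumption):
simulate = lambda p: np.sin(5 * p[0]) * p[1]
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
x, y = sequential_fill(simulate, initial_points=grid[::57][:7], candidates=grid, n_add=42)
print(len(x))   # 49 training points, as in the example above
```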

3.3 Sheet Metal Drilling

As an example of heuristic approaches, a reduced model for sheet metal drilling has been implemented based on the heuristic concept of an ablation threshold. The calculated hole shapes have been compared with experimental observations. Finally, by exploring the parameter space, the limits of applicability are found and the relation to an earlier model derived from mathematical physics is revealed. Let Θ denote the angle between the local surface normal of the sheet metal surface and the incident direction of the laser beam. The asymptotic hole shape is characterized by a local angle of incidence Θ which approaches its asymptotic value ΘTh. The reduced model assumes that there exists an ablation threshold characterized by the threshold fluence FTh, which is material specific and has to be determined to apply the model.

$$ \cos\Theta_{Th}\,F = F_{Th}, \qquad \Theta_{Th} = 0 \quad \text{for} \quad F < F_{Th} $$
(6.8)

A single simulation run is used to estimate the threshold fluence FTh by fitting the width of the drill hole at the bottom. As a consequence, the whole asymptotic shape of the drilled hole can be calculated, as illustrated in Fig. 6.7.

Fig. 6.7

Cross section and shape (solid curve) of the drill hole calculated by the reduced model for sheet metal drilling

Finally, classification of sheet metal drilling can be performed by identifying the parameter region where the drill hole achieves its asymptotic shape. It is worth mentioning that for the limiting case of large fluence \( F \gg F_{Th} \) the reduced model is well known from the literature (Schulz 1986, 1987) and takes the explicit form:

$$ \frac{dz(x)}{dx} = \frac{F}{F_{Th}}, \qquad z(x_{Th}) = 0, \qquad F \gg F_{Th} $$
(6.9)

where z(x) is the depth of the drilled wall and x is the lateral coordinate with respect to the laser beam axis.
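For the limiting case of Eq. (6.9) the asymptotic hole shape follows from a straightforward numerical integration. The sketch below (Python/NumPy) assumes a Gaussian fluence profile F(x) and illustrative values for the peak fluence, beam radius, and threshold fluence; none of these are specified in the text, and z is taken as the wall depth measured from the sheet surface into the material:

```python
import numpy as np

# Illustrative parameters (assumptions): Gaussian fluence profile and threshold fluence.
F0, w0, F_th = 10.0, 0.1, 1.0                    # peak fluence (J/cm^2), beam radius (mm), threshold
fluence = lambda x: F0 * np.exp(-2.0 * x**2 / w0**2)

x_th = w0 * np.sqrt(0.5 * np.log(F0 / F_th))     # lateral position where F(x_th) = F_th
x = np.linspace(0.0, x_th, 400)
dx = x[1] - x[0]
flux = fluence(x) / F_th

# Integrate dz/dx = F(x)/F_th with z(x_th) = 0, Eq. (6.9): the depth at x is the
# trapezoidal integral of F/F_th from x to x_th (reverse cumulative sum of segments).
seg = (flux[:-1] + flux[1:]) * 0.5 * dx
depth = np.concatenate([seg[::-1].cumsum()[::-1], [0.0]])

print(f"x_th = {x_th:.3f} mm, maximum depth on axis = {depth[0]:.3f} mm")
```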

3.4 Ablation of Glass

As an example for classification of a parameter space, we consider laser ablation of glass with ultrashort pulses as a promising solution for cutting thin glass sheets in the display industry. A numerical model describes laser ablation and laser damage in glass based on beam propagation and nonlinear absorption as well as the generation of free electrons (Sun 2013). The free-electron density increases until reaching the critical electron density \( \rho_{\text{crit}} = \omega^{2} m_{e} \varepsilon_{0} /e^{2} = 3.95 \times 10^{21}\,{\text{cm}}^{-3} \), which yields the ablation threshold.

The material near the ablated crater wall is modified due to the energy released by the high-density free electrons. The threshold electron density ρdamage for laser damage is a material-dependent quantity that typically has the value ρdamage = 0.025 ρcrit and is used as the damage criterion in the model. Classification of the parameter regions where damage and ablation take place reveals that the threshold, as shown in Fig. 6.8 below, changes from an intensity threshold for ns pulses to a fluence threshold for ps pulses.

Fig. 6.8

Classification of the parameter regions for damage and ablation: the threshold changes from an intensity threshold for ns pulses to a fluence threshold for ps pulses
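For orientation, the critical and damage electron densities quoted above can be evaluated directly. The sketch below (Python) assumes a laser wavelength of about 530 nm, which approximately reproduces the quoted value of ρcrit; the wavelength is an assumption and is not stated in this section:

```python
import numpy as np

# Physical constants (SI units)
e    = 1.602176634e-19      # elementary charge (C)
m_e  = 9.1093837015e-31     # electron mass (kg)
eps0 = 8.8541878128e-12     # vacuum permittivity (F/m)
c    = 299792458.0          # speed of light (m/s)

# Assumed wavelength (not stated in this section); ~530 nm approximately
# reproduces the quoted critical density of 3.95e21 cm^-3.
lam = 530e-9
omega = 2.0 * np.pi * c / lam

rho_crit = omega**2 * m_e * eps0 / e**2     # critical electron density (m^-3)
rho_damage = 0.025 * rho_crit               # damage threshold used in the model

print(f"rho_crit   = {rho_crit * 1e-6:.3e} cm^-3")
print(f"rho_damage = {rho_damage * 1e-6:.3e} cm^-3")
```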

4 Conclusion and Outlook

This contribution is focused on the application of the Meta-Modeling techniques towards Virtual Production Intelligence. The concept of Meta-modelling is applied to laser processing, e.g. sheet metal cutting, sheet metal drilling, glass cutting, and cutting glass fiber reinforced plastic. The goal is to convince the simulation analysts to use the metamodeling techniques in order to generate such process maps that support their decision making. The techniques can be applied to almost any economical, ecological, or technical process, where the process itself is described by a reduced model. Such a reduced model is the object of Meta-Modeling and can be seen as a data generating black box which operates fast and frugal. Once an initial reduced model is set then data manipulation is used to evaluate and to improve the reduced model until the desired model quality is achieved by iteration. Hence, one aim of Meta-Modeling is to provide a concept and tools which guide and facilitate the design of a reduced model with the desired model quality. Evaluation of the reduced model is carried out by comparison with rare and expensive data from more comprehensive numerical simulation and experimental evidence. Finally, a Meta-Model serves as a user friendly look-up table for the criteria with a large extent of a continuous support in parameter space enabling fast exploration and optimization.

The concept of Meta-Modeling plays an important role in improving the quality of the process since: (i) it allows a fast prediction tool for new parameter settings, providing mathematical methods to carry out involved tasks like global optimization, sensitivity analysis, parameter reduction, etc.; (ii) it allows fast exploration via a user interface where tendencies or patterns are visualized, supporting intuition; (iii) it replaces the discrete data of current conventional technology tables or catalogues, which are delivered with almost all manufacturing machines for good operation, by continuous maps.

It turns out that a reduced model or even a Meta-Model with the greatest accuracy is not necessarily the “best” Meta-Model, since choosing a Meta-Model is a decision making procedure that involves compromises between many criteria (speed, accuracy, visualization, complexity, storage, etc.) of the Meta-Model quality. In the special application case studied here the minimum number of sampling points with a linear regression model is already a good choice for giving an optimized working point for sheet metal cutting, if speed, storage and fast visualization are of dominant interest. On the other hand when dealing with high accuracy goals especially when detecting physical limits, smart sampling techniques, nonlinear interpolation models and more complex metamodels (e.g. with classification techniques) are suitable.

Further progress will focus on improving the performance of generating the metamodel, especially developing the smart sampling algorithm and verifying it on other industrial applications. Additional progress will focus on allowing the creation of a metamodel that handles distributed quantities and not only scalar quantities. Last but not least, these metamodels will be interfaced to global sensitivity analysis techniques that help to extract knowledge or rules from data.