
Computational Methods and Optimization in Machining of Metal Matrix Composites

  • V. N. Gaitonde
  • S. R. Karnik
  • J. Paulo Davim
Chapter

Abstract

This chapter deals with the importance of mathematical modeling and the need for process optimization. Further, case studies involving various modeling and optimization techniques applied to the machining of metal matrix composites are discussed.

Keywords

Particle Swarm Optimization · Artificial Neural Network · Feed Rate · Response Surface Methodology · Metal Matrix Composite


7.1 Introduction

The main objective of any machining process is to obtain the desired level of quality characteristic for the finished component. In order to study the machining behavior, it is essential to establish the relationship between the controlled parameters and the required quality characteristic. On the other hand, to optimize the machining process, the proper setting of control parameters (factors) on the performance measure (response) is necessary. Both modeling and optimization require efficient planning of experiments for minimizing the number of experiments, which in turn reduces the time and cost involved in the experimentation. The design of experiments (DOE) is extensively used in the planning of experiments involving several factors, where it is necessary to investigate the joint effect of the factors on a response variable [1, 2]. The DOE takes levels of several input parameters to formulate the different combinations at which the output parameters are to be observed or computed. There are several types of DOE available, which are based on statistical theory [1, 2]. The selection of proper DOE basically depends on the purpose of experimentation, which includes the development of modeling or/and optimization of machining process.
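To make the idea of a factorial plan concrete, the short Python sketch below enumerates a full-factorial design for two factors. The factor names and level values are illustrative only; they mirror the levels used later in Table 7.1.

```python
# Minimal sketch (not from the chapter): enumerating a full-factorial plan
# for two factors. Factor names and level values are illustrative assumptions.
from itertools import product

cutting_speed = [50, 100, 200]          # m/min (3 levels)
feed_rate = [0.05, 0.10, 0.15, 0.20]    # mm/rev (4 levels)

# A 3 x 4 full-factorial design enumerates every level combination (12 trials).
design = list(product(cutting_speed, feed_rate))
for trial, (v, f) in enumerate(design, start=1):
    print(f"Trial {trial:2d}: v = {v} m/min, f = {f} mm/rev")
```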

7.1.1 Importance of Mathematical Modeling

In any machining process, a mathematical model is constructed for prediction, optimization and control of the process. A regression model is fitted to a set of sample data and describes the relationship between the quality characteristic (y) and the input control factors (x_1, x_2, …, x_k). The true functional relationship is given by:
$$ y = \varphi (x_{1} ,x_{2} ,x_{3} , \ldots x_{k} ) $$
(7.1)

In most cases, the true functional relationship is not known and hence an appropriate function must be chosen to approximate φ. Normally, low-order polynomial models such as first- and second-order models are largely used as approximating functions [1, 2], and these empirical regression equations are called response surface models.

Identifying and fitting an appropriate response surface model from experimental data requires some knowledge of DOE, regression modeling techniques and elementary optimization methods [1, 2]. The response surface methodology (RSM) integrates all of the above. These empirical models serve to provide information about the properties of the system from which the data are taken [1, 2]. There are many other situations in which a mathematical model is used for process optimization.

There are many types of experimental designs available, which are mainly based on the number of factors and their levels. The appropriate DOE is selected on the basis of three criteria, namely, the purpose (model development/optimization), the nature of the model (linear or nonlinear) and the number of factors and the levels defined for each factor [3]. The DOE can be classified into full-factorial design, fractional-factorial design, orthogonal array, central-composite design and Box–Behnken design [1, 2, 3, 4].

7.1.2 Need for Optimization

The optimization of a system (product/process) design means determining the best architecture, the best parameter values and the best tolerances [4]. However, optimizing any process remains one of the most challenging problems because of its high complexity and nonlinearity, and most engineering design problems are multi-objective in nature. Hence, designing the system to meet the required quality characteristic is an economical and technological challenge for the engineer or scientist.

The conventional methods of optimization do not fare well over a broad spectrum of problem domains and are not efficient when the practical search space is too large. Moreover, these methods are not robust and tend to converge to a local optimum. In order to meet the above requirements, a systematic and efficient method of design optimization for performance, quality and cost is crucial [4]. The techniques for solving optimization problems can be categorized as:
  • Taguchi robust design optimization, i.e., optimizing the process using the Taguchi technique with a minimum number of experiments and without the need to develop models [4].

  • Model-based optimization techniques, i.e., using empirical models to predict the response of interest and then finding the best compromises among the multiple objectives through non-traditional optimization tools such as genetic algorithms (GA), simulated annealing (SA), ant colony optimization (ACO) and particle swarm optimization (PSO) [5].

7.2 Modeling Approaches

Metamodels are empirical expressions used to relate the control factors to the performance characteristics of interest. The data obtained from the statistical experimental design are utilized to construct the models. The advantage of a metamodel is that, once created, it can generate a large number of predictions at negligible cost. The most commonly used metamodels are response surface models, fuzzy logic, artificial neural networks and neuro-fuzzy inference systems. However, models based on RSM and artificial neural networks (ANN) have recently gained much attention in studying and analyzing the behavior of machining processes.

7.2.1 Modeling Based on Response Surface Methodology

RSM combined with DOE has proved to be an efficient mathematical modeling tool [1, 2]. The methodology not only reduces cost and time, but also provides the required information about the main and interaction effects of the factors with a reduced number of experiments. RSM is a collection of mathematical and statistical techniques useful for building mathematical models and analyzing problems, providing an overall perspective of the system response within the design space [1, 2]. The mathematical model relating the quality characteristic to the control factors can be obtained by multiple regression analysis with a minimum number of experiments planned through DOE. RSM refers not merely to the use of a response surface as a multi-variate function but also to the process for determining the polynomial coefficients.

The construction of RSM models involves three main steps, namely, choosing a functional form for the model representation, fitting the model to the observed data and the statistical validation for the response surface [3].

After conducting a set of experiments according to the experimental design to obtain the quality characteristics or outputs, the next step is to take the vectors of input control factors (x) and the corresponding responses (y) and fit an appropriate model. A typical response surface model limits the order of the polynomial to one or two, since low-degree models contain fewer terms than higher-degree models and thus require fewer experiments to be performed [1, 2, 3].

The first-order model is likely to be appropriate when the engineer or scientist is interested in approximating the true response surface over a relatively small region of the independent-variable space, in a location where there is little curvature in the true response function [1, 2]. The first-order model is given by [1, 2]:
$$ y = \beta_{0} + \sum\limits_{i = 1}^{k} {\beta_{i} x_{i} } $$
(7.2)
where the β's are regression coefficients and k is the number of input variables. If there is curvature in the system, i.e., the response exhibits nonlinear behavior, a second-order polynomial is considered as the response surface model and is given by [1, 2]:
$$ y = \beta_{0} + \sum\limits_{i = 1}^{k} {\beta_{i} x_{i} + } \sum\limits_{i = 1}^{k} {\beta_{ii} x_{i}^{2} + } \sum {\sum\limits_{i < j} {\beta_{ij} x_{i} } } x_{j} $$
(7.3)

The second-order model can take on a wide variety of functional forms, so it will often work well as an approximation to the true response surface.

The values of regression coefficients of linear, quadratic and interaction terms of the model are determined by [1, 2]:
$$ b = \left( {X^{\text{T}} X} \right)^{ - 1} X^{\text{T}} Y $$
(7.4)
where b is the vector of regression coefficients, X is the calculation matrix that contains the main and interaction terms, X^T is the transpose of X, \( \left( {X^{\text{T}} X} \right)^{ - 1} \) is the inverse of \( \left( {X^{\text{T}} X} \right) \) and Y is the vector of measured responses.
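As an illustration of Eq. 7.4, the Python sketch below (the variable names are assumptions, not from [1, 2]) builds the calculation matrix for a second-order model in two variables and solves for the coefficients by least squares, using the machining-force data that appear later in Table 7.2.

```python
# Minimal sketch of Eq. 7.4, b = (X^T X)^{-1} X^T Y, for a second-order model
# in two variables (v, f). Data rows are the machining-force values of Table 7.2.
import numpy as np

v = np.repeat([50.0, 100.0, 200.0], 4)                 # cutting speed, m/min
f = np.tile([0.05, 0.10, 0.15, 0.20], 3)               # feed rate, mm/rev
y = np.array([216.2, 326.9, 440.2, 551.3,
              220.3, 328.8, 429.5, 515.9,
              212.8, 309.0, 389.8, 461.5])             # machining force F_m, N

# Calculation matrix X: constant, linear, interaction and quadratic terms
X = np.column_stack([np.ones_like(v), v, f, v * f, v**2, f**2])

# b = (X^T X)^{-1} X^T Y; lstsq is numerically safer than forming the inverse
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Regression coefficients:", b)
```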

The analysis of variance (ANOVA) [1, 2] technique is used to check the adequacy of the developed model for the desired confidence interval. The ANOVA table includes the sum of squares (SS), the degrees of freedom (DF) and the mean square (MS). In ANOVA, the contribution for SS is from the first order terms, the second-order terms and the residual error. The MS are obtained by dividing the SS of each of the sources of variation by the respective DF. The Fisher’s variance ratio is the ratio of MS of regression to MS of residual error. As per ANOVA, the model developed is adequate within the desired confidence interval, if Fisher’s variance ratio of regression exceeds the standard tabulated value of F-ratio.
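The adequacy check described above can be reproduced with a few lines of Python; the sketch below uses the machining-force sums of squares reported later in Table 7.3 and assumes SciPy is available for the tabulated F value.

```python
# Minimal sketch of the ANOVA adequacy check: compare the regression F-ratio
# against the tabulated F value for the chosen confidence level.
from scipy.stats import f as f_dist

ss_regression, df_regression = 149_186.0, 5   # machining-force model (Table 7.3)
ss_residual, df_residual = 136.0, 6

ms_regression = ss_regression / df_regression
ms_residual = ss_residual / df_residual
f_ratio = ms_regression / ms_residual

f_critical = f_dist.ppf(0.95, df_regression, df_residual)  # F(5, 6, 0.05) ~ 4.39
print(f"F-ratio = {f_ratio:.1f}, critical value = {f_critical:.2f}")
print("Model adequate" if f_ratio > f_critical else "Model not adequate")
```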

7.2.1.1 Case Study: RSM Model Development for Machining of MMC

The study presented here describes the development of second-order RSM-based models to predict some aspects of machinability, namely, machining force (F_m), cutting power (P) and specific cutting force (K_s), during turning of metal matrix composites (MMC) [6]. An aluminum alloy reinforced with 20% of silicon carbide (SiC) particulates (A356/20/SiCp-T6) was used as the work material throughout the investigation. The A356 aluminum matrix contains 7% Si and 0.4% Mg. The average size of the SiC particles is about 20 μm.

Three levels of cutting speed (v) and four levels of feed rate (f) were selected, as given in Table 7.1. The turning experiments were planned as per a full-factorial design (FFD) and were performed on MMC workpieces with a diameter of 60 mm and a length of 200 mm using a PCD tool (TCMW 16T3 04 FP CD10). An 'STGCL 2020 K16' type tool holder was used. The tool geometry was as follows: rake angle 0°, clearance angle 7°, cutting edge angle 91° and cutting edge inclination angle 0°. A 'Kingsbury MHP 50' CNC lathe with 18 kW spindle power and a maximum spindle speed of 4,500 rpm was used to conduct the experiments. All the experiments were performed using a cutting fluid (emulsion 1/10 with BP Microtrend 231 L). A depth of cut of 2 mm was kept constant throughout the investigation.
Table 7.1

Process parameters and their levels

Parameters          Unit      Level 1   Level 2   Level 3   Level 4
Cutting speed (v)   m/min     50        100       200       –
Feed rate (f)       mm/rev    0.05      0.10      0.15      0.20

Adapted from Gaitonde et al. [6], with permission from SAGE

A Kistler® 9121 piezoelectric dynamometer with a charge amplifier (model 5019) was used to acquire the three force components, namely, cutting force (F_c), feed force (F_f) and depth force (F_d). The data acquisition was performed through the charge amplifier and a computer using appropriate software (Dynoware by Kistler®). The machining force, cutting power and specific cutting force are computed as:
$$ F_{\text{m}} = \sqrt {F_{\text{c}}^{2} + F_{\text{f}}^{2} + F_{\text{d}}^{2} } $$
(7.5)
$$ P = F_{\text{c}} v $$
(7.6)
$$ K_{\text{s}} = \frac{{F_{\text{c}} }}{f\,d} $$
(7.7)
where, d is the depth of cut. The corresponding experimental layout plan along with the responses is presented in Table 7.2.
Table 7.2

Experimental layout plan and the machinability characteristics

Trial   Level of v   Level of f   v (m/min)   f (mm/rev)   F_m (N)   P (W)    K_s (MPa)
1       1            1            50          0.05         216.2     145      1,744.3
2       1            2            50          0.10         326.9     224      1,346.2
3       1            3            50          0.15         440.2     305      1,221.0
4       1            4            50          0.20         551.3     385      1,156.0
5       2            1            100         0.05         220.3     274      1,643.4
6       2            2            100         0.10         328.8     433      1,299.1
7       2            3            100         0.15         429.5     582      1,163.2
8       2            4            100         0.20         515.9     717      1,074.8
9       3            1            200         0.05         212.8     522      1,565.5
10      3            2            200         0.10         309.0     812      1,218.1
11      3            3            200         0.15         389.8     1,071    1,071.2
12      3            4            200         0.20         461.5     1,308    980.9

Adapted from Gaitonde et al. [6], with permission from SAGE

Following the model development procedure as explained in Sect. 7.2.1, the second-order RSM-based models of the machinability characteristics in natural form as reported by Gaitonde et al. [6] are given by:
$$ F_{\text{m}} = 71.26667 + 0.376571\,v + 2782.8\,f - 3.79257143\,vf - 0.0006867\,v^{2} - 1540\,f^{2} $$
(7.8)
$$ P = - 77.125 + 2.201429\,v + 1096.333\,f + 24.00857143\,vf - 0.0031167\,v^{2} - 2533.33333\,f^{2} $$
(7.9)
$$ K_{\text{s}} = 2275.417 - 1.97179\,v - 10772\,f - 0.28971429\,vf + 0.00382\,v^{2} + 28203.3333\,f^{2} $$
(7.10)
where v is in m/min, f in mm/rev, F_m in N, P in W and K_s in MPa.
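A minimal sketch of how the fitted models can be used for prediction is given below; the helper function is an assumption for illustration and simply evaluates Eqs. 7.8–7.10 at a chosen cutting condition within the experimental ranges.

```python
# Minimal sketch (assumed helper, not from [6]): evaluating the fitted
# second-order models of Eqs. 7.8-7.10 at a chosen cutting condition.
def predict_machinability(v, f):
    """v in m/min, f in mm/rev; returns (F_m [N], P [W], K_s [MPa])."""
    F_m = 71.26667 + 0.376571*v + 2782.8*f - 3.79257143*v*f - 0.0006867*v**2 - 1540*f**2
    P = -77.125 + 2.201429*v + 1096.333*f + 24.00857143*v*f - 0.0031167*v**2 - 2533.33333*f**2
    K_s = 2275.417 - 1.97179*v - 10772*f - 0.28971429*v*f + 0.00382*v**2 + 28203.3333*f**2
    return F_m, P, K_s

# Example: predicted values near the centre of the experimental region
print(predict_machinability(v=100, f=0.10))
```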
The model adequacy is checked through ANOVA and the results are summarized in Table 7.3. The developed mathematical models of the machinability characteristics are significant at the 95% confidence level, as the F-ratio of each model exceeds 4.39 (F-table (5, 6, 0.05)).
Table 7.3

Summary of ANOVA for machining force, cutting power and specific cutting force models

Response                       Sum of squares           Degrees of freedom    Mean square            F-ratio
                               Regression   Residual    Regression  Residual  Regression  Residual
Machining force (F_m)          149,186      136         5           6         29,837      23         1,312.00
Cutting power (P)              1,388,586    536         5           6         277,717     89         3,111.35
Specific cutting force (K_s)   638,456      5,657       5           6         127,691     243        135.44

Adapted from Gaitonde et al. [6], with permission from SAGE

F-table (5, 6, 0.05) = 4.39

The proposed RSM-based mathematical models of the machinability characteristics are used to analyze the interaction effects of the process parameters by substituting values of cutting speed and feed rate within the selected ranges [6]. The interaction effects of cutting speed and feed rate on machining force, cutting power and specific cutting force reported by Gaitonde et al. [6] are illustrated in Figs. 7.1, 7.2 and 7.3. It is observed from Fig. 7.1 that, for any value of cutting speed, the machining force increases with feed rate during turning of MMCs. Further, the machining force decreases with increasing cutting speed and is more sensitive to variations in cutting speed at higher feed rates than at lower ones. As seen from Fig. 7.2, the cutting power increases with feed rate for any given cutting speed, and it is more sensitive to variations in cutting speed at higher feed rates than at lower ones. As depicted in Fig. 7.3, for a given cutting speed the specific cutting force decreases with increasing feed rate, and for a given feed rate it is relatively insensitive to cutting speed variations during MMC machining. A combination of higher feed rate with high cutting speed is found to be beneficial for minimizing the specific cutting force.
Fig. 7.1

Effect of cutting speed and feed rate on machining force (adapted from Gaitonde et al. [6], with permission from SAGE)

Fig. 7.2

Effect of cutting speed and feed rate on cutting power (adapted from Gaitonde et al. [6], with permission from SAGE)

Fig. 7.3

Effect of cutting speed and feed rate on specific cutting force (adapted from Gaitonde et al. [6], with permission from SAGE)

7.2.2 Modeling Based on Artificial Neural Networks

Model development by RSM requires a minimum number of experiments to be conducted, but it is restricted to a relatively small range of input variables and hence is not suitable for complex and highly nonlinear processes. On the other hand, the development of a higher-order RSM model requires more experiments to be performed and is therefore costlier. This limits the use of RSM models for highly nonlinear processes, and these constraints led to the development of models based on ANN.

The ANN is a fast, efficient, accurate and cost-effective process-modeling tool in which biological neurons are represented by a mathematical model [7]. The ability of an ANN to capture complex input–output relationships from a limited data set is valuable in machining, where a large experimental data set for process modeling is difficult and expensive to obtain. The ANN is a flexible modeling tool with an aptitude for learning the mapping between input and output variables [7].

The purpose of ANN development is to imitate the human brain so as to implement various functions such as association, self-organization and generalization. ANNs are parallel computational models of the processes and mechanisms that constitute biological nervous systems. They are attractive in view of their high execution speed and modest computer hardware requirements, in addition to their adaptive nature and capability in solving complex and nonlinear problems. An ANN is made up of neurons connected via links. The information is processed within the neurons and is propagated to other neurons through the connecting links.

Normally, a multi-layer feed-forward ANN trained with the error back-propagation training algorithm (EBPTA) is adopted. The EBPTA is a supervised learning algorithm based on the generalized delta rule [7], which requires a set of inputs and desired outputs, known as training patterns. The EBPTA uses a gradient search technique that updates the synaptic weights of the connecting links during the learning stage in such a way that the mean square error (MSE) between the actual and desired output responses is minimized. The multi-layer feed-forward ANN consists of neurons divided into an input layer, hidden layer(s) and an output layer. The net activation input for the ith neuron is given by [7]:
$$ net_{i} = \sum\limits_{j = 1}^{n} {\,w_{ij} x_{j} } $$
(7.11)
where w_ij is the weight of the link connecting neuron j to neuron i and x_j is the output of the jth neuron. For a unipolar sigmoid transfer function, the output of the ith neuron is given by [7]:
$$ o_{{i}} = \frac{1}{{1 + e^{{ - \eta \,net_{{i}} }} }} $$
(7.12)
where η is the scaling factor. The training algorithm is based on weight updates that minimize the sum of squared errors over the K neurons in the output layer, given by [7]:
$$ E = \frac{1}{2}\sum\limits_{{{\text{k}} = 1}}^{K} {\left( {d_{{{\text{k}},{\text{p}}}} - o_{{{\text{k}},{\text{p}}}} } \right)^{2} } $$
(7.13)
where d_{k,p} is the desired output and o_{k,p} the actual output for the pth pattern. The synaptic weights of the links are updated as [7]:
$$ w_{{ji({\text{n}} + 1)}} \, = w_{{ji({\text{n}})}} + \alpha \,\delta_{pj} o_{pi} + \beta \,\Updelta w_{{ji({\text{n}})}} $$
(7.14)
where n is the learning step, α is the learning rate and β is the momentum constant. The error term δ_pj is given by:
  • For output layer:
    $$ \delta_{pk} = (d_{kp} - o_{kp} )\,o_{kp} (1 - o_{kp} );\;k = 1, \ldots ,K $$
    (7.15)
  • For hidden layer:
    $$ \delta_{pj} = o_{pj} (1 - o_{pj} )\,\sum {\delta_{pk} w_{kj} } ;\;j \, = 1, \ldots J $$
    (7.16)
where, J is the number of neurons in the hidden layer.
The steps involved in ANN training using EBPTA are mentioned below:
  1. The network weights are initialized to small random values.

  2. The input/desired-output pairs are presented one by one, updating the weights each time.

  3. The MSE due to all outputs and NP number of patterns is computed as:
     $$ MSE = \frac{1}{NP}\sum\limits_{p = 1}^{NP} {\,\,\sum\limits_{k = 1}^{K} {\left( {d_{kp} - o_{kp} } \right)^{2} } } $$
     (7.17)

  4. If (MSE < specified tolerance) or (epochs > (epochs)max), then stop; else, go to Step 2.
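The loop above can be written compactly in Python. The sketch below is a minimal, illustrative implementation for a single hidden layer; the layer sizes, toy data and hyper-parameters are assumptions, not the settings used in the case studies.

```python
# Minimal numpy sketch of the EBPTA loop: unipolar sigmoid units, learning
# rate alpha and momentum beta. Data and hyper-parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training patterns: 3 scaled inputs -> 1 scaled output
X = rng.random((18, 3))
D = rng.random((18, 1))

n_in, n_hidden, n_out = 3, 10, 1
W1 = rng.uniform(-0.1, 0.1, (n_in, n_hidden))    # Step 1: small random weights
W2 = rng.uniform(-0.1, 0.1, (n_hidden, n_out))
dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
alpha, beta = 0.6, 0.9                           # learning rate, momentum
tolerance, max_epochs = 1e-3, 2000

for epoch in range(max_epochs):
    sq_err = 0.0
    for x, d in zip(X, D):                       # Step 2: pattern-wise update
        h = sigmoid(x @ W1)                      # hidden-layer outputs
        o = sigmoid(h @ W2)                      # output-layer outputs
        delta_o = (d - o) * o * (1 - o)          # Eq. 7.15 (output layer)
        delta_h = h * (1 - h) * (W2 @ delta_o)   # Eq. 7.16 (hidden layer)
        dW2 = alpha * np.outer(h, delta_o) + beta * dW2_prev   # Eq. 7.14
        dW1 = alpha * np.outer(x, delta_h) + beta * dW1_prev
        W2 += dW2
        W1 += dW1
        dW2_prev, dW1_prev = dW2, dW1
        sq_err += float(np.sum((d - o) ** 2))
    mse = sq_err / len(X)                        # Step 3: Eq. 7.17
    if mse < tolerance:                          # Step 4: stopping criterion
        break

print(f"Stopped after {epoch + 1} epochs, MSE = {mse:.5f}")
```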

7.2.2.1 Case Study: ANN Model Development for Machining of MMC

A case study of ANN-based modeling of surface roughness in turning of Al-SiC (20p) using a coarse-grade polycrystalline diamond (PCD) insert under different cutting conditions is considered in this section [8]. A multi-layer perceptron (MLP) model was constructed with EBPTA to capture the relationship between cutting speed (v), feed rate (f) and depth of cut (d) and the surface roughness (R_a) of the turned component. The input–output data required for development of the ANN model were obtained through FFD. The experimental results were obtained by turning MMCs of type A356/SiC/20p (aluminum with 7.5% silicon and 2.44% magnesium, reinforced with 20% volume of SiC particles).

A medium-duty lathe with 2 kW spindle power was employed for dry turning of 30 parameter combinations. PCD inserts of type CNMA 120408, mounted in a PCLNR 25X25 M12 tool holder, were used to turn billets of 150 mm diameter. The tool geometry of the PCD inserts was a top rake angle of 0° and a nose radius of 0.8 mm. The work material was machined at five cutting speeds ranging from 100 to 600 m/min, with two feed rates of 0.108 and 0.200 mm/rev and depths of cut of 0.25, 0.50 and 0.75 mm. Each experimental trial was carried out for a duration of 3 min.

The average surface roughness (R_a) in the direction of tool movement was measured at three different locations on the machined surface using a Mitutoyo Surftest-301 surface roughness tester with a cut-off length of 0.8 mm and a traverse length of 2.5 mm. The average of the three measurements was considered for each trial.

The ANN training was performed using 18 input–output patterns and the remaining 12 data sets were utilized for ANN validation. The network was trained using suitable scaling factors for the input parameters. The ANN designed for the present study takes depth of cut, cutting speed and feed rate as the input parameters and surface roughness as the output parameter. The ANN architecture selected for the surface roughness model is 3-10-1, the number of epochs is set to 2,000, the transfer function is sigmoid, the learning factor is 0.6 and the momentum factor is 1.0.

The average error when testing all training and testing patterns was found to be 1.47% for the developed ANN-based surface roughness model. The validation of results for surface roughness obtained using ANN is presented in Table 7.4. The details of ANN training and model verification are given in Muthukrishnan and Davim [8]. The proposed ANN model of surface roughness can be used to analyze the interaction effects of process parameters on surface roughness of turned components of MMC by generating 3D surface plots.
Table 7.4

Validation of results for surface roughness obtained using ANN

Reading number   Experimental surface roughness (μm)   ANN-predicted surface roughness (μm)   Error (%)
1                3.94                                  3.96                                   0.75
2                2.27                                  2.25                                   0.50
3                4.21                                  4.30                                   2.17
4                3.87                                  3.88                                   0.27
5                4.50                                  4.45                                   1.10
6                2.49                                  2.52                                   1.42
7                6.19                                  6.10                                   1.44
8                5.05                                  5.03                                   0.39
9                5.39                                  5.13                                   4.78
10               2.93                                  2.86                                   2.26
11               5.75                                  5.66                                   1.48
12               4.57                                  4.60                                   0.84

Adapted from Muthukrishnan and Davim [8], with permission from Elsevier

7.3 Optimization Methods

The robust parameter design (RPD) as suggested by Taguchi [2, 4] is an engineering methodology that emphasizes proper choice of levels of control factors in a process. The principle of choice of levels focuses mainly on variability around a target for the quality characteristic. The majority of variability around a target is caused by uncontrollable parameters known as noise factors. Hence, RPD entails designing the system by selecting the optimal levels of control factors so as to achieve robustness of system response to inevitable changes in the noise factors [2, 4]. The noise factors may be, and often are, controlled at research or development level but they cannot be controlled at the production or product use level [4].

Traditionally, mathematical programming techniques like linear programming, integer programming, dynamic programming and geometric programming have been used to solve optimization problems in machining. However, the traditional techniques are not ideal for solving these problems, as they tend to converge to a local optimum. Considering the drawbacks of the traditional optimization techniques, attempts are being made to optimize machining problems using heuristic search algorithms like GA, SA, ACO and PSO [5].

7.3.1 Taguchi Robust Design

The Taguchi-based optimization technique has produced a unique and powerful optimization discipline that differs from traditional practices. The Taguchi optimization technique is based on the concept of “robust design”, which aims at obtaining solutions that make the design less sensitive to noise factors. The Taguchi technique [4] has been widely applied in process design, wherein mathematical models for the performance do not exist and experiments are typically conducted to determine the optimum settings for the design and process variables.

The Taguchi robust design principle is based on matrix experiments. The traditional experimental design methods are too complex and not easy to use; if the number of parameters is large, a large number of experiments must be performed. This problem is overcome in the Taguchi technique, which uses a special design of orthogonal arrays (OA) [4, 9] to study the entire parameter space with a small number of experiments. The OA is the major tool used in robust design and is employed to study the effect of many design parameters on a quality characteristic. The purpose of conducting an experiment based on an OA is to determine the optimum level for each parameter and to establish the relative significance of individual parameters in terms of their main effects on the quality characteristic. The OA gives acceptable estimates of factor effects with a reduced number of experiments when compared to the traditional methods.

Depending on number of factors and the levels defined for each of the factors, a suitable OA is selected. Each column of OA designates a factor and its setting levels in each experiment and each row designates an experiment trial with combination of levels of different factors in that trial. The first step in constructing an OA to fit a specific case study is to count the total DF, which gives the minimum number of experiments to be conducted. The selection of OA [9] begins with the distinct number of levels (l) defined for the number of factors (k). The minimum number of trials in the OA is:
$$ N_{\min } = \left( {l - 1} \right)k + 1 $$
(7.18)

Taguchi has tabulated 18 standard OAs, each comprising N_0 (≥ N_min) trials for different numbers of factors and levels, which can be directly used for the experimental plan [9]. However, a Taguchi design also allows a different number of levels to be defined for each factor; in such situations, a mixed-level OA needs to be selected for the experimentation. After a simple analysis and processing of the output results from the experiments conducted as per the OA, the optimum combination of factor values can be obtained. It has been demonstrated statistically that, although the number of experiments is dramatically reduced, the optimal result obtained from the OA is very close to that obtained from an FFD.

The principle behind Taguchi robust design is to control the effect of variations caused by noise factors on the product quality characteristic rather than controlling the source of noise itself [4, 9]. In order to minimize the variations in the quality characteristic, Taguchi introduced a method to transform the repetition data into a signal-to-noise (S/N) ratio (η), which is a measure of the variation present in the scattered response data [4, 9]. Maximizing the S/N ratio simultaneously optimizes the quality characteristic and minimizes the effect of the noise factors.

For each trial “i” in the OA, if the performance measure (y) is repeated “n” times, then S/N ratio can be computed as follows [4, 9]:
  • Smaller-the-better type:
    $$ \eta_{i} = - 10\log_{10} \left[ {\frac{1}{n}\sum\limits_{j = 1}^{n} {y_{j}^{2} } } \right]\;{\text{dB}},\quad {\text{if }}y\,{\text{needs to be minimized}}. $$
    (7.19)
  • Larger-the-better type:
    $$ \eta_{i} = - 10\log_{10} \left[ {\frac{1}{n}\sum\limits_{j = 1}^{n} {y_{j}^{ - 2} } } \right]\;{\text{dB}},\quad {\text{if }}y\,{\text{needs to be maximized}}. $$
    (7.20)

Taguchi optimization procedure consists of analysis of means (ANOM) and ANOVA on S/N ratio of OA [4]. The ANOM is used to identify the optimal factor level combinations and to estimate the main effects of each factor. ANOM is also employed to find the effect of a factor level, which is the deviation it causes from the overall mean response. The optimal level for a parameter is the level, which results in highest value of S/N ratio in the experimental region. The ANOVA is used to estimate the error variance for the effects and variance of the prediction error. ANOVA is performed on S/N ratio to obtain the contribution of each of the factors.
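A small numerical sketch of the S/N and ANOM steps is given below; the trial data, factor assignment and level coding are illustrative assumptions, not values from the case study in [8].

```python
# Minimal sketch (illustrative data): smaller-the-better S/N ratios for
# replicated trials, followed by ANOM to pick the optimal level of one factor.
import numpy as np

# Replicated responses (e.g. surface roughness) for 4 hypothetical OA trials
y = np.array([[3.9, 4.1, 4.0],
              [2.3, 2.2, 2.4],
              [4.2, 4.4, 4.3],
              [2.9, 3.0, 2.8]])

# Smaller-the-better: eta_i = -10 log10( mean(y^2) )   (Eq. 7.19)
eta = -10.0 * np.log10(np.mean(y ** 2, axis=1))

# Suppose one factor is at level 1 in trials 1-2 and level 2 in trials 3-4
levels = np.array([1, 1, 2, 2])

# ANOM: average S/N per level; the optimal level has the highest mean S/N
for level in np.unique(levels):
    print(f"Level {level}: mean S/N = {eta[levels == level].mean():.2f} dB")
```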

After selecting the optimal levels of the process parameters, the final step is to predict and verify the adequacy of the model in determining the optimum response [4, 9]. Confirmation experiments are then performed under the optimal conditions and the results are compared with the predictions. In order to judge the closeness of the observed value of the S/N ratio to the predicted value, the variance of the prediction error is determined and the corresponding two-standard-deviation confidence limits for the prediction error of the S/N ratio are calculated. If the prediction error is outside these limits, one should suspect that the additive model is not adequate; otherwise, the additive model is adequate [4, 9].

7.3.1.1 Case Study: Taguchi Robust Design for Machining of MMC

This case study demonstrates the application of the Taguchi method to determine the optimal process parameter settings, namely, cutting speed (v), feed rate (f) and depth of cut (d), in order to minimize the surface roughness (R_a) during turning of Al-SiC (20p) MMC using a coarse-grade PCD insert [8]. As in the previous case study explained in Sect. 7.2.2.1, the same workpiece material, cutting tool and experimental set-up were employed in the current investigation. The experiments were planned as per Taguchi's L27 orthogonal array and each trial of the array consists of three replications. The S/N ratio for the selected performance characteristic (smaller-the-better type) is given by [8]:
$$ \eta = - 10\log_{10} \left[ {\frac{1}{r}\sum\limits_{i = 1}^{r} {R_{i}^{2} } } \right] $$
(7.21)
where R_i is the value of the surface roughness for the ith test in that trial and r is the number of tests in a trial. For smaller-the-better characteristics, maximizing this ratio translates into a lower process average, improved consistency from one unit to the next, or both.
The ANOM gives the optimal levels of the process parameter combination and the ANOVA summarizes the percent contribution of each factor. The optimum factor level combinations obtained through ANOM reported by Muthukrishnan and Davim [8] are cutting speed at 575 m/min, feed rate at 0.108 mm/rev and depth of cut at 0.75 mm. The results of ANOVA for surface roughness are given in Table 7.5. The details of main effects of process parameters, the prediction and verification of quality characteristic using the optimal level of the design parameters are presented in Muthukrishnan and Davim [8]. It was reported that increase in S/N ratio from initial cutting parameters is 3.99 dB for surface roughness, which implies that the surface roughness qualities have improved. It was also observed in their study that the experimental results were close to the predicted values and were falling within the confidence limits.
Table 7.5

Results of ANOVA of surface roughness

Cutting parameters   Degrees of freedom   Sum of squares   Mean square   F-test   Percent contribution
Cutting speed        2                    60.70            30.38         28       12
Feed rate            2                    35.70            17.90         6.2      51
Depth of cut         2                    16.90            8.40          18.80    30
Error                20                   45.20            1.125         –        7
Total                26                   158.50           –             –        100

Adapted from Muthukrishnan and Davim [8], with permission from Elsevier

7.3.2 Heuristic Search Algorithms

The heuristic algorithms are model-based optimization techniques, i.e., they use metamodels to obtain predictions of the phenomena of interest and then find the best compromises among the objectives for the system through non-traditional optimization tools such as GA, SA, ACO and PSO.

7.3.2.1 Genetic Algorithms

GAs are non-traditional search algorithms that emulate the adaptive processes of natural biological systems [10]. Based on the survival and reproduction of the fittest, they continually search for new and better solutions without any pre-assumptions such as continuity and unimodality. GAs have been applied to many complex optimization and search problems, outperforming traditional optimization and search methods.

The candidate solutions of the problem that a GA attempts to solve are coded into strings of binary digits known as chromosomes. Each chromosome contains the information of one set of possible process parameters. Initially, a population of chromosomes is formed randomly. The fitness of each chromosome is then evaluated using an objective function after the chromosome has been decoded. Upon completion of the evaluation, either a roulette-wheel method or another selection scheme is used to randomly select pairs of chromosomes to undergo genetic operations such as crossover and mutation to produce offspring for fitness evaluation. This process continues until a near-optimal solution is found.
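The cycle just described can be sketched in a few dozen lines of Python. The decoding scheme, toy objective and parameter values below are illustrative assumptions, not the settings used in the case study of Sect. 7.3.2.5.

```python
# Minimal sketch of a binary-coded GA: roulette-wheel selection, single-point
# crossover and bit-flip mutation. Decoding and fitness are illustrative.
import random

random.seed(0)
N_BITS, POP_SIZE, GENERATIONS = 10, 20, 50
P_CROSS, P_MUT = 0.8, 0.02

def decode(bits):                       # map a bit string to x in [0, 1]
    return int("".join(map(str, bits)), 2) / (2 ** N_BITS - 1)

def fitness(bits):                      # maximize a toy objective
    x = decode(bits)
    return 1.0 - (x - 0.7) ** 2

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    fits = [fitness(c) for c in pop]
    total = sum(fits)
    # Roulette-wheel selection: probability proportional to fitness
    parents = random.choices(pop, weights=[fi / total for fi in fits], k=POP_SIZE)
    next_pop = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        c1, c2 = p1[:], p2[:]
        if random.random() < P_CROSS:   # single-point crossover
            cut = random.randint(1, N_BITS - 1)
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for child in (c1, c2):          # bit-flip mutation
            for i in range(N_BITS):
                if random.random() < P_MUT:
                    child[i] ^= 1
            next_pop.append(child)
    pop = next_pop

best = max(pop, key=fitness)
print("Best decoded solution:", decode(best))
```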

7.3.2.2 Simulated Annealing

SA is another non-traditional search and optimization technique, which resembles the cooling of molten metal through annealing [11]. The SA procedure simulates this annealing process to minimize the objective function value of a problem.

The algorithm begins with an initial point m_1 and a high temperature T. A second point m_2 is created using a Gaussian distribution and the difference in the function values at these two points (ΔE) is calculated. If the second point has a smaller function value, it is accepted; otherwise it is accepted with a probability exp(−ΔE/T) [11]. This completes one iteration of the SA procedure. The algorithm is terminated when a sufficiently small temperature is reached or a sufficiently small change in the function value is observed.
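The acceptance rule and cooling schedule can be written compactly; the toy objective function, step size and cooling factor in the sketch below are illustrative assumptions.

```python
# Minimal sketch of the SA acceptance rule: a worse point is accepted with
# probability exp(-dE/T), and T is gradually reduced.
import math
import random

def objective(x):
    return (x - 2.0) ** 2 + 1.0     # toy single-variable function to minimize

x = 10.0                            # initial point m1
T, T_min, cooling = 100.0, 1e-3, 0.95
random.seed(0)

while T > T_min:
    x_new = random.gauss(x, 1.0)    # candidate point m2 from a Gaussian step
    dE = objective(x_new) - objective(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = x_new                   # accept better points, or worse ones probabilistically
    T *= cooling                    # lower the temperature
print(f"Approximate minimum near x = {x:.3f}")
```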

7.3.2.3 Ant Colony Algorithm

The natural metaphor on which ant algorithms are based is the ant colony. Researchers have been fascinated by the ability of near-blind ants to establish the shortest route from their nest to a food source and back. These ants secrete a substance called pheromone and use its trail as a medium for communicating information [12]. The probability of a trail being followed by other ants is enhanced by further pheromone deposition by the ants following it. This cooperative behavior of ants inspired a new computational paradigm for optimizing real-life systems, which is suited to solving large-scale problems.

In the first step of the ant colony optimization (ACO) algorithm, one hundred solutions are generated randomly with parameters that satisfy the constraints. The initial solutions are classified as superior and inferior solutions. The following three operations are then performed on the randomly generated initial solutions: (1) random walk or crossover, in which 90% of the solutions (randomly chosen) in the inferior region are replaced using randomly selected superior solutions; (2) mutation, in which a value is randomly added to or subtracted from each variable of the newly created solutions in the inferior region with a mutation probability; and (3) trail diffusion, which is applied to the inferior solutions that were not considered during the random walk and mutation stages.

7.3.2.4 Particle Swarm Optimization

The PSO is a population-based stochastic optimization technique, inspired by social behavior of bird flocking or fish schooling [13]. The PSO shares many similarities with evolutionary computation techniques such as GA. The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation.

In PSO, the potential solutions, called particles, fly through the problem space by following current optimum particles. Each particle keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far and the fitness value is stored. This value is called “pbest”. Another “best” value that is tracked by the particle swarm optimizer is the best value, obtained so far by any particle in the neighbors of the particle. This location is called “lbest”. When a particle takes all the population as its topological neighbors, the best value is a global best and is called “gbest”. The PSO concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its “pbest” and “lbest” locations (local version of PSO). Acceleration is weighted by a random term, with separate random numbers being generated for acceleration toward “pbest” and “lbest” locations.
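The velocity and position updates described above are easy to express in code. The sketch below is a minimal global-best ("gbest") PSO on a toy objective; the inertia and acceleration coefficients, swarm size and test function are illustrative assumptions.

```python
# Minimal sketch of the PSO update: each particle is pulled toward its own
# best position (pbest) and the swarm's best position (gbest).
import numpy as np

rng = np.random.default_rng(1)

def objective(p):                       # toy 2-D function to minimize
    return np.sum((p - 3.0) ** 2, axis=-1)

n_particles, n_dims, iterations = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

pos = rng.uniform(-10, 10, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iterations):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val          # update each particle's personal best
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)] # update the global best
print("gbest ≈", gbest)
```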

7.3.2.5 Case Study: Genetic Algorithms-Based Optimization for Machining of MMC

This case study addresses the application of GA for optimizing the cutting conditions during turning of particulate MMCs (PMMCs) of type A356/20/SiCp-T6 (continuous casting), in which the matrix is aluminum with 7% silicon and 0.4% magnesium, reinforced with 20% volume of SiC particles [14]. The material was subjected to heat treatment (solutionizing and aging T6, 5 h at 154°C). The average size of the SiC particles is 20 μm.

A lathe with 6 kW spindle power was employed for the current investigation. TMCW 16T308F PCD inserts were used for machining billets of 95 mm diameter, lubricated with an emulsion (Alusol-B, 8%). A constant depth of cut of 1 mm was employed in the study. The turning conditions with the PCD inserts are presented in Table 7.6.
Table 7.6

Turning cutting conditions with PCD

S. No.   Cutting speed (m/min)   Feed rate (mm/rev)
1        250                     0.1
2        350                     0.1
3        500                     0.1
4        700                     0.1
5        500                     0.2
6        500                     0.05

Adapted from Antonio and Davim [14], with permission from Elsevier

A Kistler piezoelectric dynamometer with an appropriate charge amplifier was used. Different programs for data acquisition, based on LabVIEW software, were developed and used; they allow continuous recording and simultaneous graphical visualization of the evolution of the cutting force, feed force and depth force. The tool wear was measured according to ISO 3685 with a Mitutoyo optical microscope with 30× magnification and 1 μm resolution. The surface roughness of the turned workpiece was evaluated according to ISO 4287/1 with a Hommeltester T500 profilometer. The details of the experimental results and discussion are given in Antonio and Davim [14].

The aim of the numerical model was to obtain the optimal cutting conditions using GA. The controlled variables cutting speed (v), feed (f) and cutting time (t) assume the following discrete values: v = (v_1, v_2, …, v_n); f = (f_1, f_2, …, f_n) and t = (t_1, t_2, …, t_n), each defined by a genetic code. As reported in Antonio and Davim [14], the time interval is 39 min with fractions of 1 min and hence the design space is a typical discrete and non-convex search domain. Each chromosome has three genes and each gene represents the code value of one variable of the turning operation according to the above discrete values. The outputs considered in this study are the cutting force (F_c), feed force (F_f), depth force (F_d), tool wear (V_b), average surface roughness (R_a) and peak-to-valley height (R_t).

Denoting the turning parameters by U i, the normalization with respect to maximum values is performed as:
$$ U_{{i}}^{*} = \frac{{U_{{i}} }}{{U_{{i}}^{\max } }},\;U_{{i}}^{\max } = Maximum\,\left( {U_{{i}}^{{j}} ;j = 1, \ldots ,\mathop N\limits^{ - } } \right) $$
(7.22)
where the superscript j refers to the individual experiments and \( \mathop N\limits^{ - } \) is the total number of experimental values. The given problem is a multi-objective optimization, which involves the minimization of several machining performance measures. The total utility function in the current study is given by:
$$ \mathop U\limits^{ - } (v, f, t) = \sum\limits_{i = 1}^{6} {\omega_{{i}} } U_{{i}}^{*} $$
(7.23)
where, \( \omega_{{i}} \) is scalar weighting factor associated with ith objective. As stated in Antonio and Davim [14], the present problem is a multi-criteria optimization with contradictory objectives and the formulation of the problem is given by:
$$ {\text{Maximize}},\;F = \frac{{t^{*} }}{{\mathop U\limits^{ - } (v, f, t)}} $$
(7.24)

subject to the constraints V_b < 0.3 mm and R_a < 1 μm,

where \( t^{*} = \frac{t}{{t^{\max } }} \) is the normalized cutting time.

The fitness function incorporates the above constraints, which are applied via penalty functions. The penalization of individuals for which the constraints are violated is carried out by:
$$ U_{{i}}^{pen} = U_{{i}} + kd^{n} $$
(7.25)
where U_i is the unpenalized fitness, \( U_{{i}}^{pen} \) is the penalized fitness, d is the difference between the actual and allowable values of the design constraint, and k and n are constants to be determined [14]. The effectiveness of the penalties relates to their magnitude relative to the maximum fitness \( U_{{i}}^{\max } \) in Eq. 7.22. To avoid heavily penalized individuals remaining in the population, special attention is given to the constraint violation that can be tolerated (d_0) and the lowest value of the difference that attracts a severe penalty (d_1). If p_0 and p_1 are the penalties corresponding to d_0 and d_1, then
$$ p_{0} = \varepsilon {}_{0}U_{{i}}^{\max } $$
(7.26)
$$ p_{1} = \varepsilon {}_{1}U_{{i}}^{\max } $$
(7.27)
where \( \varepsilon_{i} \) is a percentage of the maximum value of the cutting parameter.
The fitness function corresponding to the current optimization problem is given by:
$$ FIT = \frac{{t^{*} }}{{\omega_{1} F_{\text{c}}^{*} + \omega_{2} F_{\text{f}}^{*} + \omega_{3} F_{\text{d}}^{*} + \omega_{4} (V_{\text{b}}^{*} )^{pen} + \omega_{5} (R_{\text{a}}^{*} )^{pen} + \omega_{6} (R_{\text{t}}^{*} )}} $$
(7.28)
where \( \left( {V_{b}^{*} } \right)^{pen} \) and \( \left( {R_{a}^{*} } \right)^{pen} \) are the penalized cutting parameters related to the degree of violation of the imposed constraints, obtained from Eq. 7.25 and the normalization proposed in Eq. 7.22. Here, equal contributions from each cutting parameter are considered in the fitness function of Eq. 7.28.
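A much-simplified sketch of how such a penalized fitness could be evaluated is given below. It is not the authors' implementation: the penalty form is condensed, and all numbers, maxima and variable names are hypothetical placeholders used only to show the structure of Eqs. 7.22–7.28.

```python
# Minimal, simplified sketch of a penalized fitness in the spirit of
# Eqs. 7.22-7.28: normalize each output by its maximum, add a penalty when a
# constraint (e.g. V_b < 0.3 mm, R_a < 1 um) is violated, then divide the
# normalized cutting time by the weighted sum. All values are hypothetical.

def normalize(value, max_value):                        # Eq. 7.22
    return value / max_value

def penalized(norm_value, violation, u_max, eps=0.05):  # condensed Eqs. 7.25-7.27
    # add a penalty proportional to the constraint violation d (0 if satisfied)
    return norm_value + eps * u_max * max(violation, 0.0)

def fitness(t, outputs, maxima, violations, weights):   # structure of Eq. 7.28
    t_star = t / maxima["t"]
    denom = 0.0
    for key, w in weights.items():
        u_star = normalize(outputs[key], maxima[key])
        u_star = penalized(u_star, violations.get(key, 0.0), 1.0)
        denom += w * u_star
    return t_star / denom

# Hypothetical single evaluation with equal weights for the six outputs
outputs = {"Fc": 300.0, "Ff": 120.0, "Fd": 90.0, "Vb": 0.25, "Ra": 0.8, "Rt": 4.0}
maxima = {"Fc": 600.0, "Ff": 250.0, "Fd": 200.0, "Vb": 0.35, "Ra": 1.5, "Rt": 8.0, "t": 39.0}
violations = {"Vb": outputs["Vb"] - 0.3, "Ra": outputs["Ra"] - 1.0}  # negative => satisfied
weights = {k: 1.0 / 6.0 for k in outputs}
print(f"Fitness = {fitness(19.0, outputs, maxima, violations, weights):.3f}")
```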

The details of the genetic operations, namely, selection, crossover and mutation, are given in Antonio and Davim [14]. A population of eight chromosomes is considered in the GA; the search is based on the best solutions with N_A = 2, and N_c = 1 is used for the mutation operator. The constraints are penalized with ε_0 = 1% and ε_1 = 5%. The constraint violations considered are: for V_b, d_0 = 0.003 mm and d_1 = 0.03 mm; for R_a, d_0 = 0.05 μm and d_1 = 0.1 μm. The optimal cutting conditions are found to be a cutting speed (v) of 350 m/min, a feed (f) of 0.1 mm/rev and a cutting time (t) of 19 min. It was concluded that the optimal cutting conditions of the turning operation obtained through GA optimization demonstrate the potential of the mixed numerical–experimental model.

7.4 Conclusion

Modeling based on RSM and ANN has been extensively used in machining processes. RSM coupled with DOE is a collection of mathematical and statistical techniques that not only reduces cost and time, but also gives the required information about the main and interaction effects of the factors with a minimum number of experiments. On the other hand, the ANN is a powerful modeling tool that mainly deals with complex and nonlinear problems and can provide accurate results in machining processes.

Two case studies based on modeling approaches involving machinability investigations during turning of MMC with a PCD tool were presented. The first investigation used RSM modeling to study the influence of cutting speed and feed on machining force, cutting power and specific cutting force; the two-factor interaction effects on the machinability characteristics were studied by generating 3D response surface plots. ANN-based modeling for predicting the surface roughness during turning of MMC is detailed in the second case study, where an MLP model trained by EBPTA captures the relationship between cutting speed, feed, depth of cut and surface roughness.

The main advantage of Taguchi robust design (TRD) is to make the process performance measure less sensitive to noise factors. The TRD employs OA for conducting the experiments and signal-to-noise (S/N) ratio as the objective function for optimization. In Taguchi method, analysis of means (ANOM) is used to identify the optimum level factor combinations and ANOVA is employed to estimate the relative significance of each factor on performance measure. The case study demonstrating the application of Taguchi method for determining the optimal process parameter settings of cutting speed, feed rate and depth of cut to minimize the surface roughness during turning of MMC is presented in this chapter.

The conventional optimization techniques, such as gradient-based methods, do not function effectively and are not suitable for solving multi-objective optimization problems. Hence, a number of heuristic algorithms such as GA, SA, ACO and PSO have been proposed for obtaining optimal solutions to multi-objective problems. The multi-objective optimization in machining of MMC using the GA concept is illustrated in this chapter; it consists of a stochastic direct-search strategy that simulates the genetic process of evolution.


Acknowledgements

The authors would like to thank Elsevier and SAGE publications for granting permission for re-use of the published materials.

References

  1. Montgomery DC (2004) Design and analysis of experiments. Wiley, New York
  2. Myers RH, Montgomery DC, Anderson-Cook CM (2009) Response surface methodology. Wiley, New Jersey
  3. Gaitonde VN, Karnik SR, Davim JP (2009) Design of experiments. In: Ozel T, Davim J (eds) Intelligent machining: modeling and optimization of the machining processes and systems. Wiley, USA, pp 215–243
  4. Phadke MS (1989) Quality engineering using robust design. Prentice Hall, Englewood Cliffs, NJ
  5. Satishkumar S, Asokan P, Kumanan S (2006) Optimization of depth of cut in multi-pass turning using nontraditional optimization techniques. Int J Adv Manuf Technol 29:230–238
  6. Gaitonde VN, Karnik SR, Davim JP (2009) Some studies in metal matrix composites machining using response surface methodology. J Reinforc Plast Compos 28(20):2445–2457
  7. Schalkoff GB (1997) Artificial neural network. McGraw-Hill, Singapore
  8. Muthukrishnan N, Davim JP (2009) Optimization of machining parameters of Al/SiC-MMC with ANOVA and ANN analysis. J Mater Process Technol 209:225–232
  9. Ross PJ (1996) Taguchi techniques for quality engineering. McGraw-Hill, Singapore
  10. Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley, New York
  11. Deb K (1995) Optimization for engineering design: algorithms and examples. Prentice-Hall, New York
  12. Dorigo M (1996) The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B 26(1):1–13
  13. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks. University of Western Australia, Perth, Western Australia, pp 1942–1948
  14. Antonio CAC, Davim JP (2002) Optimal cutting conditions in turning of particulate metal matrix composites based on experiment and a genetic search model. Composites Part A 33:213–219

Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  • V. N. Gaitonde, Department of Industrial and Production Engineering, B. V. B. College of Engineering and Technology, Hubli, India
  • S. R. Karnik, Department of Electrical and Electronics Engineering, B. V. B. College of Engineering and Technology, Hubli, India
  • J. Paulo Davim, Department of Mechanical Engineering, University of Aveiro, Aveiro, Portugal
