
Automotive Innovation, Volume 1, Issue 4, pp 381–389

Optimizing Neural Network Parameters Using Taguchi’s Design of Experiments Approach: An Application for Equivalent Stress Prediction Model of Automobile Chassis

  • Tushar M. Patel
  • Nilesh M. Bhatt

Abstract

Artificial neural networks (ANNs) have been successfully applied to problems in different fields, including medicine, management, and manufacturing. One major disadvantage of ANNs is that there is no systematic approach to model design. Most of the literature suggests a trial-and-error method for parameter setting, which is time-consuming. The accuracy of an ANN model greatly depends on the network parameter settings, including the number of neurons, momentum, learning rate, transfer function, and training algorithm. In this paper, we apply Taguchi’s design of experiments approach to determine the optimum set of parameters for an ANN trained using feed-forward back-propagation. We present a case study of an equivalent stress prediction model for an automobile chassis to demonstrate the implementation of the approach. After training the network, the optimum values of the ANN parameters are determined according to the performance statistics. The ANN whose parameters are optimized using the Taguchi method performs better than an ANN with randomly chosen parameter values.

Keywords

Artificial neural network · Model selection · ANN parameters · Neural network optimization · Taguchi method

1 Introduction

Most of the literature related to ANNs focuses on the results for specific applications rather than on the methodology of developing and training the networks. In general, the ANN parameters, including the learning rate, the number of hidden nodes and hidden layers, and the transfer functions, are set during the training process. These settings are crucial to the performance of the ANN model. The trial-and-error method is typically used to determine appropriate values of these parameters.

Patel and Bhatt [1] prepared an ANN model using the results of FEA. For training the ANN model, the standard back-propagation algorithm was observed to be the best. A multilayer perceptron network was used for nonlinear mapping between the input and the output parameters. The FEA–ANN hybrid model can save material, production cost, and time [1]. Lefik and Schrefler [2] proposed a back-propagation ANN as a tool to numerically model the constitutive behavior of a physically nonlinear body. They demonstrated that the model was applicable even in the case of complex nonlinear, inelastic behavior [2]. Rao and Babu [3] demonstrated the applicability of ANNs for the design of beams subjected to moment and shear forces. Gudur and Dixit [4] used an ANN to predict the location of the neutral point and the velocity field. The training data were obtained from a rigid–plastic FEA code. This procedure provided highly accurate solutions and was suitable for optimization applications [4]. Castellani and Rowlands [5] proposed the evolutionary artificial neural network generation and training (ANNGaT) algorithm to formulate an ANN system. The algorithm simultaneously evolved the ANN topology and the weights. The results showed no differences in accuracy between ANN architectures with one or two hidden layers [5]. Sholahudin and Han [6] developed a method to predict the instantaneous building energy load for various combinations of input parameters using a dynamic neural network model. The results of this study demonstrated that Taguchi’s method could successfully reduce the number of input parameters. Moreover, the dynamic neural network model could precisely predict instantaneous heating loads using a reduced number of inputs [6]. Patel and Bhatt [7] optimized the weight of the Eicher 11.10 chassis frame. To reduce the number of experiments, they used the Taguchi method along with FEA. This method could minimize the materials used, the production cost, and time [7]. Patel and Bhatt [8] developed a mathematical model of the von Mises stress (VMS) to optimize the weight of the chassis frame using FEA–response surface methodology (RSM) hybrid modeling. The regression equation for the VMS was developed using the FEA results of different variants of the chassis frame [8]. Patel and Bhatt [9] compared the prediction accuracy of the RSM and multiple linear regression (MLR) models for the equivalent stress of the chassis frame. The results indicated that the predictions of the RSM were more accurate than those of the MLR model [9]. Stojanović et al. [10] used Taguchi’s method to investigate the tribological behavior of aluminum hybrid composites and used an ANN to predict the wear rate and the coefficient of friction [10].

The performance of an ANN depends on the network training parameters and the network architectural parameters [11]. In general, a standard ANN model that is applicable to every problem has not been established. For this reason, the appropriate parameter values must be determined experimentally for each problem. Different statistical methods have been used to find suitable ANN parameter values. The Taguchi method is a statistical technique used to study the relationship between the factors affecting a process and the outputs of the process. It can be used to systematically identify the parameter settings that optimize the output. Several authors have applied the Taguchi design of experiments (DOE) methodology for ANN parameter setting [11, 12, 13, 14, 15, 16, 17, 18, 19].

Tortum et al. [11] optimized the data transformation, the percentage of data used for training, the number of neurons in the first and second layers, and the activation function to increase the accuracy of the back-propagation algorithm [11]. Packianather et al. [12] studied the effect of design variables on back-propagation neural network (BPNN) performance for wood veneer inspection. Roy [13] described using the Taguchi method to optimize the design factors of an ANN. Kuo and Wu [14] prepared a prediction model for polymer blends using a BPNN along with Taguchi’s method to remedy the deficiencies of the network architecture design. The objective of the ANN prediction model was to identify the relationship between the control parameter settings and the surface roughness in the film coating process [14]. Sukthomya and Tannock [15] applied Taguchi’s method to optimize the ANN parameters of a multilayer perceptron network trained with the back-propagation algorithm in a case study of a complex forming process. Laosiritaworn and Chotchaithanakorn [16] studied the optimum settings of an ANN trained to model ferromagnetic material data. They optimized the number of neurons in the first and second layers, the learning rate, and the momentum. The authors suggested that this optimization procedure should be performed for each ANN application because the significant parameters vary for different purposes [16]. Jung and Yum [17] developed a dynamic parameter design approach for ANNs to optimize the number of neurons in the first and second layers, the learning rate, and the momentum. Madić and Radovanović [18] applied Taguchi’s DOE method to optimize an ANN model trained using the Levenberg–Marquardt algorithm. A high prediction accuracy was achieved by the Taguchi-optimized ANN model [18]. Kazancoglu et al. [19] suggested using Taguchi’s method with a BPNN to minimize the surface roughness in a wire electrical discharge machining process. The predicted values closely approached the experimental values [19]. Moosavi et al. [20] assessed different factors affecting the performance of wavelet–ANN and wavelet–ANFIS hybrid models. Each of the models entailed several levels, and the optimum structures for both models were determined using Taguchi’s method [20]. Adalarasan et al. [21] studied the drilling characteristics of second-generation hybrid composites. The experimental trials were designed using an L18 orthogonal array, and a Taguchi-based response surface method was used for optimizing the drilling parameters [21]. Khoualdia et al. [22] proposed a monitoring and diagnosis system based on a neural network model for the prediction of combined gear–bearing faults. The time domain parameters and the binary codes of defects were used as input and output data to train and test the neural network model. The Taguchi standard orthogonal array and the Grey–Taguchi method were used as multi-objective optimization approaches to determine the best neural network model architecture [22]. Padhi et al. [23] fabricated intricate parts using fused deposition modeling (FDM). They used a fuzzy inference system combined with Taguchi’s method to generate a single response from three responses. They also used Taguchi’s method with artificial neural networks to evaluate the dimensional accuracy of the FDM fabricated parts subjected to various operating conditions. The predicted values obtained from both models were consistent with the experimental data [23]. Sahare et al. [24] optimized the end milling process for Al2024-T4 workpiece material. The input process parameters were the cutting speed, feed per tooth, depth of cut, and cutting fluid flow rate. The response parameters were the surface roughness, cutting force, and material removal rate. They used the Taguchi L9 orthogonal array for the experimentation and performed regression analysis using an ANN to obtain the optimal settings of the end milling process. The obtained results demonstrated that the ANN combined with Taguchi’s method was suitable for optimization [24].

2 ANN Parameter Optimization

To demonstrate ANN parameter optimization using Taguchi’s design of experiments approach, we use the equivalent stress on a vehicle chassis as a case study.

2.1 Experimental Data

A DOE is prepared using specially constructed tables called orthogonal arrays (OA). The use of these tables facilitates consistent experimentation. Twenty-five parameter-value combinations for the chassis are modeled in Creo 3.0, and the equivalent stress is measured for each combination using FEA in ANSYS. The resulting data used to train the ANN models are given in Table 1.
Table 1 FEA experimental datasets for ANN training [1]

Sr. no. | Web thickness (mm) | Upper flange thickness (mm) | Lower flange thickness (mm) | Equivalent stress (N/mm²)
1 | 3 | 3 | 3 | 155.010
2 | 3 | 4 | 4 | 128.200
3 | 3 | 5 | 5 | 118.160
4 | 3 | 6 | 6 | 115.500
5 | 3 | 7 | 7 | 103.390
6 | 4 | 3 | 4 | 118.570
7 | 4 | 4 | 5 | 112.420
8 | 4 | 5 | 6 | 102.610
9 | 4 | 6 | 7 | 96.970
10 | 4 | 7 | 3 | 131.150
11 | 5 | 3 | 5 | 108.040
12 | 5 | 4 | 6 | 97.007
13 | 5 | 5 | 7 | 93.031
14 | 5 | 6 | 3 | 121.770
15 | 5 | 7 | 4 | 108.200
16 | 6 | 3 | 6 | 97.780
17 | 6 | 4 | 7 | 88.380
18 | 6 | 5 | 3 | 110.080
19 | 6 | 6 | 4 | 97.607
20 | 6 | 7 | 5 | 88.780
21 | 7 | 3 | 7 | 87.279
22 | 7 | 4 | 3 | 100.250
23 | 7 | 5 | 4 | 93.288
24 | 7 | 6 | 5 | 86.179
25 | 7 | 7 | 6 | 82.599

The ANN consists of three neurons in the input layer that correspond to three topological parameters of the automobile chassis sidebar (thickness of web, thickness of upper flange, thickness of lower flange), and one neuron in the output layer that corresponds to the equivalent von Mises stress (VMS).
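Although the paper does not list its preprocessing steps in code, the normalized values appearing later in Table 6 are consistent with min–max scaling of the Table 1 data to [0, 1]. The following sketch (Python/NumPy, with illustrative variable names, not the authors’ code) shows this scaling on three representative rows:

```python
# A minimal sketch, assuming min-max scaling, which matches the normalized
# factor/stress values in Table 6. Only three representative rows of Table 1
# are shown; in practice all 25 rows would be scaled.
import numpy as np

# web, upper flange, lower flange (mm) -> equivalent stress (N/mm^2)
X = np.array([[3.0, 3.0, 3.0],
              [3.0, 7.0, 7.0],
              [7.0, 7.0, 6.0]])
y = np.array([155.010, 103.390, 82.599])

X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
y_norm = (y - y.min()) / (y.max() - y.min())
print(y_norm)   # [1.0, 0.2871, 0.0], matching the corresponding Table 6 targets
```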

2.2 Neural Network Parameters

From existing literature, the design control parameters that influence ANN performance are identified and divided into two groups.
  1. ANN architectural parameters: number of neurons in the hidden layer, training algorithm, and transfer functions in the hidden and output layers;
  2. ANN learning parameters: learning rate, momentum, and increment and decrement factors.
A random initialization of weights is considered as a noise factor. The ANN design parameters and the corresponding levels are given in Table 2.
Table 2 ANN training and architectural parameters and levels

Parameter | Parameter description | Level 1 | Level 2 | Level 3
A | Training algorithm | Trainscg | Trainlm | –
B | Transfer function in hidden layer | Purelin | Logsig | Tansig
C | Transfer function in output layer | Purelin | Logsig | Tansig
D | Increment factor | 5 | 10 | 15
E | Decrement factor | 0.05 | 0.1 | 0.2
F | Learning rate (μ) | 0.001 | 0.01 | 0.1
G | Momentum | 0.1 | 0.3 | 0.5
H | Hidden neurons in first layer | 2 | 6 | 10

This design problem involves eight main parameters, where one parameter has two levels and the remaining seven parameters each have three levels. Considering all of the possible combinations of the eight parameters entails a total of 2¹ × 3⁷ = 4374 different experiment sets.

2.3 Taguchi Design of Experiments

Testing all of the above combinations is impractical. Using Taguchi’s OA approach, we can greatly reduce the number of experiments to conduct. The total degrees of freedom (DOF) for the eight parameters of the ANN is 1 + (7 × 2) = 15. Thus, we use a mixed L18 (2¹ × 3⁷) OA for the experimentation because it has 17 DOFs, which is more than the DOF of the ANN design parameters. Our experimental parameter sets correspond to the rows of the L18 OA given in Table 3. Thus, we conduct a total of 18 experiments to evaluate the sensitivity of the ANN performance [13]. The main objective of the proposed Taguchi-based optimization method is to develop a robust and accurate ANN model by selecting the training and architectural parameters that yield the best performance.
Table 3 Taguchi’s orthogonal array

Exp. no. | A | B | C | D | E | F | G | H
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
2 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2
3 | 1 | 1 | 3 | 3 | 3 | 3 | 3 | 3
4 | 1 | 2 | 1 | 1 | 2 | 2 | 3 | 3
5 | 1 | 2 | 2 | 2 | 3 | 3 | 1 | 1
6 | 1 | 2 | 3 | 3 | 1 | 1 | 2 | 2
7 | 1 | 3 | 1 | 2 | 1 | 3 | 2 | 3
8 | 1 | 3 | 2 | 3 | 2 | 1 | 3 | 1
9 | 1 | 3 | 3 | 1 | 3 | 2 | 1 | 2
10 | 2 | 1 | 1 | 3 | 3 | 2 | 2 | 1
11 | 2 | 1 | 2 | 1 | 1 | 3 | 3 | 2
12 | 2 | 1 | 3 | 2 | 2 | 1 | 1 | 3
13 | 2 | 2 | 1 | 2 | 3 | 1 | 3 | 2
14 | 2 | 2 | 2 | 3 | 1 | 2 | 1 | 3
15 | 2 | 2 | 3 | 1 | 2 | 3 | 2 | 1
16 | 2 | 3 | 1 | 3 | 2 | 3 | 1 | 2
17 | 2 | 3 | 2 | 1 | 3 | 1 | 2 | 3
18 | 2 | 3 | 3 | 2 | 1 | 2 | 3 | 1
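For illustration, the L18 array of Table 3 can be encoded directly and mapped onto the parameter levels of Table 2 to enumerate the 18 training configurations. This is a sketch with illustrative names, not the authors’ code:

```python
# A minimal sketch: the L18 (2^1 x 3^7) orthogonal array of Table 3 as an
# index array, mapped onto the parameter levels of Table 2.
import numpy as np

L18 = np.array([
    [1,1,1,1,1,1,1,1], [1,1,2,2,2,2,2,2], [1,1,3,3,3,3,3,3],
    [1,2,1,1,2,2,3,3], [1,2,2,2,3,3,1,1], [1,2,3,3,1,1,2,2],
    [1,3,1,2,1,3,2,3], [1,3,2,3,2,1,3,1], [1,3,3,1,3,2,1,2],
    [2,1,1,3,3,2,2,1], [2,1,2,1,1,3,3,2], [2,1,3,2,2,1,1,3],
    [2,2,1,2,3,1,3,2], [2,2,2,3,1,2,1,3], [2,2,3,1,2,3,2,1],
    [2,3,1,3,2,3,1,2], [2,3,2,1,3,1,2,3], [2,3,3,2,1,2,3,1],
]) - 1  # convert to zero-based level indices

levels = {
    "A_train_algo": ["trainscg", "trainlm"],
    "B_hidden_tf":  ["purelin", "logsig", "tansig"],
    "C_output_tf":  ["purelin", "logsig", "tansig"],
    "D_mu_inc":     [5, 10, 15],
    "E_mu_dec":     [0.05, 0.1, 0.2],
    "F_learn_rate": [0.001, 0.01, 0.1],
    "G_momentum":   [0.1, 0.3, 0.5],
    "H_hidden_n":   [2, 6, 10],
}

# one dict of concrete parameter values per experiment (row) of the L18 OA
runs = [{k: v[i] for (k, v), i in zip(levels.items(), row)} for row in L18]
print(runs[0])   # experiment 1: every parameter at its level 1
```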

The performance of the ANN is measured using the performance index (PI) given by Eq. 1 [18]:

$$\text{PI} = R - E_{\text{rms}}$$

(1)

where R is the correlation coefficient between the ANN predictions and the experimental values over the whole dataset and E_rms is the root mean squared error over the whole dataset. The PI is a larger-the-better type response; thus, the S/N ratio to be maximized is given by Eq. 2 [13]:

$$S/N = -10\log_{10}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_{i}^{2}}\right)$$

(2)

where y_i is the i-th observation of the response and n is the number of observations.
The values of the PI and the S/N ratios for each experiment set are given in Table 4.
Table 4 Taguchi’s analysis

Exp. no. | MSE (N/mm²)² | R² | PI | S/N ratio
1 | 0.0045100 | 0.95566 | 0.91042 | −0.8151441
2 | 0.0031400 | 0.96966 | 0.92868 | −0.6427020
3 | 0.0061800 | 0.93285 | 0.88723 | −1.0392891
4 | 0.0000000 | 1.00000 | 0.99984 | −0.0014034
5 | 0.0029600 | 0.97152 | 0.93125 | −0.6186625
6 | 0.0000013 | 0.99999 | 0.99884 | −0.0101039
7 | 0.0000000 | 1.00000 | 0.99994 | −0.0005309
8 | 0.0004300 | 0.99587 | 0.97720 | −0.2003626
9 | 0.0000711 | 0.99932 | 0.99123 | −0.0765300
10 | 0.0045100 | 0.95566 | 0.91042 | −0.8151441
11 | 0.0031400 | 0.96966 | 0.92868 | −0.6427020
12 | 0.0068100 | 0.93286 | 0.88332 | −1.0775989
13 | 0.0000000 | 1.00000 | 1.00000 | −0.0000374
14 | 0.0000000 | 1.00000 | 0.99994 | −0.0005452
15 | 0.0006950 | 0.99331 | 0.97029 | −0.2619999
16 | 0.0000000 | 1.00000 | 1.00000 | −0.0000336
17 | 0.0000000 | 1.00000 | 0.99999 | −0.0000867
18 | 0.0006950 | 0.99331 | 0.97029 | −0.2619999
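As a sketch of Eqs. 1 and 2 (illustrative names, not the authors’ code; with a single PI observation per run, n = 1, Eq. 2 reduces to 20 log₁₀ PI, which reproduces the S/N values in Table 4):

```python
# A minimal sketch of the performance index (Eq. 1) and its
# larger-the-better S/N ratio (Eq. 2).
import numpy as np

def performance_index(r, e_rms):
    return r - e_rms                                 # Eq. 1: PI = R - E_rms

def sn_larger_is_better(y):
    y = np.atleast_1d(np.asarray(y, dtype=float))
    return -10 * np.log10(np.mean(1.0 / y**2))       # Eq. 2

print(sn_larger_is_better(0.91042))   # -> -0.8151, experiment 1 in Table 4
```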

3 Analysis of Results

The mean data and S/N ratio data for different parameters are shown in Figs. 1 and 2, respectively. The highest performance index corresponding to the optimal parameter setting is marked with a circle.
Fig. 1 Plot of means of performance index for different parameter settings

Fig. 2 S/N ratios of performance index for different parameter settings

Table 4 and Fig. 2 indicate that the optimal ANN parameter settings are A2B3C1D1E1F2G2H2. In other words, the optimal ANN model is trained with the Levenberg–Marquardt (LM) algorithm using μ = 0.01 as the initial learning rate, 0.3 as the momentum, μ− = 0.05 as the decrement factor, and μ+ = 5 as the increment factor. It uses the tansig transfer function in the hidden layer and the purelin transfer function in the output layer, and has six neurons in the hidden layer.

Table 5 shows that the transfer function used in the hidden layer has the greatest influence on the predicted equivalent stress of the chassis frame while the training algorithm used has the least influence. The parameters in decreasing order of influence are as follows: transfer function in the hidden layer, number of hidden neurons in the first layer, transfer function in the output layer, momentum, decrement factor, increment factor, learning rate, and training algorithm.
Table 5 Influence of various parameters (mean PI per level)

Level | A | B | C | D | E | F | G | H
1 | 0.9583 | 0.9081 | 0.9701 | 0.9667 | 0.9680 | 0.9616 | 0.9527 | 0.9450
2 | 0.9625 | 0.9834 | 0.9610 | 0.9522 | 0.9599 | 0.9667 | 0.9680 | 0.9746
3 | – | 0.9898 | 0.9502 | 0.9623 | 0.9534 | 0.9529 | 0.9605 | 0.9617
Delta | 0.0042 | 0.0817 | 0.0199 | 0.0145 | 0.0146 | 0.0138 | 0.0153 | 0.0296
Rank | 8 | 1 | 3 | 6 | 5 | 7 | 4 | 2
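The main-effects analysis behind Table 5 averages the PI per parameter level and ranks the parameters by the range (delta) of those level means. A sketch, reusing the `L18` array from the Sect. 2.3 sketch and the PI column of Table 4 (illustrative names, not the authors’ code):

```python
# A minimal sketch of the Taguchi main-effects (delta/rank) analysis.
# Assumes `L18` (zero-based level indices) from the earlier sketch.
import numpy as np

pi = np.array([0.91042, 0.92868, 0.88723, 0.99984, 0.93125, 0.99884,
               0.99994, 0.97720, 0.99123, 0.91042, 0.92868, 0.88332,
               1.00000, 0.99994, 0.97029, 1.00000, 0.99999, 0.97029])

deltas = []
for col in range(L18.shape[1]):                       # parameters A..H
    means = [pi[L18[:, col] == lv].mean() for lv in np.unique(L18[:, col])]
    deltas.append(max(means) - min(means))            # range of level means

rank = np.argsort(np.argsort(-np.array(deltas))) + 1  # 1 = most influential
print(dict(zip("ABCDEFGH", rank)))                    # expected to match Table 5: B first, A last
```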

4 Confirmation Experiment

The confirmation experiment is an essential step of Taguchi’s method. There is no need to run the confirmation test if the optimal set of parameters is already included in the OA. However, the best design identified in this experiment is not included in the OA, and therefore, a confirmation test is required. The optimal ANN model is developed and trained using the A2B3C1D1E1F2G2H2 parameter value combination, and its performance is tested.

Figure 3 shows the training settings of the LM6TP model. The ANN with back-propagation uses the LM algorithm for training and has six neurons in the hidden layer. It uses the MSE to measure performance, the tansig transfer function between the input and hidden layers, and the purelin transfer function between the hidden and output layers.
Fig. 3 LM6TP model training
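The training setup suggests MATLAB’s trainlm; as a minimal NumPy sketch of the same idea (not the authors’ code, and a simplified stand-in for trainlm), the following trains a 3–6–1 tansig/purelin network with a basic Levenberg–Marquardt loop using the optimal settings μ = 0.01, μ+ = 5, μ− = 0.05. All names are illustrative:

```python
# A minimal sketch, assuming a simplified LM update with multiplicative
# mu adaptation; 35 epochs mirrors the training run described below.
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta):
    W1 = theta[:18].reshape(6, 3)       # hidden-layer weights (6 neurons x 3 inputs)
    b1 = theta[18:24]                   # hidden-layer biases
    W2 = theta[24:30].reshape(1, 6)     # output-layer weights
    b2 = theta[30]                      # output-layer bias
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)          # tansig hidden layer
    return (h @ W2.T).ravel() + b2      # purelin output layer

def residuals(theta, X, t):
    return forward(theta, X) - t

def jacobian(theta, X, t, eps=1e-6):
    # finite-difference Jacobian of the residuals w.r.t. all 31 weights/biases
    r0 = residuals(theta, X, t)
    J = np.empty((r0.size, theta.size))
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (residuals(theta + d, X, t) - r0) / eps
    return J

def train_lm(X, t, mu=0.01, mu_inc=5.0, mu_dec=0.05, epochs=35):
    theta = rng.normal(scale=0.5, size=31)   # random initial weights (the noise factor)
    for _ in range(epochs):
        r = residuals(theta, X, t)
        J = jacobian(theta, X, t)
        step = np.linalg.solve(J.T @ J + mu * np.eye(theta.size), J.T @ r)
        if np.mean(residuals(theta - step, X, t) ** 2) < np.mean(r ** 2):
            theta -= step
            mu *= mu_dec                 # error decreased: trust the Gauss-Newton step more
        else:
            mu *= mu_inc                 # error increased: fall back toward gradient descent
    return theta

# Usage (illustrative): theta = train_lm(X_norm, y_norm) with the scaled
# Table 1 data from the sketch in Sect. 2.1.
```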

Figure 4 shows the training performance (MSE) graph of the LM6TP model with respect to the epoch. The training stopped after 35 epochs because the performance goal was achieved. This graph is a useful diagnostic tool for inspecting the progress of training. After the initial training, the LM6TP model is retrained for 35 epochs, achieving an MSE of 9.3083 × 10⁻⁶ at the end of training.
Fig. 4 LM6TP model training performance graph

Test data that differ from the training data previously presented to the network are used to investigate the prediction accuracy of the ANN. For the evaluation, we use the following statistical measures: the mean squared error (MSE), the root mean squared error (RMSE), and the coefficient of multiple determination (R²), determined by Eqs. 3, 4, and 5, respectively. Table 6 summarizes the error rates for the training and testing data separately.
$$\text{MSE} = \frac{1}{n}\sum_{j=1}^{n}\left|a_{j} - p_{j}\right|^{2}$$

(3)

$$\text{RMSE} = \left[\frac{1}{n}\sum_{j=1}^{n}\left|a_{j} - p_{j}\right|^{2}\right]^{1/2}$$

(4)

$$R^{2} = 1 - \frac{\sum_{j=1}^{n}\left(a_{j} - p_{j}\right)^{2}}{\sum_{j=1}^{n}p_{j}^{2}}$$

(5)

where a_j is the actual (target) value, p_j is the predicted value, and n is the number of data points.
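A direct transcription of Eqs. 3–5 (a sketch with illustrative names; note that Eq. 5 normalizes by the sum of squared predictions rather than the target variance, so it differs from the conventional coefficient of determination):

```python
# A minimal sketch of the evaluation metrics of Eqs. 3-5.
import numpy as np

def mse(a, p):
    return np.mean(np.abs(a - p) ** 2)                 # Eq. 3

def rmse(a, p):
    return np.sqrt(mse(a, p))                          # Eq. 4

def r_squared(a, p):
    return 1 - np.sum((a - p) ** 2) / np.sum(p ** 2)   # Eq. 5, as defined in the paper
```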
Table 6 ANN error rates for training and testing data (factor and stress values are min–max normalized to [0, 1])

Sr. no. | Web | Upper flange | Lower flange | Target equivalent stress | Predicted equivalent stress | Error

Training data (MSE = 9.31 × 10⁻⁶, RMSE = 0.0031, R² = 0.9999):

1 | 0 | 0 | 0 | 1.0000 | 0.9991 | 0.0009
2 | 0 | 0.25 | 0.25 | 0.6298 | 0.6316 | −0.0018
3 | 0 | 0.5 | 0.5 | 0.4911 | 0.4927 | −0.0016
4 | 0 | 0.75 | 0.75 | 0.4544 | 0.4542 | 0.0002
5 | 0 | 1 | 1 | 0.2871 | 0.2902 | −0.0030
6 | 0.25 | 0 | 0.25 | 0.4968 | 0.4972 | −0.0004
7 | 0.25 | 0.25 | 0.5 | 0.4118 | 0.4085 | 0.0033
8 | 0.25 | 0.5 | 0.75 | 0.2764 | 0.2769 | −0.0005
9 | 0.25 | 0.75 | 1 | 0.1985 | 0.1952 | 0.0033
10 | 0.25 | 1 | 0 | 0.6705 | 0.6705 | 0.0000
11 | 0.5 | 0 | 0.5 | 0.3513 | 0.3528 | −0.0014
12 | 0.5 | 0.25 | 0.75 | 0.1990 | 0.1995 | −0.0006
13 | 0.5 | 0.5 | 1 | 0.1441 | 0.1440 | 0.0001
14 | 0.5 | 0.75 | 0 | 0.5410 | 0.5427 | −0.0017
15 | 0.5 | 1 | 0.25 | 0.3536 | 0.3470 | 0.0065
16 | 0.75 | 0 | 0.75 | 0.2097 | 0.2104 | −0.0007
17 | 0.75 | 0.25 | 1 | 0.0798 | 0.0782 | 0.0017
18 | 0.75 | 0.5 | 0 | 0.3795 | 0.3893 | −0.0098
19 | 0.75 | 0.75 | 0.25 | 0.2073 | 0.2022 | 0.0051
20 | 0.75 | 1 | 0.5 | 0.0854 | 0.0920 | −0.0066
21 | 1 | 0 | 1 | 0.0646 | 0.0651 | −0.0005
22 | 1 | 0.25 | 0 | 0.2438 | 0.2357 | 0.0081
23 | 1 | 0.5 | 0.25 | 0.1476 | 0.1480 | −0.0004
24 | 1 | 0.75 | 0.5 | 0.0494 | 0.0496 | −0.0001
25 | 1 | 1 | 0.75 | 0.0000 | 0.0008 | −0.0008

Testing data (MSE = 2.94 × 10⁻⁶, RMSE = 0.0017, R² = 0.9999):

26 | 0 | 0 | 0.5 | 0.8014 | 0.8008 | 0.0006
27 | 0 | 0 | 0.75 | 0.7232 | 0.7256 | −0.0024
28 | 0 | 0.25 | 0.5 | 0.5592 | 0.5577 | 0.0015
29 | 0.25 | 0.25 | 0.75 | 0.3443 | 0.3447 | −0.0004
30 | 0.25 | 0.75 | 0.25 | 0.4313 | 0.4323 | −0.0010
31 | 0.25 | 0.75 | 0.75 | 0.3562 | 0.3542 | 0.0019
32 | 0.5 | 0.25 | 0.25 | 0.3944 | 0.3927 | 0.0017
33 | 0.5 | 0.5 | 0 | 0.5836 | 0.5821 | 0.0016
34 | 0.5 | 0.5 | 0.5 | 0.3254 | 0.3281 | −0.0027
35 | 0.5 | 1 | 0.5 | 0.3294 | 0.3293 | 0.0001
36 | 0.75 | 0.25 | 0.25 | 0.2238 | 0.2254 | −0.0016
37 | 0.75 | 0.25 | 0.75 | 0.1119 | 0.1147 | −0.0028
38 | 0.75 | 0.75 | 0.75 | 0.0769 | 0.0750 | 0.0020
39 | 1 | 0.5 | 0.5 | 0.0812 | 0.0822 | −0.0010

The training data MSE, RMSE, and R² values for the LM6TP architecture are 9.31 × 10⁻⁶, 0.0031, and 0.9999, respectively; the testing data values are 2.94 × 10⁻⁶, 0.0017, and 0.9999, respectively. This indicates that the ANN accurately predicts the equivalent stress for both the training and testing datasets, and there is no evidence of over-fitting because the results are similar for both datasets.

While the performance of an ANN can be measured by the errors on the training and testing datasets, it is often useful to investigate the network response further. One option is to perform a regression analysis between the network response and the corresponding targets. The LM6TP model output closely fits the supplied target values along a straight line, as shown in Fig. 5. This indicates that the LM6TP model is well suited for high-accuracy equivalent stress prediction.
Fig. 5 LM6TP model linear fitting in training and testing
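As a rough illustration of this regression check (a sketch, not the authors’ code), the following fits a line through a few target/prediction pairs taken from the testing block of Table 6; a slope near 1, an intercept near 0, and a correlation near 1 indicate a close linear fit:

```python
# A minimal sketch of the linear-fit check behind Fig. 5.
import numpy as np

# a few target/prediction pairs from the testing block of Table 6
targets     = np.array([0.8014, 0.7232, 0.5592, 0.3443, 0.4313])
predictions = np.array([0.8008, 0.7256, 0.5577, 0.3447, 0.4323])

slope, intercept = np.polyfit(targets, predictions, 1)
r = np.corrcoef(targets, predictions)[0, 1]
print(slope, intercept, r)   # expect slope ~ 1, intercept ~ 0, R ~ 1
```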

The equivalent stress predicted by the LM6TP model is compared with the actual values for training and testing in Fig. 6; the ANN-predicted results are very close to the actual values.
Fig. 6 Actual versus ANN-predicted values in training and testing

5 Conclusions

This study deals with the application of Taguchi’s method for the optimization of ANN parameters. Eight ANN architectural and training parameters were identified and arranged in an L18 OA. The analyses show that the transfer function in the hidden layer (B) has the most significant effect, and the training algorithm (A) the least significant effect, on the ANN prediction performance. This confirms the findings of Tortum et al. [11].
  1. The best ANN model architecture has six neurons in the hidden layer. The analysis shows that adding more neurons to the hidden layer adversely affects ANN performance. This finding further supports the conclusion of Madić and Radovanović [18] that too many neurons in the first hidden layer are undesirable when training ANNs with the LM algorithm.
  2. The optimal ANN model is trained with the LM algorithm using μ = 0.01 as the initial learning rate, 0.3 as the momentum, μ− = 0.05 as the decrement factor, and μ+ = 5 as the increment factor; it uses the tansig transfer function in the hidden layer and the purelin transfer function in the output layer, and has six neurons in the hidden layer.
  3. The MSE, RMSE, and R² values for the LM6TP architecture are 9.31 × 10⁻⁶, 0.0031, and 0.9999, respectively, in training, and 2.94 × 10⁻⁶, 0.0017, and 0.9999, respectively, for the randomly chosen testing datasets. This indicates that the ANN accurately predicts the equivalent stress for both datasets, and there is no evidence of over-fitting because the results are similar for both.
  4. Taguchi’s method can be successfully implemented in the design and training of ANNs to develop an optimized, high-performance ANN model with a comparatively small and time-saving experiment.
  5. The methodology presented in this paper can be applied to different ANN applications.

References

  1. Patel, T.M., Bhatt, N.M.: Development of FEA-ANN hybrid model for equivalent stress prediction of automobile structural member. Autom. Control Comput. Sci. 50(5), 293–305 (2016)
  2. Lefik, M., Schrefler, B.A.: Artificial neural network as an incremental non-linear constitutive model for a finite element code. Comput. Methods Appl. Mech. Eng. 192(28), 3265–3283 (2003)
  3. Rao, H.S., Babu, B.R.: Hybrid neural network model for the design of beam subjected to bending and shear. Sadhana 32(5), 577–586 (2007)
  4. Gudur, P.P., Dixit, U.S.: A neural network-assisted finite element analysis of cold flat rolling. Eng. Appl. Artif. Intell. 21(1), 43–52 (2008)
  5. Castellani, M., Rowlands, H.: Evolutionary artificial neural network design and training for wood veneer classification. Eng. Appl. Artif. Intell. 22(4), 732–741 (2009)
  6. Sholahudin, S., Han, H.: Simplified dynamic neural network model to predict heating load of a building using Taguchi method. Energy 115, 1672–1678 (2016)
  7. Patel, T.M., Bhatt, N.M.: FEM based Taguchi method to reduce the automobile structural member weight. GIT-J. Eng. Technol. 8, 1–10 (2015)
  8. Patel, T.M., Bhatt, N.M.: Development of a mathematical model of VMS for automotive structural member using FEA-RSM hybrid modeling. Procedia Technol. 23, 98–105 (2016)
  9. Patel, T.M., Bhatt, N.M.: RSM and MLR model for equivalent stress prediction of Eicher 11.10 chassis frame: a comparative study. In: Proceedings of First International Conference on Information and Communication Technology for Intelligent Systems, vol. 2, pp. 387–395. Springer, Cham (2016)
  10. Stojanović, B., Vencl, A., Bobić, I., et al.: Experimental optimisation of the tribological behaviour of Al/SiC/Gr hybrid composites based on Taguchi’s method and artificial neural network. J. Braz. Soc. Mech. Sci. Eng. 40(6), 311 (2018)
  11. Tortum, A., Yayla, N., Çelik, C., et al.: The investigation of model selection criteria in artificial neural networks by the Taguchi method. Physica A 386(1), 446–468 (2007)
  12. Packianather, M.S., Drake, P.R., Rowlands, H.: Optimizing the parameters of multilayered feedforward neural networks through Taguchi design of experiments. Qual. Reliab. Eng. Int. 16(6), 461–473 (2000)
  13. Roy, R.K.: Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement. Wiley, Hoboken (2001)
  14. Kuo, C.F.J., Wu, Y.S.: Application of a Taguchi-based neural network prediction design of the film coating process for polymer blends. Int. J. Adv. Manuf. Technol. 27(5–6), 455–461 (2006)
  15. Sukthomya, W., Tannock, J.: The optimisation of neural network parameters using Taguchi’s design of experiments approach: an application in manufacturing process modelling. Neural Comput. Appl. 14(4), 337–344 (2005)
  16. Laosiritaworn, W., Chotchaithanakorn, N.: Artificial neural networks parameters optimization design of experiments: an application in materials modeling. Chiang Mai J. Sci. 36(1), 83–91 (2009)
  17. Jung, J.R., Yum, B.J.: Artificial neural network based approach for dynamic parameter design. Expert Syst. Appl. 38(1), 504–510 (2011)
  18. Madić, M.J., Radovanović, M.R.: Optimal selection of ANN training and architectural parameters using Taguchi method: a case study. FME Trans. 39(2), 79–86 (2011)
  19. Kazancoglu, Y., Esme, U., Akkurt, A., et al.: Application of a Taguchi-based neural network for forecasting and optimization of the surface roughness in a wire-electrical-discharge machining process. Materiali in Tehnologije 46(5), 471–476 (2012)
  20. Moosavi, V., Vafakhah, M., Shirmohammadi, B., et al.: Optimization of wavelet-ANFIS and wavelet-ANN hybrid models by Taguchi method for groundwater level forecasting. Arab. J. Sci. Eng. 39(3), 1785–1796 (2014)
  21. Adalarasan, R., Santhanakumar, M., Thileepan, S.: Selection of optimal machining parameters in pulsed CO2 laser cutting of Al6061/Al2O3 composite using Taguchi-based response surface methodology (T-RSM). Int. J. Adv. Manuf. Technol. 93(1–4), 305–317 (2017)
  22. Khoualdia, T., Hadjadj, A.E., Bouacha, K., et al.: Multi-objective optimization of ANN fault diagnosis model for rotating machinery using grey rational analysis in Taguchi method. Int. J. Adv. Manuf. Technol. 89(9–12), 3009–3020 (2017)
  23. Padhi, S.K., Sahu, R.K., Mahapatra, S.S., et al.: Optimization of fused deposition modeling process parameters using a fuzzy inference system coupled with Taguchi philosophy. Adv. Manuf. 5(3), 231–242 (2017)
  24. Sahare, S.B., Untawale, S.P., Chaudhari, S.S., et al.: Optimization of end milling process for Al2024-T4 aluminum by combined Taguchi and artificial neural network process. In: Soft Computing: Theories and Applications, pp. 525–535. Springer, Singapore (2018)

Copyright information

© China Society of Automotive Engineers (China SAE) 2018

Authors and Affiliations

  1. Mechanical Engineering Department, LDRP-ITR, Gandhinagar, India
  2. Gandhinagar Institute of Technology, Gandhinagar, India
