Real-Time Identification of Fuzzy PID-Controlled Maglev System using TLBO-Based Functional Link Artificial Neural Network

Abstract

In this paper, a teaching–learning-based optimization (TLBO)-trained functional link artificial neural network (FLANN) is proposed for the real-time identification of the Maglev system. The proposed approach is compared with several state-of-the-art approaches, namely multilayer perceptron–backpropagation, FLANN with least mean squares, FLANN with particle swarm optimization and FLANN with black widow optimization. Further, the real-time Maglev system and the identified model are controlled by a Fuzzy PID controller in a closed loop with a proper choice of the controller parameters. The efficacy of the identified model is investigated by comparing the responses of the real-time and identified Fuzzy PID-controlled Maglev systems. To validate the dominance of the proposed model, three nonparametric statistical tests, i.e., the sign test, the Wilcoxon signed-rank test and the Friedman test, are also performed.

Introduction

In the recent past, many articles have been published on the identification of complex systems, owing to their widespread use in various areas. System identification means estimating the parameters of a plant or matching the output responses of a model with those of the physical system. It aims to develop a deep understanding of cause–effect relationships [1,2,3,4]. The nature of a system is characterized by different properties, such as its electrical, physical and chemical characteristics. However, it is very difficult to understand and model such characteristics of a plant. Thus, identification is a big challenge in several fields like control engineering [5, 6], power system engineering [7], renewable energy [8], etc. Accurate and quick identification is a difficult task for real-world plants, mainly because of their nonlinear and dynamic nature. Many researchers have applied various forms of the artificial neural network (ANN), such as the multilayer perceptron (MLP) [9], the functional link artificial neural network (FLANN) [10, 11] and the radial basis function (RBF) network [12, 13], for identification purposes. Using MLP networks, Narendra and Parthasarathy reported various identification techniques for low-complexity dynamic systems [14]. However, the MLP network has multiple layers, which makes it computationally expensive for the identification of any complex system. The FLANN model, introduced by Pao [15], is a single-layer neural network without any hidden layer. The FLANN input is functionally expanded with different expansion techniques, such as power series, trigonometric and Chebyshev expansions. This model has lower computational complexity and a fast rate of convergence. The FLANN has been used for pattern classification [16], prediction [17] and many other challenging tasks with faster convergence and lower complexity compared with the MLP.

In the training phase of an ANN, all the weights are updated iteratively until they reach their optimal values. The weight-update methods of a neural network can be either derivative-based or derivative-free. Examples of derivative/gradient-based methods are least mean squares (LMS) [6], backpropagation (BP) [2] and recursive least squares (RLS) [18]. The second category comprises bio-inspired, evolutionary computing and other computational intelligence-based approaches. In many applications, the gradient-based approach yields inferior solutions owing to its inherent limitations, such as trapping at local optima and the inability to compute derivatives of discontinuous functions.

To eliminate the above shortcomings, derivative-free algorithms, such as the genetic algorithm (GA) [19], particle swarm optimization (PSO) [20,21,22] and black widow optimization (BWO) [23], have been applied by different researchers to train models. Kumar et al. [24] introduced the metaheuristic socio-evolution and learning optimization algorithm (SELO), inspired by the social learning behavior of humans. The performance of the SELO was evaluated on 50 benchmark problems and compared with other competitive algorithms, and the results show that the SELO performs better than the others. Gholizadeh and Milany [25] introduced the improved fireworks algorithm (IFWA) for discrete structural optimization problems of steel trusses and frames; their results demonstrate that the IFWA is highly competitive with, and superior to, the standard FWA in terms of convergence rate and statistical analysis. Gholizadeh and Ebadijalal [26] proposed the center of mass optimization (CMO) metaheuristic to deal with the performance-based discrete topology optimization (PBDTO) problem. The PBDTO process was implemented for four multi-story steel braced frames by the CMO, and the authors concluded that the CMO-based PBDTO formulation is an efficient technique for seismic discrete topology optimization. Gholizadeh et al. [27] proposed a new and efficient Newton metaheuristic algorithm (NMA) for the optimization of steel moment frames. The NMA is a population-based framework that uses the Newton gradient-based method. The authors investigated the effectiveness of the proposed algorithm on two benchmark discrete truss optimization problems; its performance was analyzed on the basis of parametric and nonparametric statistical tests and found to be superior to that of other competitive algorithms. Hayyolalam and PourhajiKazem [23] proposed the black widow optimization (BWO) algorithm, inspired by the mating behavior of black widow spiders. The efficacy of the BWO algorithm was evaluated on 51 different benchmark functions, and the obtained results confirm that the BWO outperforms the other algorithms. All these optimization techniques may be used to update the weights of a neural network and applied to the identification of any system.

However, selecting the proper controlling parameters of these derivative-free bio-inspired algorithms is still a challenging task because many such parameters are involved. Owing to these controlling parameters, the weight update of a neural network model becomes complex, computationally expensive and time consuming. Hence, there is a need to explore other bio-inspired algorithms with fewer controlling parameters. Rao et al. [28] recently came up with the TLBO technique to circumvent the above shortcomings; it is based on the teaching and learning methodology of a teacher and students in a classroom. They highlighted that TLBO does not depend on any controlling parameters and needs only algorithm-specific parameters, such as the population size, the number of iterations and the stopping criterion. They stressed that TLBO eliminates the intricacy of selecting and tuning optimal controlling parameters, which is usually necessary in other bio-inspired techniques. Naik et al. [29] concluded that the performance of higher-order neural networks is sensitive to weight initialization and depends on the adopted learning algorithm; they implemented TLBO for the training of ANNs and applied it successfully to classification problems. In this manuscript, we implement TLBO to optimize the weights of the FLANN, a variant of the ANN, for the identification of the Maglev plant.

In this paper, MLP-BP, FLANN-LMS, FLANN-PSO, FLANN-TLBO and FLANN-BWO have been implemented for the identification of the Maglev system. A comparative analysis of all these approaches is carried out by considering the mean square error and the computational time. A Fuzzy PID controller is also implemented to control the identified model, and its response is compared with that of the Fuzzy PID-controlled actual Maglev system.

The organization of the paper is as follows: the introduction and recent work on identification are presented in Sect. 1. Section 2 presents and illustrates the construction and principle of the Maglev plant shown in Fig. 1. Related work is discussed in Sect. 3, and Sect. 4 highlights the prerequisites of the research work. Section 5 deals with the proposed TLBO-based FLANN model for identification of the Maglev plant. In Sect. 6, the design of the Fuzzy PID controller is discussed. In Sect. 7, the simulation study, validation and nonparametric statistical tests of the proposed model are presented and compared with the results of the state-of-the-art models. Section 8 summarizes the contribution of the manuscript and outlines the scope of future research work.

Fig. 1 Schematic diagram of the Maglev system

The Magnetic Levitation Plant

The laboratory setup of the Maglev system from Feedback Instruments Ltd., Model No. 33-210, is shown in Fig. 2; such systems have a wide range of applications, like magnetically balanced bearings, vibration damping and transportation systems (popularly known as Maglev trains) [30,31,32,33]. Basically, it works on the Maglev principle and has two parts: (i) the Maglev plant, and (ii) a digital computer where the control action takes place. The Maglev system comprises different integrated components, such as the electromagnet, the ferromagnetic ball, an IR sensor and a current driver circuit. The digital computer provides an effective platform for the design of various controllers, which can be implemented using MATLAB and Simulink for real-time applications. The whole setup integrates mechanical and electrical units with I/O interface systems.

Fig. 2 The Maglev laboratory setup

The Maglev plant parameters are given in Table 1, and its transfer function is as follows [34,35,36]:

$$G_{{\text{p}}} (s) = \frac{{\Delta V_{{\text{o}}} }}{{\Delta V_{{\text{i}}} }} = \frac{ - 3518.85}{{s^{2} - 2180}}$$
(1)
Table 1 The physical parameters of the Maglev system

where \(G_{\text{p}} (s)\) represents the transfer function of the Maglev plant (Feedback Instruments Ltd., Model No. 33-210), \(V_{\text{o}}\) is the output voltage of the sensor and \(V_{\text{i}}\) is the input voltage to the controller. From Eq. (1) and Fig. 3, it is found that the behavior of the Maglev system is highly nonlinear and open-loop unstable. Therefore, obtaining an accurate identified model of the Maglev plant is a challenging task.
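As a quick numerical check of this instability, the poles of Eq. (1) can be computed; the minimal Python sketch below is purely illustrative (it uses NumPy/SciPy rather than the MATLAB environment used in the study):

```python
import numpy as np
from scipy import signal

# Maglev transfer function of Eq. (1): Gp(s) = -3518.85 / (s^2 - 2180)
num = [-3518.85]
den = [1.0, 0.0, -2180.0]
Gp = signal.TransferFunction(num, den)

poles = np.roots(den)
print("Open-loop poles:", poles)                   # approximately +46.7 and -46.7 rad/s
print("Unstable:", bool(np.any(poles.real > 0)))   # True: one pole lies in the right half-plane
```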

Fig. 3 Nonlinear response of Maglev system

Related Work

The artificial neural network (ANN) plays an important role in the identification of nonlinear systems [37, 38]. A neural network (NN) can perform nonlinear mapping between the input and the output, as it has interconnections between its different layers. Neural networks can be classified on the basis of their input, hidden and output layers. From the structural point of view, an ANN may be single-layer or multilayer. In a multilayer perceptron (MLP), there may be one or many hidden layers between the input and output layers [39]. In a single-layer structure, however, no hidden layer is present. Each neuron in one layer is connected to the neurons of the next layer.

The learning of any neural network is a process in which the weights are updated iteratively. These learning processes may be derivative-based or derivative-free. Some of the standard conventional gradient (derivative-based) approaches are the LMS, RLS and BP algorithms, which have been applied by different researchers to train various neural networks and other adaptive models. Similarly, derivative-free/evolutionary/bio-inspired learning algorithms, such as the GA, PSO, ant colony optimization (ACO), cat swarm optimization (CSO) and TLBO, are also used to train different neural network models.

Derivative-based algorithms usually rely on gradient descent search and are derived mathematically using the derivative of the error. Least mean squares (LMS) is a stochastic gradient method, i.e., a simple derivative-based algorithm [40]. It is very popular and widely used because of its simple structure and ease of implementation for error minimization, and it is suitable for updating the weights of single-layer ANN models. The backpropagation (BP) algorithm is a derivative-based algorithm suited to multilayer ANN models [41]. Gradient-based optimization techniques fail when the objective function contains discontinuities, and they may become trapped at local optima when the function has multiple maxima or minima. To overcome these bottlenecks of the traditional derivative-based approaches, different heuristic algorithms have been implemented by researchers. The PSO, which is based on the principle of the movement of a flock of birds that collectively search for food, is a heuristic algorithm with good convergence characteristics even for non-convex and discontinuous functions [20, 42]. This algorithm has a good exploration capability, since all individuals follow the best member of the swarm along with their own best positions, and it provides both local and global search. The teaching–learning-based optimization (TLBO) has no controlling parameters. It undergoes a two-phase search: the teacher phase performs a global search for better exploration, while the learner phase carries out a local search for better exploitation [43,44,45]. Because this algorithm depends only on algorithm-specific parameters and has no controlling parameters, it is expected to have a better convergence characteristic, as discussed in detail in Sect. 5. The black widow optimization is an evolutionary optimization technique that imitates the peculiar mating behavior of black widow spiders [23]. It is one of the latest techniques in the evolutionary optimization family; it delivers fast convergence and avoids the local optima problem. This technique updates the weights in three stages, i.e., procreation, cannibalism (sexual and sibling cannibalism) and mutation.

Prerequisites

In this paper, an MLP and a special variant of the ANN, i.e., the FLANN, are implemented for the identification of the Maglev system. The FLANN is a single-layer NN in which the input data pass through a functional expansion block, so the input is functionally expanded using different expansion techniques. The power series, trigonometric and Chebyshev expansions are some of the most widely used techniques. The Chebyshev functional expansion has been found to be better for many engineering applications, and hence it is used to expand the FLANN inputs for the identification of the Maglev system in this article. The Chebyshev expansion of an input \(x_{l}\) can be written as [41, 46, 47],

$$\begin{aligned} T_{0} (x_{l} ) &= 1 \\ T_{1} (x_{l} ) &= x_{l} \\ T_{2} (x_{l} ) &= 2x_{l}^{2} - 1 \\ T_{l + 1} (x_{l} ) &= 2x_{l} T_{l} (x_{l} ) - T_{l - 1} (x_{l} ), \quad l \ge 2 \end{aligned}$$
(2)

Higher-order polynomials are generated by the same recurrence. The output of the functional expansion block is multiplied by a set of weights. The basic structure of the FLANN model, trained by an adaptive algorithm, is depicted in Fig. 4.

Fig. 4 Structure of the FLANN Model
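To illustrate how the expanded inputs of Eq. (2) feed the single-layer structure of Fig. 4, a minimal Python sketch is given below. The decomposition of the 90 weights reported later in the simulation study (a window of 10 inputs with 9 Chebyshev terms per input) and the tanh output activation are assumptions consistent with the parameters listed there; the function names are illustrative, not the authors' code.

```python
import numpy as np

def chebyshev_expand(x, order):
    """Expand a scalar input x into [T0(x), T1(x), ..., T_{order-1}(x)] using Eq. (2)."""
    T = [1.0, x]
    for l in range(2, order):
        T.append(2.0 * x * T[l - 1] - T[l - 2])   # recurrence T_l(x) = 2x T_{l-1}(x) - T_{l-2}(x)
    return np.array(T[:order])

def flann_output(x_window, weights, order=9):
    """Single-layer FLANN: expand each input of the window, weight the terms and sum."""
    phi = np.concatenate([chebyshev_expand(x, order) for x in x_window])
    return np.tanh(phi @ weights)                 # tanh activation, as in the simulation study

# Example: a window of 10 input samples and 10 x 9 = 90 weights
x_window = np.random.uniform(-1, 1, 10)
weights = np.random.uniform(0, 1, 90)
y_hat = flann_output(x_window, weights)
```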

The low computational complexity of the FLANN model, owing to its simple single-layer structure and simple learning algorithms, makes it computationally cheap and time efficient [48, 49]. The FLANN model retains the advantages of a single-layer perceptron (SLP) network and an MLP network while evading their shortcomings. When the adaptive algorithm is the PSO, the model is named the FLANN-PSO model. A set of input signals is given to the FLANN-PSO model, and the input of the FLANN is functionally expanded nonlinearly using the Chebyshev functional expansion technique. Several sets of weights are initialized between 0 and 1 and multiplied with the expanded input signals, and all the weights are updated using the PSO algorithm. The output produced by each set of weights is compared with the corresponding desired (output) signal, and each set of weights is treated as one particle. Hence, each set produces one error signal.

The set of weights whose error is the minimum is considered the global best particle. The other particles, i.e., the other sets of weights, are local particles, and they update their velocities and positions according to Eqs. (3) and (4):

$$V_{i} (d + 1) = wV_{i} (d) + c_{1} \, r_{1} \, (P_{i} (d) - X_{i} (d)) + c_{2} \, r_{2} \, (P_{g} (d) - X_{i} (d))$$
(3)
$$X_{i} (d + 1) = X_{i} (d) + V_{i} (d + 1)$$
(4)

where \(V_{i} (d)\) and \(X_{i} (d)\) represent the velocity and position of the \(i\)th particle, respectively, and \(r_{1}\) and \(r_{2}\) are random numbers drawn uniformly from [0, 1]. \(P_{g} (d)\) and \(P_{i} (d)\) are the positions of the global best (g-best) and the personal best (p-best), respectively, \(w\) is the inertia weight, and \(c_{1}\) and \(c_{2}\) are constants whose values determine the effect of the cognitive and social components, respectively.

Once all the particles (sets of weights) are updated, the error (i.e., the objective function) is calculated again using the new sets of weights. The weights giving the minimum error are saved so that this error can be compared with the previous minimum error; if the current error is smaller than the previous one, the current weights are retained, otherwise the previous ones are kept. This process is repeated iteratively a predefined number of times. After a certain number of epochs, the change in error saturates and the program is terminated. Finally, the optimum weights are reported. The FLANN network with these optimum weights is called the trained network and is ready for evaluation on the test data.
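The PSO-driven weight search described above can be summarized in the following Python sketch. The population size, iteration count, inertia weight and acceleration coefficients are the values reported in the simulation study; everything else (function names, data shapes, the tanh output) is an illustrative assumption.

```python
import numpy as np

def train_flann_pso(phi, d, n_particles=45, n_iter=20, w=0.9, c1=2.0, c2=2.0):
    """Optimize FLANN weights with PSO. phi: expanded inputs (N x Q), d: desired outputs (N,)."""
    Q = phi.shape[1]
    X = np.random.uniform(0, 1, (n_particles, Q))            # particle positions = weight sets
    V = np.zeros_like(X)
    mse = lambda wts: np.mean((d - np.tanh(phi @ wts)) ** 2)  # cost: MSE of the FLANN output
    pbest, pbest_err = X.copy(), np.array([mse(x) for x in X])
    gbest = pbest[np.argmin(pbest_err)].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # velocity update, Eq. (3)
        X = X + V                                                    # position update, Eq. (4)
        err = np.array([mse(x) for x in X])
        improved = err < pbest_err                                   # keep only improved particles
        pbest[improved], pbest_err[improved] = X[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)].copy()
    return gbest, pbest_err.min()
```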

Proposed TLBO-Based FLANN Model

This article employs the metaheuristic TLBO technique, based on the teaching and learning methodology, to update the weights of the FLANN [29]. The TLBO algorithm simulates a classroom-like environment in which the students form the population and their levels of knowledge are the candidate solutions of the problem; the knowledge is therefore measured by the objective function. The students in a classroom learn mainly through two processes: one through the teacher and the other by interacting among themselves. Thus, TLBO has two phases, (a) the teacher phase and (b) the learner phase. In the 'teacher phase,' the learner group learns from the teacher, and in the 'learner phase,' the learners learn by having discussions with one another. The most knowledgeable person in the classroom is considered the teacher, who shares his or her knowledge with the learners, and at every iteration the best learner is taken as the teacher. The design variables of the optimization problem are analogous to the different subjects offered to the students (learners), and the result (grade) of each learner is equivalent to the fitness of the problem. The teacher tries to enhance the knowledge of all the learners in accordance with his or her capability, and the transfer of knowledge also depends on the capability of the students (learners).

A set of input signals with window size 'u,' i.e., \(\left\{ {x_{1}, x_{2}, x_{3}, \ldots, x_{u} } \right\},\) is given to the proposed FLANN-TLBO model, and the input of the FLANN is functionally expanded nonlinearly using the Chebyshev functional expansion technique. Simultaneously, random sets of weights (equal in size to the number of expanded inputs of the FLANN) are initialized between 0 and 1. Each set is multiplied by the expanded input signals, and the output of the FLANN is then compared with the desired signal. Hence, a set of error signals \(\left\{ {e_{1}, e_{2}, e_{3}, \ldots, e_{u} } \right\}\) results.

The Maglev plant input can be expanded using the Chebyshev expansion in the following mathematical form [46],

$$T_{i} (n) = \left[ {\begin{array}{*{20}c} {T_{1} } \\ {T_{2} } \\ {T_{3} } \\ \vdots \\ {T_{k} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 \\ {x_{1} } \\ {2x_{2}^{2} - 1} \\ \vdots \\ {2x_{k} T_{k - 1} (x_{k} ) - T_{k - 2} (x_{k} )} \\ \end{array} } \right],\quad {\text{for}}\;k > 2$$
(5)
$$\hat{y}_{m + u} = \sum\limits_{i = 1}^{Q - 1} {T_{i} (n)\,w_{i} (n)}$$
(6)

Here, \(x_{m}\) is the input, \(\hat{y}_{m + u}\) is the output of the FLANN model, \(T_{i} (n)\) is the expanded input using the Chebyshev expansion and \(w_{i} (n)\) is the weight vector having Q elements. Equation (6) gives the output of the proposed model shown in Fig. 5. The set of weights of the FLANN model is optimized by the TLBO algorithm to achieve the desired response, and the error is

$$e_{m + u} = y_{m + u} - \hat{y}_{m + u}$$
(7)

Here, m = 1, 2, …, n − u indexes the training samples, the input window size of the FLANN is u = 10 and n is the number of expanded inputs.

Teaching phase

A teacher tries to enhance the performance of all the students in the class. Consider a class of n students (population size) and m subjects (design variables); the mean result of the students in subject j at the \(i\)th iteration is denoted by \(M_{ji}\). Let kbest denote the learner having the overall best result across all subjects in the whole population; the teacher is this most knowledgeable person, i.e., the one with the best fitness value in the class. To improve the results of the students, a correction term derived from the difference between the teacher's result and the scaled mean result in subject j is defined as

$$DM_{j,k,i} = rd \times (W_{j,\text{kbest},i} - T_{\rm f} \times M_{ji} )$$
(8)

where DM corresponds to the difference of means, \(W_{j,\text{kbest},i}\) is the result of the teacher in the \(j\)th subject and \(rd\) is a random number between 0 and 1. \(T_{\rm f}\) is called the teaching factor, whose value is either 1 or 2; it is chosen randomly as

$$T_{\rm f} = round[1 + rd(0,1)*(2 - 1)]$$
(9)

The solutions are updated as

$$W_{j,k,i}^{new} = W_{j,k,i} + DM_{j,k,i}$$
(10)

where \(W_{j,k,i}^{new}\) is the updated result of the \(k\)th student in the \(j\)th subject at the \(i\)th iteration and \(W_{j,k,i}\) is the existing result. The updated result is accepted only if it satisfies the boundary conditions; otherwise, it is replaced by the limiting boundary value. It must also have a better fitness than the existing value; otherwise, the existing value is retained. The accepted values act as the input to the learner phase (Fig. 5).

Fig. 5 Proposed FLANN model for identification of Maglev system

Learner phase

An individual learner enhances his or her own knowledge by interacting with classmates, apart from learning from the teacher. By convention, a learner learns from another learner only if the other learner has more knowledge. In this phase, two learners p and q are selected randomly such that \(x_{\rm totalpi}^{^{\prime}} \ne x_{\rm totalqi}^{^{\prime}}\), i.e., their total results as updated in the teacher phase do not match. Their results are then updated as:

$$x_{\rm jpi}^{^{\prime\prime}} = x_{\rm jpi}^{^{\prime}} + r_{\rm i} (x_{\rm jpi}^{^{\prime}} - x_{jqi}^{^{\prime}} );\quad if\;x_{\rm totalpi}^{^{\prime}} < x_{\rm totalqi}^{^{\prime}}$$
(11)
$$x_{\rm jpi}^{^{\prime\prime}} = x_{\rm jpi}^{^{\prime}} + r_{\rm i} (x_{jqi}^{^{\prime}} - x_{\rm jpi}^{^{\prime}} );\; if \;x_{\rm totalpi}^{^{\prime}} > x_{\rm totalqi}^{^{\prime}}$$
(12)

where \(x_{\rm jpi}^{^{\prime\prime}}\) is accepted if its fitness value is better than that of \(x_{\rm jpi}^{^{\prime}}\); further, \(x_{\rm jpi}^{^{\prime\prime}}\) should satisfy the boundary conditions, and if it does not, it is replaced by \(x_{\rm jpi}^{^{\prime}}\).

The first set of updated weights and error values is stored for subsequent assessment. The TLBO is then applied to update the next set of weights, which is compared with the previous set. The best set of weights, i.e., the set with the minimum error, is considered the teacher, and the other sets are learners. The parameters of the proposed model undergo the teaching and learning phases of the TLBO to update the weights of the FLANN network. This process is repeated until the error falls below a threshold value; a sketch of these two phases is given below. The flowchart in Fig. 6 describes the detailed process of the TLBO-based FLANN model.
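A compact Python sketch of these two phases, applied to the FLANN weight sets, is given below. The population size and iteration count are the values reported in the simulation study; the weight bounds, helper names and per-row random factors are simplifying assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def train_flann_tlbo(phi, d, pop_size=45, n_iter=30, lo=0.0, hi=1.0):
    """Optimize FLANN weights with TLBO. phi: expanded inputs (N x Q), d: desired outputs (N,)."""
    Q = phi.shape[1]
    W = np.random.uniform(lo, hi, (pop_size, Q))             # each row = one learner (weight set)
    mse = lambda wts: np.mean((d - np.tanh(phi @ wts)) ** 2)
    fit = np.array([mse(w) for w in W])
    for _ in range(n_iter):
        # Teacher phase: the best learner teaches toward the class mean, Eqs. (8)-(10)
        teacher = W[np.argmin(fit)]
        Tf = np.round(1 + np.random.rand())                   # teaching factor in {1, 2}, Eq. (9)
        mean = W.mean(axis=0)
        W_new = np.clip(W + np.random.rand(pop_size, 1) * (teacher - Tf * mean), lo, hi)
        fit_new = np.array([mse(w) for w in W_new])
        better = fit_new < fit
        W[better], fit[better] = W_new[better], fit_new[better]
        # Learner phase: each learner interacts with a random partner, Eqs. (11)-(12)
        for k in range(pop_size):
            q = np.random.choice([i for i in range(pop_size) if i != k])
            step = (W[k] - W[q]) if fit[k] < fit[q] else (W[q] - W[k])
            cand = np.clip(W[k] + np.random.rand(Q) * step, lo, hi)
            f = mse(cand)
            if f < fit[k]:                                    # accept only if fitness improves
                W[k], fit[k] = cand, f
    return W[np.argmin(fit)], fit.min()
```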

Fig. 6 Flowchart of FLANN-TLBO network

Design of the Fuzzy PID Controller

The universally accepted PID controller is an important tool for industrial control and automation owing to its reliability and adaptability [50]. It can handle many of the shortcomings of other controllers and is found to be suitable for many industrial requirements. However, the high nonlinearity and uncertainty present in different systems degrade the performance of the PID controller. To avoid these bottlenecks and enhance its capability, researchers have combined fuzzy techniques with the PID controller [51].

The control law associated with PID is as follows:

$$u(t) = k_{\text{p}} e(t) + k_{\text{i}} \int {e(t)\text{d}t} + k_{\text{d}} \frac{{\text{d}e(t)}}{{\text{d}t}}$$
(13)

where \(k_{\text{p}}\) is the proportional gain, \(k_{\text{i}}\) is the integral gain, \(k_{\text{d}}\) is the derivative gain, e(t) is the error signal and u(t) is the control input.
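For reference, the control law of Eq. (13), onto which the fuzzy scaling described below is layered, can be realized in discrete time as sketched here; the sample time is a placeholder, and the gains shown are the tuned values reported later in the validation study.

```python
class PID:
    """Discrete-time PID controller implementing Eq. (13) with a fixed sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                     # integral of e(t)
        derivative = (error - self.prev_error) / self.dt     # derivative of e(t)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example with the tuned gains reported in the validation study (kp, ki, kd = -4, -2, -0.2)
controller = PID(kp=-4.0, ki=-2.0, kd=-0.2, dt=0.001)   # dt is a placeholder sample time
u = controller.update(error=0.05)
```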

The intervening system is fuzzified with two inputs, i.e., the system error (e) and the derivative of the error \((\dot{e})\), scaled by the coefficients \(K_{{\text{in1}}}\) and \(K_{{\text{in2}}}\) as shown in Fig. 7. These two scaled values are mapped to the range between − 1 and 1, which allows the membership functions to be assigned in a definite manner using the rule base of Table 2 and the linguistic variables of Table 3.

Fig. 7 Model of Fuzzy PID controller

For each input, five membership functions are chosen and assigned. For the output, however, nine triangular membership functions are defined from − 1 to 1, as shown in Figs. 8 and 9, and the crisp output is obtained through the coefficient \(K_{{\text{out}}}\).

Table 2 Basic rule table for FIS
Table 3 Linguistic variables of FIS

This Fuzzy PID controller is utilized to validate the identified model. The controller is applied both to the identified model and to the real-time Maglev plant, and the responses of the identified model and the actual plant are compared to investigate the performance of the proposed approach.

Simulation Study

The algorithms were executed on an Acer Aspire V system with Windows 10, an Intel® Core™ i5-3337U CPU @ 1.80 GHz, 8 GB of RAM and the MATLAB environment. Five different neural network models, i.e., the MLP trained by BP and FLANN networks trained by the LMS, PSO, TLBO and BWO algorithms, have been implemented for the comparative analysis.

Performance Analysis

All the available functional expansions were implemented, and the Chebyshev functional expansion was found to be the most effective in our application. Hence, the Chebyshev expansion is used in all four FLANN models for a fair comparison. The error signal, i.e., the difference between the desired signal and the output of the FLANN network, is taken as the cost function. The following parameters have been considered for the identification of the Maglev system using the different algorithms.

In FLANN-LMS: Learning rate (\(\mu\)): 0.6, No. of iterations (Ni): 10, No. of weights: 90 and Activation function: tanh.

In FLANN-PSO: Learning rate (\(\mu\)): 0.6, No. of iterations (Ni): 20, No. of features (Fe): 20, Cognitive and social parameters: \(c_{1} = c_{2} = 2\), Population size (Ps): 45, Inertia weight: 0.9, No. of weights: 90 and Activation function: tanh.

In FLANN-TLBO: Population size (Ps): 45, No. of iterations (Ni): 30, No. of features (Fe): 20, No. of weights: 90 and Activation function: tanh.

In FLANN-BWO: Population size (Ps): 45, No. of iterations (Ni): 100, No. of features (Fe): 20, Procreating rate (PP): 0.6, Cannibalism rate (CR): 0.675, Mutation rate (PM): 0.4, No. of weights: 90 and Activation function: tanh.

In MLP-BP: Learning rate (\(\mu\)): 0.6, No. of iterations (Ni): 20, No. of layers: 3, Nodes: 5-3-1, No. of weights: 90 and Activation function: tanh.

To study the effectiveness of the proposed model, 5000 samples are taken. As shown in Fig. 11, the FLANN-BWO model attains an average MSE of 2.28E−07 after 100 iterations with a corresponding CPU time of 382.422 s. The FLANN-TLBO model attains an average MSE of 2.7498E−08 after 30 iterations with a CPU time of 462.02 s, as displayed in Fig. 13. The FLANN-PSO and MLP-BP models attain average MSEs of 1.3945E−08 and 1.1470E−07, respectively, after 20 iterations each, with corresponding CPU times of 782.43 s and 8.96 s, as presented in Figs. 15 and 19. The gradient-based FLANN-LMS model, shown in Fig. 17, attains an average MSE of 2.47E−07 after 10 iterations with a CPU time of 4.15 s, the lowest among all models. Using the proposed model with different bio-inspired algorithms, the MSE is reduced from 1.1470E−07 to 2.7498E−08, as listed in Table 4. After training of the proposed model, the best set of 90 weights, which represents the identified model of the Maglev system, is listed in Table 6. The fitting and MSE curves of all the models are shown in Figs. 10, 11, 12, 13, 14, 15, 16, 17, 18 and 19. The comparative MSE results and the MSE plots for the various test runs are provided in Table 5 and Fig. 20.

Fig. 8 Membership function of input variable

Fig. 9 Membership function for output variable

Fig. 10 Identified model response with FLANN-BWO

Fig. 11 MSE plot of FLANN-BWO

Fig. 12 Identified model response with FLANN-TLBO

Fig. 13 MSE plot of FLANN-TLBO

Fig. 14 Identified model response with FLANN-PSO

Fig. 15 MSE plot of FLANN-PSO

Table 4 Comparative results of identified model of Maglev system
Table 5 Comparative results of MSE of various optimization techniques for 20 independent test runs
Table 6 The best set of weights from the FLANN-TLBO model (W1–W90)

Here, Ni is the number of iterations, Ps is the population size, Fe is the number of features and Nt is the number of training inputs. In terms of Big-O notation, the FLANN-LMS and MLP-BP have lower time complexity than the other three algorithms, as shown in Table 4. To investigate the performance objectively, the mean square error (MSE) is considered as the performance metric. The average MSE values of all five models over 20 independent test runs are shown in Table 4, and the MSE values of all the models for each test run are shown as a histogram in Fig. 20. It is clear from Table 4 that the MSE of the FLANN-TLBO algorithm is the lowest, which signifies its superior performance over the other four competitive networks.

Figure 21 shows that the predicted values of the FLANN-LMS network do not match the actual output and a very large gap exists; hence, its performance is highly unsatisfactory. This is a consequence of the high nonlinearity present in the Maglev system data. The response of the FLANN-TLBO is found to be the closest match compared with the other four networks.

Fig. 16 Identified model response with FLANN-LMS

Fig. 17 MSE plot of FLANN-LMS

Figures 21 and 22 demonstrate that the response of the FLANN-TLBO model replicates the response of the real-time Maglev system, and hence it is the best among all the competitive models. The performance of an algorithm also depends on the number of controlling parameters and the number of steps involved in the weight update, because these two factors increase the computation time and the computational complexity. From Table 4, it is observed that the FLANN-LMS and FLANN-PSO take 4.15 s and 782.43 s of CPU time, respectively, which are the lowest and the highest values. The recently developed BWO-based FLANN network requires 382.422 s. The LMS algorithm has a one-step weight update with one controlling parameter, the PSO has a one-step weight update with three controlling parameters and the MLP has a one-step weight update with three controlling parameters. The recently developed BWO algorithm has a two-step weight update with three controlling parameters, for which it takes more time and has higher computational complexity. The FLANN-TLBO model involves a two-step update process during the teaching and learning phases, and hence it takes a longer time of 462.02 s.

Fig. 18 Identified model response with MLP-BP

Validation of Identified Model

The best identified model, the FLANN-TLBO, is chosen for validation. The identified Maglev system, represented by the optimal set of 90 weights, is given in Table 6. This model is controlled and validated using the Fuzzy PID controller with a proper choice of the controller parameters. The Fuzzy PID controller is used to control both the actual Maglev system and the identified model so that their responses can be compared, as shown in Fig. 23.

The range of the membership functions of the Fuzzy PID controller is defined from − 1 to 1, as shown in Figs. 8 and 9. The best responses are obtained after proper tuning of the Fuzzy PID controller, with \(k_{\text{p}}\), \(k_{\text{i}}\) and \(k_{\text{d}}\) set to − 4, − 2 and − 0.2, respectively. From Fig. 24, it is observed that the Fuzzy PID-controlled identified model and the actual Maglev system exhibit the same response.

Fig. 19 MSE plot of MLP-BP

Fig. 20 Comparative plot of MSE in various test runs

Fig. 21 Comparative identified model response

Fig. 22 Comparative error plot

Fig. 23 Control of actual Maglev system and identified model using Fuzzy PID controller

Fig. 24 Comparative results of Maglev system and identified model with a Fuzzy PID controller

Nonparametric Statistical Tests

To validate the dominance of the FLANN-TLBO network, the pairwise sign test and the Wilcoxon signed-rank test are carried out. These are two well-known nonparametric statistical tests for the pairwise comparison of two heuristic approaches. Here, the tests are carried out over 20 runs of each algorithm to ensure a fair comparison. The results are listed in Table 8, with the average MSE taken as the winning parameter. The minimum number of wins required to reach the \(\alpha = 0.05\) and \(\alpha = 0.01\) levels of significance for one algorithm over another is shown in Table 7. It is observed from Table 5 that the FLANN-TLBO model shows dominance over all the other models at a significance level of \(\alpha = 0.05\).

Table 7 Minimum wins needed for the two-tailed sign test at \(\alpha = 0.05\) and \(\alpha = 0.01\)

It is observed from the performance measures in Table 8 that the TLBO shows a significant improvement over the BWO, PSO, LMS and MLP-BP algorithms at a level of significance of \(\alpha = 0.05\), taking the MSE metric as the winning parameter; the p values and h values for the sign test are listed in Table 9. The p value and h value indicate the superiority of one algorithm over the other competitive algorithms: if the p value is less than the significance level \(\alpha = 0.05\) and the h value is 1, the proposed algorithm is superior and the null hypothesis can be rejected, whereas if the p value is greater than \(\alpha = 0.05\) and the h value is 0, the proposed algorithm cannot be considered better. Similarly, the Wilcoxon signed-rank test, which is the nonparametric counterpart of the paired t test and is normally applied to detect dominance between two algorithms, is also performed. The performance comparison of all the algorithms is listed in Table 10. The results presented in Tables 9 and 10 reveal the superiority of the TLBO over the other competitive algorithms.

Table 8 Critical values obtained for the two-tailed sign test at \(\alpha = 0.05\) and \(\alpha = 0.01\) using the MSE metric as the winning parameter
Table 9 Sign test using the MSE metric as the winning parameter
Table 10 Wilcoxon signed-rank test using the MSE metric as the winning parameter

To study the supremacy and repeatability of the obtained responses, a nonparametric Friedman test is also performed using MATLAB. Table 11 shows the average rank of the different networks used for identification; a lower rank signifies higher accuracy and better performance. The Friedman test parameters are given in Table 12, and the p value obtained from the Friedman test is 1.6214E−11. Since this value is less than the significance level \(\alpha = 0.05\), the null hypothesis can be rejected. Hence, the dominance of the proposed algorithm over the other competitive algorithms is confirmed by the sign test, the Wilcoxon signed-rank test and the Friedman test, and the obtained results reveal the supremacy of the TLBO over the others.
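Assuming the per-run MSE values of Table 5 are available as arrays, the three tests can be reproduced with SciPy as sketched below; the data generated here are placeholders, not the values reported in the paper.

```python
import numpy as np
from scipy import stats

# Per-run MSE of each model over 20 independent runs (placeholder data, not the paper's values)
mse_tlbo = np.random.uniform(1e-8, 5e-8, 20)
mse_bwo  = np.random.uniform(1e-7, 5e-7, 20)
mse_pso  = np.random.uniform(1e-8, 5e-7, 20)
mse_lms  = np.random.uniform(1e-7, 9e-7, 20)
mse_mlp  = np.random.uniform(1e-7, 9e-7, 20)

# Pairwise sign test: count wins of TLBO over BWO and test them against a fair coin
wins = int(np.sum(mse_tlbo < mse_bwo))
p_sign = stats.binomtest(wins, n=20, p=0.5, alternative='greater').pvalue

# Pairwise Wilcoxon signed-rank test (TLBO vs. BWO)
stat_w, p_wilcoxon = stats.wilcoxon(mse_tlbo, mse_bwo)

# Friedman test across all five models
stat_f, p_friedman = stats.friedmanchisquare(mse_tlbo, mse_bwo, mse_pso, mse_lms, mse_mlp)

print(p_sign, p_wilcoxon, p_friedman)   # reject the null hypothesis when p < 0.05
```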

Table 11 Friedman test rank table
Table 12 Friedman test parameter

Conclusion

This article proposed a FLANN-TLBO model that yields improved identification of the Maglev plant. The performance of the TLBO-based FLANN model is compared with that of other ANN-based models, i.e., MLP-BP, FLANN-LMS, FLANN-PSO and FLANN-BWO. The estimated models are compared in terms of MSE, CPU time and the ability to match the response of the Maglev system. From the simulations, it is perceived that the proposed FLANN-TLBO model provides a superior identified model of the actual Maglev system. The validation of the proposed FLANN-TLBO model is carried out by comparing its performance with that of the actual Maglev system under identical conditions, and the results demonstrate improved response matching between the identified model and the actual system. Moreover, the statistical tests validate the dominance of the FLANN-TLBO network over the others at a significance level of \(\alpha = 0.05\). In future work, other variants of neural networks and nature-inspired algorithms can be applied to obtain better models of complex systems.

References

1. Padoan, A.; Astolfi, A.: Nonlinear system identification for autonomous systems via functional equations methods. In: American Control Conference, pp. 1814–1819 (2016)
2. Subudhi, B.; Jena, D.: Nonlinear system identification of a twin rotor MIMO system. In: IEEE TENCON, pp. 1–6 (2009)
3. Weng, B.; Barner, K.E.: Nonlinear system identification in impulsive environments. IEEE Trans. Signal Process. 53, 2588–2594 (2005)
4. Forrai, A.: System identification and fault diagnosis of an electromagnetic actuator. IEEE Trans. Control Syst. Technol. 25, 1028–1035 (2017). https://doi.org/10.1109/TCST.2016.2582147
5. Mondal, A.; Sarkar, P.: A unified approach for identification and control of electro-magnetic levitation system in delta domain. In: International Conference on Control, Instrumentation, Energy and Communication, pp. 314–318 (2016)
6. Srivatava, S.; Gupta, M.: A novel technique for identification and control of a non linear system. In: International Conference on Computational Intelligence and Networks, pp. 172–176 (2016). https://doi.org/10.1109/CINE.2016.37
7. Wen, S.; Wang, Y.; Tang, Y.; Xu, Y.; Li, P.; Zhao, T.: Real-time identification of power fluctuations based on LSTM recurrent neural network: a case study on Singapore power system. IEEE Trans. Ind. Inf. 15, 5266–5275 (2019). https://doi.org/10.1109/tii.2019.2910416
8. Alqahtani, A.; Marafi, S.; Musallam, B.; El, N.; Abd, D.; Khalek, E.: Photovoltaic power forecasting model based on nonlinear system identification. 39, 243–250 (2016)
9. Nanda, S.J.; Panda, G.; Majhi, B.: Improved identification of nonlinear dynamic systems using artificial immune system. In: IEEE Conference and Exhibition on Control, Communications and Automation, pp. 268–273 (2008). https://doi.org/10.1109/INDCON.2008.4768838
10. Patra, J.; Pal, R.; Chatterji, B.N.; Panda, G.: Identification of nonlinear dynamic systems using functional link artificial neural networks. IEEE Trans. Syst. Man Cybern. B 29, 254–262 (1999)
11. Majhi, B.; Panda, G.: Robust identification of nonlinear complex systems using low complexity ANN and particle swarm optimization technique. Expert Syst. Appl. 38, 321–333 (2011). https://doi.org/10.1016/j.eswa.2010.06.070
12. Han, M.: Robust structure selection of radial basis function networks for nonlinear system identification (2014)
13. Chen, W.: Nonlinear system identification based on radial basis function neural network using improved particle swarm optimization, pp. 409–413 (2009). https://doi.org/10.1109/ICNC.2009.233
14. Narendra, K.S.; Parthasarathy, K.: Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1, 4–26 (1990)
15. Pao, Y.H.: Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading (1989)
16. Mallikarjuna, B.; Viswanathan, R.; Naib, B.B.: Feedback-based gait identification using deep neural network classification. J. Crit. Rev. 7, 661–667 (2020). https://doi.org/10.31838/jcr.07.04.125
17. Vora, D.R.; Rajamani, K.: A hybrid classification model for prediction of academic performance of students: a big data application. Evol. Intell. (2019). https://doi.org/10.1007/s12065-019-00303-9
18. Guo, Y.; Wang, F.; Lo, J.T.H.: Nonlinear system identification based on recurrent neural networks with shared and specialized memories. In: Asian Control Conference, pp. 2054–2059 (2018). https://doi.org/10.1109/ASCC.2017.8287491
19. Wang, Z.; Gu, H.: Nonlinear system identification based on genetic algorithm and grey function. In: IEEE International Conference on Automation and Logistics, pp. 1741–1744 (2007)
20. Guoqiang, Y.; Weiguang, L.; Hao, W.: Study of RBF neural network based on PSO algorithm in nonlinear system (2015). https://doi.org/10.1109/ICICTA.2015.217
21. Kang, D.; Lee, B.; Won, S.: Nonlinear system identification using ARX and SVM with advanced PSO. In: IEEE Industrial Electronics Society, pp. 598–603 (2007)
22. Panda, G.; Mohanty, D.; Majhi, B.; Sahoo, G.: Identification of nonlinear systems using particle swarm optimization technique. In: IEEE Congress on Evolutionary Computation, pp. 3253–3257 (2007). https://doi.org/10.1109/CEC.2007.4424889
23. Hayyolalam, V.; PourhajiKazem, A.A.: Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 87, 103249 (2020). https://doi.org/10.1016/j.engappai.2019.103249
24. Kumar, M.; Kulkarni, A.J.; Satapathy, S.C.: Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology. Future Gener. Comput. Syst. 81, 252–272 (2018). https://doi.org/10.1016/j.future.2017.10.052
25. Gholizadeh, S.; Milany, A.: An improved fireworks algorithm for discrete sizing optimization of steel skeletal structures. Eng. Optim. 50, 1829–1849 (2018). https://doi.org/10.1080/0305215X.2017.1417402
26. Gholizadeh, S.; Ebadijalal, M.: Performance based discrete topology optimization of steel braced frames by a new metaheuristic. Adv. Eng. Softw. 123, 77–92 (2018). https://doi.org/10.1016/j.advengsoft.2018.06.002
27. Gholizadeh, S.; Danesh, M.; Gheyratmand, C.: A new Newton metaheuristic algorithm for discrete performance-based design optimization of steel moment frames. Comput. Struct. 234, 106250 (2020). https://doi.org/10.1016/j.compstruc.2020.106250
28. Rao, R.V.; Savsani, V.J.; Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43, 303–315 (2011). https://doi.org/10.1016/j.cad.2010.12.015
29. Naik, B.; Nayak, J.; Behera, H.S.: A TLBO based gradient descent learning-functional link higher order ANN: an efficient model for learning from non-linear data. J. King Saud Univ. Comput. Inf. Sci. 30, 120–139 (2018). https://doi.org/10.1016/j.jksuci.2016.01.001
30. Naumovi, M.B.; Veseli, B.R.: Magnetic levitation system in control engineering education. Autom. Control Robot. 7, 151–160 (2008)
31. Morales, R.; Feliu, V.; Sira-Ramírez, H.: Nonlinear control for magnetic levitation systems based on fast online algebraic identification of the input gain. IEEE Trans. Control Syst. Technol. 19, 757–771 (2011)
32. Balko, P.; Rosinova, D.: Modeling of magnetic levitation system. In: International Conference on Process Control, pp. 252–257 (2017)
33. Liceaga-Castro, J.; Hernandez-Alcantara, D.; Amezquita-Brooks, L.: Nonlinear control of a magnetic levitation system. In: Electronics, Robotics and Automotive Mechanics Conference, pp. 391–396 (2009). https://doi.org/10.1109/CERMA.2009.10
34. Magnetic Levitation: Control Experiments. Feedback Instruments Limited (2011)
35. Ghosh, A.; Krishnan, T.R.; Tejaswy, P.; Mandal, A.; Pradhan, J.K.; Ranasingh, S.: Design and implementation of a 2-DOF PID compensation for magnetic levitation systems. ISA Trans. 53, 1216–1222 (2014)
36. Swain, S.K.; Sain, D.; Kumar, S.; Ghosh, S.: Real time implementation of fractional order PID controllers for a magnetic levitation plant. Int. J. Electron. Commun. 78, 141–156 (2017)
37. Yaghini, M.; Khoshraftar, M.M.; Fallahi, M.: A hybrid algorithm for artificial neural network training. Eng. Appl. Artif. Intell. 26(1), 293–301 (2013)
38. Patra, J.C.; Kot, A.C.: Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks. IEEE Trans. Syst. Man Cybern. B 32, 505–511 (2002). https://doi.org/10.1109/TSMCB.2002.1018769
39. Subudhi, B.; Jena, D.: Nonlinear system identification using memetic differential evolution trained neural networks. Neurocomputing 74, 1696–1709 (2011). https://doi.org/10.1016/j.neucom.2011.02.006
40. Katari, V.; Malireddi, S.; Satya, S.K.; Panda, G.: Adaptive nonlinear system identification using comprehensive learning PSO. In: International Symposium on Communications, Control and Signal Processing, pp. 434–439 (2008). https://doi.org/10.1109/ISCCSP.2008.4537265
41. Juang, J.-G.; Lin, B.-S.: Nonlinear system identification by evolutionary computation and recursive estimation method. In: American Control Conference, pp. 5073–5078 (2005). https://doi.org/10.1109/CINE.2015.22
42. Puchta, E.D.P.; Siqueira, H.V.; Kaster, M.D.S.: Optimization tools based on metaheuristics for performance enhancement in a Gaussian adaptive PID controller. IEEE Trans. Cybern. 50, 1185–1194 (2020). https://doi.org/10.1109/TCYB.2019.2895319
43. Rao, R.V.; Savsani, V.J.; Balic, J.: Teaching–learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Eng. Optim. 44, 1447–1462 (2012). https://doi.org/10.1080/0305215X.2011.652103
44. Kumar, M.; Mishra, S.K.: Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image. Bio-Med. Mater. Eng. 28, 643–654 (2017)
45. Singh, S.; Ashok, A.; Kumar, M.; Rawat, T.K.: Adaptive infinite impulse response system identification using teacher learner based optimization algorithm. Appl. Intell. 49, 1785–1802 (2018). https://doi.org/10.1007/s10489-018-1354-4
46. Patra, J.C.; Kot, A.C.: Nonlinear dynamic system identification using Chebyshev functional link artificial neural network. IEEE Trans. Syst. Man Cybern. 32, 505–511 (2002)
47. Li, M.; He, Y.: Nonlinear system identification using adaptive Chebyshev neural networks. In: IEEE International Conference on Intelligent Computing and Intelligent Systems, pp. 243–247 (2010)
48. Nanda, S.J.; Panda, G.; Majhi, B.; Tah, P.: Improved identification of nonlinear MIMO plants using new hybrid FLANN-AIS model. In: International Advanced Computing Conference, pp. 141–146 (2009). https://doi.org/10.1109/IADCC.2009.4808996
49. Kumar, M.; Mishra, S.K.: Particle swarm optimization-based functional link artificial neural network for medical image denoising. In: Computational Vision and Robotics, pp. 105–111 (2015)
50. Arora, A.; Hote, Y.V.; Rastogi, M.: Design of PID controller for unstable system. Commun. Comput. Inf. Sci. 140, 19–26 (2011). https://doi.org/10.1007/978-3-642-19263-0_3
51. Rastogi, M.A.; Arora, Y.V.H.: Design of fuzzy logic based PID controller for an unstable system. Vol. 157, pp. 66–571. Springer, Berlin (2011)


Author information

Correspondence to Suresh Chandra Satapathy.

About this article

Cite this article

Sahoo, A.K., Mishra, S.K., Majhi, B. et al. Real-Time Identification of Fuzzy PID-Controlled Maglev System using TLBO-Based Functional Link Artificial Neural Network. Arab J Sci Eng (2021). https://doi.org/10.1007/s13369-020-05292-x


Keywords

  • System identification
  • Maglev system
  • FLANN
  • TLBO
  • Fuzzy PID