Abstract
In this chapter, we present a novel scheme for grouping first-order transition rules obtained from a partitioned time-series for fuzzy-induced neural regression. A transition rule here represents the precedence relationship between the pair of partitions containing consecutive data points in the time-series. We propose two neural network ensemble models. The first model represents a set of transition rules, each with a distinct partition in the antecedent. During the prediction phase, the neural networks whose antecedents contain the partition of the current time-series data point are triggered to produce outputs following the pre-trained rules. A pre-selector Radial Basis Function (RBF) neural network prunes the rules whose antecedents do not contain the partition of the current data point. In the first model, the partitions appearing in transition rules are described by their mid-point values during neural training, which may introduce approximation error, since a complete band of data points is represented by a single partition mid-point. The second model overcomes this problem by representing the antecedent of a transition rule as the set of membership values of a data point in fuzzy sets describing the partitions; it therefore does not require selection of neural networks by pre-selector RBF neurons. Experiments on the Sunspot time-series and on the TAIEX economic close-price time-series reveal high prediction accuracy, outperforming competitive models and indicating the applicability of the proposed methods to real-life time-series forecasting.
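As a minimal illustration of the pre-processing stage described above — equal-width partitioning, extraction of first-order transition rules, and their conversion to transition probabilities — consider the following Python sketch. This is not the authors' code (the appendix gives the full MATLAB programs); the function names here are illustrative.

```python
def make_partitions(series, num_part):
    """Split the range [min, max] into num_part equal-width intervals."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / num_part
    return [(lo + i * width, lo + (i + 1) * width) for i in range(num_part)]

def partition_of(partitions, x):
    """Index of the partition containing x."""
    for i, (a, b) in enumerate(partitions):
        if a <= x <= b:
            return i
    return len(partitions) - 1  # clamp points at the upper boundary

def transition_counts(series, partitions):
    """rules[i][j] counts first-order transitions from partition i to j."""
    n = len(partitions)
    rules = [[0] * n for _ in range(n)]
    for prev, nxt in zip(series, series[1:]):
        rules[partition_of(partitions, prev)][partition_of(partitions, nxt)] += 1
    return rules

def transition_probabilities(rules):
    """Row-normalise the counts so each antecedent's rules sum to one."""
    probs = []
    for row in rules:
        s = sum(row)
        probs.append([c / s if s else 0.0 for c in row])
    return probs
```

For a toy series [1, 2, 3, 4, 2, 1] with two partitions, the rule matrix counts two low-to-low transitions, one low-to-high, one high-to-high and one high-to-low; the first ensemble model would then be trained on the mid-point images of these rules, while the second replaces each antecedent by its vector of fuzzy membership values.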
Appendix 5.1: Source Codes of the Programs
% MATLAB Source Code of the Main Program and Other Functions for
% Time-Series Prediction by Fuzzy-induced Neural Regression
% Developed by Jishnu Mukhoti
% Under the guidance of Amit Konar and Diptendu Bhattacharya
%% Main 0
function [ partitions ] = main_0( time_series, num_part )
%Partitions the given time-series into num_part equal-width intervals.
partitions = partition(time_series,num_part);
end
%%%%%%%%%%%%%%%%%%%%
%% Main 1
function [ rules ] = main_1( time_series, partitions )
%Given a time-series and its partitions, extracts the first-order
%transition rules using sub-functions.
%plot_partitions(time_series,partitions);
rules = find_transition_rules(time_series, partitions);
end
%%%%%%%%%%%%%%%%%%
%% Main 2
function [ refined_training_set1, refined_training_set2, rule_prob ] = main_2( rules, partitions )
%A main function to prepare the training sets and refine them for
%neural-net training.
training_set = create_training_set(rules);
refined_training_set1 = refine_training_set_part_1(training_set,partitions);
refined_training_set2 = refine_training_set_part_2(refined_training_set1,partitions);
rule_prob = rule_probability(rules);
end
%%%%%%%%%%%%%%%%%%
%% Create training set
function [ res ] = create_training_set( rules )
%From the extracted rules of the time-series, this function produces the
%training sets used to train the neural networks.
r2 = rules;
num_part = size(rules,1);
%Count of neural networks required to train based on the given rules
nn_count = 1;
flag = 0;
while flag == 0
    index = 1;
    flag = 1;
    for i = 1:num_part
        for j = 1:num_part
            if (r2(i,j) ~= 0)
                res(index,1,nn_count) = i;
                res(index,2,nn_count) = j;
                r2(i,j) = 0;
                index = index + 1;
                flag = 0;
                break;
            end
        end
    end
    nn_count = nn_count + 1;
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Create training set part 2
function [ ts ] = create_training_set_part2( train_series, partitions )
%Builds the training set for the second model: each row holds the fuzzy
%membership values of a data point followed by the next data point.
l = length(train_series);
num_part = size(partitions,1);
ts = zeros(l-1, num_part+1);
for i = 1:l-1
    mv = gauss_mf(partitions,train_series(i));
    ts(i,1:end-1) = mv;
    ts(i,end) = train_series(i+1);
end
end
%%%%%%%%%%%%%%%%%%%%%%%%
%% Error Metrics
function [ rms, ms, nms ] = error_metrics( actual, pred )
%Function to compute the RMSE, MSE and NMSE errors.
rms = rmse(actual, pred);
ms = rms^2;
nms = nmse(pred, actual);
end
%%%%%%%%%%%%%%%%%%%%%
%% Find transition rules
function [ res ] = find_transition_rules( series, partitions )
%Finds the first-order transition rules given the time-series and its partitions.
num_part = size(partitions,1);
len = length(series);
res = zeros(num_part,num_part);
for i = 1:len-1
    prev_part = part_data_pt(partitions,series(i));
    next_part = part_data_pt(partitions,series(i+1));
    res(prev_part,next_part) = res(prev_part,next_part) + 1;
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%
%% Gaussian Membership Functions
function [ mem_val ] = gauss_mf( partitions, point )
%A function to take a point and return its membership values in all the
%Gaussian membership functions, one per partition.
num_part = size(partitions,1);
mem_val = zeros(num_part,1);
for i = 1:num_part
    mid = (partitions(i,1) + partitions(i,2))/2;
    sig = partitions(i,2) - mid;
    mem_val(i) = gaussmf(point,[sig mid]);
end
end
%%%%%%%%%%%%%%%%%%%%%
% lorenz - Program to compute the trajectories of the Lorenz
% equations using the adaptive Runge-Kutta method.
clear;  help lorenz;
%* Set initial state x,y,z and parameters r,sigma,b
state = input('Enter the initial position [x y z]: ');
r = input('Enter the parameter r: ');
sigma = 10.;   % Parameter sigma
b = 8./3.;     % Parameter b
param = [r sigma b];  % Vector of parameters passed to rka
tau = 1;       % Initial guess for the timestep
err = 1.e-3;   % Error tolerance
%* Loop over the desired number of steps
time = 0;
nstep = input('Enter number of steps: ');
for istep=1:nstep
  %* Record values for plotting
  x = state(1); y = state(2); z = state(3);
  tplot(istep) = time;  tauplot(istep) = tau;
  xplot(istep) = x;  yplot(istep) = y;  zplot(istep) = z;
  if( rem(istep,50) < 1 )
    fprintf('Finished %g steps out of %g\n',istep,nstep);
  end
  %* Find new state using adaptive Runge-Kutta
  [state, time, tau] = rka(state,time,tau,err,'lorzrk',param);
end
%* Print max and min time step returned by rka
fprintf('Adaptive time step: Max = %g,  Min = %g \n', ...
        max(tauplot(2:nstep)), min(tauplot(2:nstep)));
%* Graph the time series x(t)
figure(1); clf;  % Clear figure 1 window and bring forward
plot(tplot,xplot,'-')
xlabel('Time');  ylabel('x(t)')
title('Lorenz model time series')
pause(1)  % Pause 1 second
%* Graph the x,y,z phase space trajectory
figure(2); clf;  % Clear figure 2 window and bring forward
% Mark the location of the three steady states
x_ss(1) = 0;              y_ss(1) = 0;       z_ss(1) = 0;
x_ss(2) = sqrt(b*(r-1));  y_ss(2) = x_ss(2); z_ss(2) = r-1;
x_ss(3) = -sqrt(b*(r-1)); y_ss(3) = x_ss(3); z_ss(3) = r-1;
plot3(xplot,yplot,zplot,'-',x_ss,y_ss,z_ss,'*')
view([30 20]);  % Rotate to get a better view
grid;           % Add a grid to aid perspective
xlabel('x'); ylabel('y'); zlabel('z');
title('Lorenz model phase space');
%%%%%%%%%%%%%%%%%%%%%%%%%%
%% NMSE Calculation
function [ err ] = nmse( vec1, vec2 )
%Function to compute the normalized mean square error: the sum of squared
%prediction errors divided by the sum of squared deviations of the actual
%series (vec2) from its mean.
v = abs(vec1 - vec2);
v = v.^2;
s1 = sum(v);
s2 = sum((vec2 - mean(vec2)).^2);
err = s1/s2;
end
%%%%%%%%%%%%%%%%%%%%%%%%
%% Script to create, execute and test the neural net model
%% Creating the data sets and partitioning them into sets of 1000 data points %%
clear; close all; clc;
load 'sunspot.txt';
data = sunspot(:,4);
run = ceil(size(data,1)/1000);
data_sets = zeros(length(data),run);
j = 1;
% for i = 1:run
%     data_sets(:,i) = data(j:j+999);
%     j = j + 1000;
% end
data_sets(:,1) = data;
%% Training neural net and predicting for each test run of the data %%
num_part = 20;
rmse_val = zeros(run,1);
%TODO: Convert the second 1 to run !!!!
for i = 1:1
    %Separate the series into training and testing periods
    dt = data_sets(:,i);
    l = floor(0.5*length(dt));
    train_series = dt(1:l);
    test_series = dt(l+1:end);
    partitions = main_0(dt,num_part);
    plot_partitions(train_series, partitions);
    rules = main_1(train_series, partitions);
    [rts1, rts2, rule_prob] = main_2(rules, partitions);
    fprintf('Training the neural networks for part 1\n');
    nets1 = train_neural_nets(rts1);
    % fprintf('Training the neural networks for part 2\n');
    % nets2 = train_neural_nets2(rts2);
    %Prediction phase
    pred11 = zeros(length(dt)-l,1);
    pred12 = zeros(length(dt)-l,1);
    %pred2 = zeros(200,1);
    fprintf('Running test cases ...\n');
    for j = l:(length(dt)-1)
        fprintf('--------Iteration %d-------------------\n',j-l+1);
        inp = dt(j);
        [out11,out12] = prediction(inp,rule_prob,nets1,partitions);
        %out2 = prediction2(inp,rule_prob,nets2,partitions);
        pred11(j-l+1) = out11;
        pred12(j-l+1) = out12;
        %pred2(j-799) = out2;
    end
    rmse_val(i) = rmse(test_series, pred11);
    %Plot the predictions
    figure;
    plot((1:(length(dt)-l))',test_series,'k*-');
    hold on;
    plot((1:(length(dt)-l))',pred11,'r*-');
    %plot((1:100)',pred12,'b*-');
    %plot((1:200)',pred2,'b*-');
end
%%%%%%%%%%%%%%%%%%%%%%%
%% Loading data and preparing the training set %%
close all; clear; clc;
load 'data.txt';
run = ceil(size(data,1)/1000);
data_sets = zeros(length(data),run);
j = 1;
% for i = 1:run
%     data_sets(:,i) = data(j:j+999);
%     j = j + 1000;
% end
data_sets(:,1) = data;
%% Train the neural network %%
num_part = 40; rmse_val = zeros(run,1);
for i = 1:run
    %Separate the series into training and testing periods
    dt = data_sets(:,i);
    l = floor(0.8*length(dt));
    train_series = dt(1:l);
    test_series = dt(l+1:end);
    partitions = main_0(dt,num_part);
    ts = create_training_set_part2(train_series, partitions);     net = train_part2(ts);
    %Prediction phase
    preds = zeros(length(dt)-l,1);
    for j = l:(length(dt)-1)
        fprintf('--------Iteration %d-------------------\n',j-l+1);
        inp = dt(j);
        preds(j-l+1) = predict_part2(net,inp,partitions);
    end
    %Calculate rmse and plot%
    rmse_val(i) = rmse(test_series, preds);
    figure;
    plot((1:(length(dt)-l))',test_series,'k*-');
    hold on;
    plot((1:(length(dt)-l))',preds,'r*-');
end
%%%%%%%%%%%%%%%%%%%
%% Part data partition
function [ res ] = part_data_pt( partitions, point )
%A function to find the partition to which a data point belongs.
res = 0;
num_part = size(partitions, 1);
for i = 1:num_part
    if ((point >= partitions(i,1)) && (point <= partitions(i,2)))
        res = i;
        break;
    end
end
end
%%%%%%%%%%%%%%%%%%%%
%% Partitioning
function [ res ] = partition( series, num_part )
%A function to partition the given time-series into the specified number of
%equal-width partitions.
mx = max(series);
mn = min(series);
diff = mx-mn;
part_width = diff/num_part;
res = zeros(num_part,2);
temp = mn;
for i = 1:num_part
    res(i,1) = temp;
    temp = temp + part_width;
    res(i,2) = temp;
end
end
%%%%%%%%%%%%%%%%
%% Plotting partitions
function [ ] = plot_partitions( series, partitions )
%Plots the time-series and its partition boundaries.
plot((1:length(series))',series,'k*-');
hold on;
for i = 1:(size(partitions,1))
    line([1,length(series)],[partitions(i,1),partitions(i,1)]);
end
n = size(partitions,1);
line([1,length(series)],[partitions(n,2),partitions(n,2)]);
end %%%%%%%%%%%%%
%% Given a time-series data point, the function uses the trained neural nets
%% to make a future prediction.
function [ s1,s2 ] = prediction( point, rule_prob, nets, partitions )
nn_count = size(nets,2);
prev_part = part_data_pt(partitions,point);
num_part = size(partitions,1);
preds = zeros(nn_count,1);
preds2 = zeros(nn_count,1);
probs = zeros(nn_count,1);
for i = 1:nn_count
    pred = nets(i).net(point);
    preds2(i) = pred;
    %Clamp out-of-range predictions to the partition boundaries
    if (pred > partitions(num_part,2))
        pred = partitions(num_part,2);
    end
    if (pred < partitions(1,1))
        pred = partitions(1,1);
    end
    next_part = part_data_pt(partitions,pred);
    prob = rule_prob(prev_part, next_part);
    preds(i) = pred;
    probs(i) = prob;
end
%Process the prob vector: normalise, falling back to uniform weights
mx = sum(probs);
if mx ~= 0
    probs = probs/mx;
else
    for i = 1:nn_count
        probs(i) = 1/nn_count;
    end
end
s1 = preds .* probs;
s1 = sum(s1);
s2 = mean(preds2);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Prediction 2
function [ s1, s2 ] = prediction2( point, rule_prob, nets, partitions )
%Given a time-series data point, the function uses the trained neural nets
%to make a future prediction from its fuzzy membership values.
nn_count = size(nets,2);
prev_part = part_data_pt(partitions,point);
num_part = size(partitions,1);
preds = zeros(nn_count,1);
preds2 = zeros(nn_count,1);
probs = zeros(nn_count,1);
fprintf('----------------------------------------------------------------------\n');
fprintf('Input given: %f\n',point);
fprintf('Input partition: %d\n',prev_part);
for i = 1:nn_count
    fprintf('Output for neural network %d\n',i);
    mv = gauss_mf(partitions,point);
    pred = nets(i).net(mv);
    preds2(i) = pred;
    %Clamp out-of-range predictions to the partition boundaries
    if (pred > partitions(num_part,2))
        pred = partitions(num_part,2);
    end
    if (pred < partitions(1,1))
        pred = partitions(1,1);
    end
    next_part = part_data_pt(partitions,pred);
    prob = rule_prob(prev_part, next_part);
    preds(i) = pred;
    probs(i) = prob;
end
%Process the prob vector: normalise, falling back to uniform weights
mx = sum(probs);
if mx ~= 0
    probs = probs/mx;
else
    for i = 1:nn_count
        probs(i) = 1/nn_count;
    end
end
for i = 1:nn_count
    fprintf('Output for neural network %d\n',i);
    fprintf('Prediction: %f\n',preds(i));
    fprintf('Probability of transition: %f\n',probs(i));
end
s1 = preds .* probs;
s1 = sum(s1);
s2 = mean(preds2);
fprintf('Value of overall prediction by weightage : %f\n', s1);
fprintf('Value of overall prediction by simple average: %f\n', s2);
%pause(1);
end
%%%%%%%%%%%%%%%%%%%%%
%% Prediction part 2
function [ res ] = predict_part2( net, point, partitions )
%Predicts the next data point from the fuzzy membership values of the
%current point using the trained network.
mv = gauss_mf(partitions, point);
res = net(mv);
end
%%%%%%%%%%%%%%%%%%%
%A function that refines the training set to form a mid-value to mid-value mapping.
function [ res ] = refine_training_set_part_1( training_set, partitions )
num_part = size(partitions,1);
mid_vals = zeros(num_part, 1);
for i = 1:num_part
    mid_vals(i) = (partitions(i,1) + partitions(i,2))/2;
end
nn_count = size(training_set,3);
rows = size(training_set,1);
res = zeros(size(training_set));
for i = 1:nn_count
    train = training_set(:,:,i);
    for j = 1:rows
        if train(j,1) ~= 0
            prev = train(j,1);
            next = train(j,2);
            train(j,1) = mid_vals(prev);
            train(j,2) = mid_vals(next);
        end
    end
    res(:,:,i) = train;
end
end
%%%%%%%%%%%%%%%%%%%%%%%%
%A function to produce a training set for training a neural net using fuzzy
%membership values of the input time-series value.
function [ res ] = refine_training_set_part_2( ref_training_set, partitions )
nn_count = size(ref_training_set,3);
rows = size(ref_training_set,1);
num_part = size(partitions,1);
res = zeros(rows,num_part+1,nn_count);
for i = 1:nn_count
    tr = ref_training_set(:,:,i);
    for j = 1:rows
        if tr(j,1) ~= 0
            res(j,1:end-1,i) = (gauss_mf(partitions,tr(j,1)))';
            res(j,end,i) = tr(j,2);
        end
    end
end
end
%%%%%%%%%%%%%%%%%%%%%%%
%Find the RMSE of vec1 and vec2.
function [ res ] = rmse( vec1, vec2 )
v = abs(vec1 - vec2);
v = v.^2;
m = mean(v);
res = sqrt(m);
end
%%%%%%%%%%%%%%%%%%%%%%%%%
%A function to convert the rule matrix to a transition probability matrix.
function [ res ] = rule_probability( rules )
num_part = size(rules,1);
s = sum(rules,2);
res = zeros(size(rules));
for i = 1:num_part
    if s(i) ~= 0
        res(i,:) = rules(i,:)/s(i);
    else
        res(i,:) = 0;
    end
end
end
%%%%%%%%%%%%%%%%%%%%
%A function to train the neural networks on the given data.
function [ a ] = train_neural_nets( refined_training_set )
nn_count = size(refined_training_set,3);
r = size(refined_training_set,1);
%Count the neural nets that have at least a few training pairs
nn_rc = 0;
for i = 1:nn_count
    tr = refined_training_set(:,:,i);
    idx = 1;
    while idx <= r
        if tr(idx,1) == 0
            break;
        end
        idx = idx + 1;
    end
    if idx >= 5
        nn_rc = nn_rc + 1;
    end
end
nn_count = nn_rc;
for i = 1:nn_count
    %Prepare the training data
    tr = refined_training_set(:,:,i);
    idx = 1;
    while idx <= r
        if tr(idx,1) == 0
            break;
        end
        idx = idx + 1;
    end
    tr = tr(1:idx-1,:);
    %Code for the neural net
    a(i).net = feedforwardnet(10);
    a(i).net = train(a(i).net,(tr(:,1))',(tr(:,2))');
end
end
%%%%%%%%%%%%%%%%%%
%A function to train the neural networks on the given data.
function [ a ] = train_neural_nets2( refined_training_set )
nn_count = size(refined_training_set,3);
r = size(refined_training_set,1);
num_part = size(refined_training_set,2) - 1;
%Count the neural nets that have at least a few training pairs
nn_rc = 0;
for i = 1:nn_count
    tr = refined_training_set(:,:,i);
    idx = 1;
    while idx <= r
        s = sum(tr(idx,1:end-1));
        if s == 0
            break;
        end
        idx = idx + 1;
    end
    if idx >= 5
        nn_rc = nn_rc + 1;
    end
end
nn_count = nn_rc;
for i = 1:nn_count
    %Prepare the training data
    tr = refined_training_set(:,:,i);
    idx = 1;
    while idx <= r
        s = sum(tr(idx,1:end-1));
        if s == 0
            break;
        end
        idx = idx + 1;
    end
    tr = tr(1:idx-1,:);
    %Code for the neural net
    a(i).net = feedforwardnet(num_part+10);
    a(i).net = train(a(i).net,(tr(:,1:end-1))',(tr(:,end))');
end
end
%%%%%%%%%%%%%%%%%
%% Training Part 2 for the fuzzy-membership training set
function [ net ] = train_part2( ts )
%Trains a single feed-forward network on the fuzzy-membership training set.
num_part = size(ts,2) - 1;
net = feedforwardnet(num_part + 10);
net = train(net, (ts(:,1:end-1))',(ts(:,end))');
end
%%%%%%%%%%%%%%%%%%%%
Copyright information
© 2017 Springer International Publishing Switzerland
Cite this chapter
Konar, A., Bhattacharya, D. (2017). Grouping of First-Order Transition Rules for Time-Series Prediction by Fuzzy-Induced Neural Regression. In: Time-Series Prediction and Applications. Intelligent Systems Reference Library, vol 127. Springer, Cham. https://doi.org/10.1007/978-3-319-54597-4_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-54596-7
Online ISBN: 978-3-319-54597-4