Enhanced Living Environments, pp. 53–79
Combining Machine Learning and Metaheuristics Algorithms for Classification Method PROAFTN
Abstract
Supervised learning classification algorithms are among the most successful and widely used techniques in ambient assisted living environments. However, the usual supervised learning approaches face issues that limit their application, especially in knowledge interpretation and in dealing with very large, unbalanced labeled data sets. To address these issues, the fuzzy classification method PROAFTN was proposed. PROAFTN belongs to the family of learning algorithms and determines fuzzy resemblance measures by generalizing the concordance and discordance indices used in outranking methods. The main goal of this chapter is to show how metaheuristics combined with inductive learning techniques can improve the performance of the PROAFTN classifier. The improved PROAFTN classifier is described and compared to well-known classifiers in terms of learning methodology and classification accuracy. Throughout this chapter we show that metaheuristics, when embedded in the PROAFTN method, can solve classification problems efficiently.
Keywords
Machine learning, Supervised learning, PROAFTN, Metaheuristics
1 Introduction
In this chapter we introduce and compare various algorithms that have been used to enhance the performance of the classification method PROAFTN. PROAFTN is a supervised learning method that learns from a training set and builds a set of prototypes to classify new objects [10, 11]. Supervised learning classification methods have been applied extensively in Ambient Assisted Living (AAL) to data generated by sensors [36]. The enhanced algorithm can be used, for instance, for activity recognition and behavior analysis in AAL from sensor data [43], or for the classification of daily living activities in a smart home using the generated sensor data [36]. Hence, the enhanced PROAFTN classifier can be integrated into active and assisted living systems, as well as into smart-home health care monitoring frameworks, like any of the classifiers used in the comparative study presented in this chapter [47]. This chapter is concerned with supervised learning, where the given samples or objects have known class labels (the training set) and the goal is to build a model from these data to classify unlabeled instances (the testing data). We focus on classification problems in which classes are identified with discrete, or nominal, values indicating for each instance to which of the classes residing in the data set it belongs [21, 60]. Supervised classification requires a model that captures the behaviors and characteristics of the available objects or samples in the training set; this model is then used to assign a predefined class to each new object [31]. A variety of research disciplines, such as statistics [60], Multiple Criteria Decision Aid (MCDA) [11, 22], and artificial intelligence [39], have addressed the classification problem.
The field of MCDA [10, 63] includes a wide variety of tools and methodologies developed to help a decision maker (DM) select from finite sets of alternatives according to two or more criteria [62]. In MCDA, classification problems can be distinguished from other classification problems within the machine learning framework from two perspectives [2]. The first concerns the characteristics describing the objects, which are assumed to take the form of decision criteria, providing not only a description of the objects but also some additional preferential information associated with each attribute [22, 51]. The second concerns the nature of the classification pattern, which can be ordinal, known as sorting [35], or nominal, known as multicriteria classification [10, 11, 63]. Classification-based machine learning models usually fail to address these issues, focusing mainly on the accuracy of the results obtained from the classification algorithms [62].
This chapter is devoted to the classification method based on preference relational models, known as outranking relational models, as described by Roy [52] and Vincke [59]. The method presented here employs a partial comparison, on each attribute, between the objects to be classified and the prototypes of the classes. It then applies a global aggregation using the concordance and non-discordance principle [45]. It therefore avoids resorting to a conventional distance that aggregates the scores of all attributes into a single value unit. Hence, it helps overcome some of the difficulties encountered when data are expressed in different units, and removes the need to find the correct data preprocessing and normalization methods. The concordance and non-discordance principle used by PROAFTN belongs to the MCDA field developed by Roy [52, 54]. Moreover, Zopounidis and Doumpos [63] divide MCDA-based classification problems into two categories: sorting problems, for methods that utilize a preferential ordering of classes, and multicriteria classification (nominal sorting), where there is no preferential ordering of classes. In the MCDA field, PROAFTN is considered a nominal sorting or multicriteria classification method [10, 63]. The main characteristic of multicriteria classification is that the classification models do not result automatically from the training set alone but also depend on the judgment of an expert. In this chapter we show how techniques from machine learning and optimization can determine accurate parameters for the fuzzy classification method PROAFTN [11]. When applying PROAFTN, we need to learn the values of several parameters; in our proposed method these include the boundaries of the intervals that define the prototype profiles of the classes, the attributes' weights, etc. To determine the attributes' intervals, PROAFTN applies the discretization technique described by Ching et al. [20] to a set of preclassified objects forming a training set [13].
Even though these approaches offer good-quality solutions, they still require considerable computational time. The focus of this chapter is the application of different optimization techniques based on metaheuristics for learning the PROAFTN method. To apply PROAFTN to very large data sets, many parameters must be set. If one were to use exact optimization methods to infer these parameters, the computational effort required would be an exponential function of the problem size. Therefore, it is sometimes necessary to abandon the search for the optimal solution with deterministic algorithms and simply seek a good solution in a reasonable computational time using metaheuristic algorithms. In this chapter, we show how an inductive learning method based on metaheuristic techniques can lead to efficient multicriteria classification data analysis.

The PROAFTN method can apply two learning approaches: deductive (knowledge-based) and inductive learning. In the deductive approach, the expert establishes the required parameters for the studied problem; for example, the experts' knowledge or rules can be expressed as intervals, which can easily be implemented to build the prototypes of the classes. In the inductive approach, the parameters and the classification models are obtained and learned automatically from the training data set.
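As an illustration of the deductive route, the sketch below shows how an expert's rules could be encoded directly as attribute intervals forming a crisp prototype; all attribute names and thresholds here are hypothetical examples, not values taken from the chapter.

```python
# Hypothetical deductive prototype: an expert encodes rules of thumb as
# attribute intervals for a class, with no training data needed.
expert_prototype = {
    "class": "high_risk",
    "intervals": {                    # attribute: (S1, S2) interval
        "systolic_bp": (140.0, 200.0),
        "glucose": (126.0, 300.0),
    },
}

def matches(prototype, obj):
    """An object fits the prototype if every attribute value falls inside
    the expert's interval (a crisp version of strong indifference)."""
    return all(s1 <= obj[attr] <= s2
               for attr, (s1, s2) in prototype["intervals"].items())
```

In the inductive approach, the same interval structure would instead be filled in automatically from the training data.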

PROAFTN uses the outranking and preference modeling proposed by Roy [52], and it can hence be used to gain understanding of the problem domain.

PROAFTN uses fuzzy sets to decide whether an object belongs to a class or not. The fuzzy membership degree gives an idea of the object's weak and strong membership in the corresponding classes.
The overriding goal of this study is to present a generalized framework for learning the classification method PROAFTN, and then to compare the performance and efficiency of the learned method against well-known machine learning classifiers.
We shall conclude that integrating machine learning techniques and metaheuristic optimization into the PROAFTN method leads to a significantly more robust and efficient data classification tool.
The rest of the chapter is organized as follows: Sect. 2 overviews the PROAFTN methodology and its notations. Section 3 explains the generalized learning framework for PROAFTN. In Sect. 4 the results of our experiments are reported. Finally, conclusions and future work are drawn in Sect. 5.
2 PROAFTN Method
This section describes the PROAFTN procedure, which belongs to the class of supervised learning methods for solving classification problems. Based on fuzzy relations between the objects being classified and the prototypes of the classes, it seeks to define a membership degree between the objects and the classes of the problem [11]. The PROAFTN method is based on an outranking relation, as an alternative to the Euclidean distance, through the calculation of an indifference index between the object to be assigned and the prototypes of the classes obtained in the training phase. Hence, to assign an object to a class, PROAFTN follows the rule known as the concordance and non-discordance principle, as used by outranking relations: if the object a is judged indifferent or similar to a prototype of the class according to the majority of attributes (the concordance principle), and no attribute uses its veto against the assertion "a is indifferent to this prototype" (the non-discordance principle), then the object a is considered indifferent to this prototype and should be assigned to the class of this prototype [11, 52].
PROAFTN has been applied to many real-world practical problems, such as acute leukemia diagnosis [14], asthma treatment [56], cervical tumor segmentation [50], Alzheimer's diagnosis [18], e-health [15], optical fiber design [53], astrocytic and bladder tumor grading by means of a computer-aided diagnosis image analysis system [12], and image processing and classification [1]. PROAFTN has also been applied to intrusion detection and the analysis of cyberattacks [24, 25]. Singh and Arora [55] present an interesting application of the fuzzy classification method PROAFTN to network intrusion detection; they find that PROAFTN outperforms the well-known Support Vector Machine classifier [55]. The following subsections describe the notations, the classification methodology, and the inductive approach used by PROAFTN.
2.1 PROAFTN Notations
Notations and parameters used by the PROAFTN method
2.2 Fuzzy Intervals
Let A represent a set of objects known as the training set, and consider a new object a to be classified. Let a be described by a set of m attributes \({\{g_1,g_2,...,g_m\}}\), and let the k classes be \({\{C^1,C^2,...,C^k\}}\). The different steps of the procedure are as follows:
2.3 Computing the Fuzzy Indifference Relation

case 1 (strong indifference):
\(C_{jh}^i(a,b_i^h) = 1\) \(\Leftrightarrow g_j(a) \in [S_{jh}^1, S_{jh}^2]\); (i.e., \(S_{jh}^1 \le g_j(a) \le S_{jh}^2\))

case 2 (no indifference):
\(C_{jh}^i(a,b_i^h) = 0\) \( \Leftrightarrow g_j(a) \le q_{jh}^1\), or \(g_j(a) \ge q_{jh}^2\)

case 3 (weak indifference):
The value of \(C_{jh}^i(a,b_i^h) \in (0,1)\) is calculated based on Eq. (4). (i.e., \(g_j(a)\) \(\in \) \([q_{jh}^1, S_{jh}^1]\) or \(g_j(a)\) \(\in \) \([S_{jh}^2, q_{jh}^2]\))
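The three cases above can be sketched as a single piecewise function. This is only an illustrative implementation: it assumes, as in the original PROAFTN papers, that Eq. (4) (not reproduced in this excerpt) interpolates linearly between the thresholds, and it requires strict ordering \(q_{jh}^1 < S_{jh}^1\) and \(S_{jh}^2 < q_{jh}^2\).

```python
def partial_indifference(g, s1, s2, q1, q2):
    """Partial fuzzy indifference C_jh(a, b) between the value g = g_j(a)
    and a prototype interval [s1, s2] with outer thresholds q1 < s1 and
    q2 > s2 (assumed strict to avoid degenerate slopes).

    Case 1 (strong indifference): g in [s1, s2]              -> 1
    Case 2 (no indifference):     g <= q1 or g >= q2         -> 0
    Case 3 (weak indifference):   g in (q1, s1) or (s2, q2)  -> linear in (0, 1)
    """
    if s1 <= g <= s2:            # case 1: strong indifference
        return 1.0
    if g <= q1 or g >= q2:       # case 2: no indifference
        return 0.0
    if g < s1:                   # case 3: left slope, rising from q1 to s1
        return (g - q1) / (s1 - q1)
    return (q2 - g) / (q2 - s2)  # case 3: right slope, falling from s2 to q2
```

For example, with \([S^1, S^2] = [4, 6]\) and \([q^1, q^2] = [2, 8]\), a value of 3 lies halfway up the left slope and receives a partial indifference of 0.5.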
Performance matrix of prototypes of the class \(C^h\) according to their partial fuzzy indifference relation with an object a to be classified.
\(g_1\)  \(g_2\)  ...  \(g_j\)  ...  \(g_m\)  

\(b_1^1\)  \(C_{11}^1(a,b^1_1)\)  \(C_{21}^1(a,b^1_1)\)  ...  \(C^1_{j1}(a,b^1_1)\)  ...  \(C^1_{m1}(a,b^1_1)\) 
\(b_2^1\)  \(C_{11}^2(a,b^1_2)\)  \(C_{21}^2(a,b^1_2)\)  ...  \(C^2_{j1}(a,b^1_2)\)  ...  \(C^2_{m1}(a,b^1_2)\) 
\(\vdots \)  \(\vdots \)  \(\vdots \)  ...  \(\vdots \)  ...  \(\vdots \) 
\(b_i^h\)  \(C_{1h}^i(a,b_i^h)\)  \(C_{2h}^i(a,b_i^h)\)  ...  \(C^i_{jh}(a,b^h_i)\)  ...  \(C^i_{mh}(a,b^h_i)\) 
\(\vdots \)  \(\vdots \)  \(\vdots \)  ...  \(\vdots \)  ...  \(\vdots \) 
\(b_{L_k}^k\)  \(C_{1k}^{L_k}(a,b^k_{L_k})\)  \(C_{2k}^{L_k}(a,b^k_{L_k})\)  ...  \(C^{L_k}_{jk}(a,b^k_{L_k})\)  ...  \(C^{L_k}_{mk}(a,b^k_{L_k})\) 
2.4 Evaluation of the Membership Degree
2.5 Assignment of an Object to the Class
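Since the formulas of Sects. 2.4 and 2.5 are not reproduced in this excerpt, the following is only a hedged sketch of the usual PROAFTN aggregation [11]: a weighted sum of the partial indifference relations, cancelled when any attribute raises a full veto, followed by assignment to the class whose prototypes give the highest membership degree. The exact published aggregation differs in detail.

```python
def indifference_index(concordances, weights, discordances=None):
    """Global indifference I(a, b_i^h): weighted sum of the partial
    indifference relations C_jh(a, b_i^h), cancelled whenever an attribute
    raises a full veto (the "non-discordance" principle)."""
    if discordances is not None and any(d >= 1.0 for d in discordances):
        return 0.0  # veto: a cannot be indifferent to this prototype
    return sum(w * c for w, c in zip(weights, concordances))

def classify(indifference_by_class):
    """Membership of a in class C^h is its best indifference with any of
    that class's prototypes; a is assigned to the class with the highest
    membership degree."""
    membership = {h: max(values) for h, values in indifference_by_class.items()}
    assigned = max(membership, key=membership.get)
    return assigned, membership
```

For instance, if the indifference indices of a with the prototypes of C1 are {0.2, 0.9} and with those of C2 are {0.6, 0.4}, the memberships are 0.9 and 0.6 and a is assigned to C1.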
3 Introduced Metaheuristic Algorithms for Learning PROAFTN
The remainder of the chapter presents the different methodologies based on machine learning and metaheuristic techniques for learning the classification method PROAFTN from data. The goal of developing these methodologies is to obtain, from the training data set, the PROAFTN parameters that achieve the highest classification accuracy by applying Algorithm 1. To this end, the different learning methodologies are summarized in the following subsections.
3.1 Learn and Improve PROAFTN Based on Machine Learning Techniques
An induction approach was introduced to compose PROAFTN prototypes to be used for classification. To evaluate the performance of the proposed approaches, a comparative study was carried out between decision tree (DT) algorithms (C4.5 and ID3) and PROAFTN based on the proposed learning techniques. That study concluded that PROAFTN and the DT algorithms (C4.5 and ID3) share a very important property: both are interpretable. In terms of classification accuracy, PROAFTN was able to outperform the DTs [16].
A superior technique for learning PROAFTN was introduced using genetic algorithms (GA). More specifically, the developed technique, called GAPRO, integrates k-means and a genetic algorithm to establish PROAFTN prototypes automatically from data in near-optimal form. The purpose of using a GA was to automate and optimize the selection of the number of clusters and the thresholds used to refine the prototypes. Based on the results generated on 12 typical classification problems, the newly proposed approach enabled PROAFTN to outperform widely used classification methods. The general description of using k-means with a GA to learn the PROAFTN classifier is documented in [7, 13]. A GA is an adaptive metaheuristic search algorithm based on the concepts of natural selection and biological evolution. GA principles are inspired by Charles Darwin's theory of "survival of the fittest": the strong tend to adapt and survive while the weak tend to vanish. The GA was first introduced by John H. Holland in the early 1970s and further developed in 1975 to allow computers to evolve solutions to difficult search and combinatorial problems, such as function optimization and machine learning. As reported in the literature, a GA represents an intelligent exploitation of random search for solving optimization problems. Despite its stochastic behavior, a GA is generally quite effective for rapid global searches over large, nonlinear, and poorly understood spaces; it exploits historical information to direct the search into regions of better performance within the search space [32, 49].
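A minimal real-coded GA of the kind used in the GA stage of GAPRO could be sketched as follows. This is an illustrative sketch only: the k-means initialization is omitted, and `fitness`, the bounds, and all control parameters are assumptions rather than the published GAPRO settings.

```python
import random

def genetic_search(fitness, dim, bounds, pop_size=20, generations=50,
                   crossover_rate=0.9, mutation_rate=0.05, seed=0):
    """Minimal real-coded GA: tournament selection, uniform crossover, and
    bounded Gaussian mutation. `fitness` scores one candidate vector, e.g.
    flattened PROAFTN interval bounds and weights evaluated by training
    accuracy."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Binary tournament selection of two parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            if rng.random() < crossover_rate:
                # Uniform crossover: each gene from either parent.
                child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            # Gaussian mutation, clamped to the search bounds.
            child = [min(hi, max(lo, x + rng.gauss(0, 0.1)))
                     if rng.random() < mutation_rate else x
                     for x in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)  # keep the best-so-far
    return best
```

On a toy objective (maximizing closeness of each gene to 0.5), this loop quickly drives the population toward the optimum, which is the same mechanism GAPRO relies on with classification accuracy as the fitness.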
3.2 Learning PROAFTN Using Particle Swarm Optimization
A new methodology based on the particle swarm optimization (PSO) algorithm was introduced to learn PROAFTN. First, an optimization model was formulated; thereafter, PSO was used to solve it. PSO was employed to induce the classification model for PROAFTN, in the so-called PSOPRO, by inferring from the data the parameters giving high classification accuracy. PSOPRO was found to be an efficient approach for data classification: its performance on different classification datasets demonstrates that it outperforms well-known classification methods.
PSO is an efficient evolutionary optimization algorithm that uses the social behavior of living organisms to explore the search space. Furthermore, PSO is easy to code and requires few control parameters [17]. The proposed approach employs PSO for training and improving the efficiency of the PROAFTN classifier. In this perspective, the optimization model is first formulated, and a PSO algorithm is then used to solve it. During the learning stage, PSO uses training samples to induce the best PROAFTN parameters in the form of prototypes. These prototypes, which represent the classification model, are then used to assign unknown samples. The target is to obtain the set of prototypes that maximizes the classification accuracy on each dataset.
The steps for calculating the objective function f.
To solve the optimization problem presented in Eq. (15), PSO is adopted here. The problem dimension D (i.e., the number of parameters in the optimization problem) is described as follows: Each particle \(\mathbf {x}\) is composed of the parameters \(S^1_{jh}, S^2_{jh}, d^1_{jh}, d^2_{jh}\) and \(w_{jh}\), for all \(j=1,2,...,m\) and \(h=1,2,...,k\). Therefore, each particle in the population is composed of \(D = 5 \times m \times k\) real values (i.e., \(D=dim(\mathbf {x})\)).
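A bare-bones PSO over such a D-dimensional particle could look like the following sketch. The objective, bounds, and control parameters here are illustrative assumptions, not the published PSOPRO settings.

```python
import random

def pso(objective, dim, bounds, swarm=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO maximizing `objective`. For PROAFTN, dim = 5 * m * k:
    each particle flattens (S1, S2, d1, d2, weight) for every attribute j
    and class h into one real vector."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in x]                  # personal best positions
    gbest = max(pbest, key=objective)          # global best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if objective(x[i]) > objective(pbest[i]):
                pbest[i] = x[i][:]
        gbest = max(pbest, key=objective)
    return gbest
```

In the PROAFTN setting the objective would evaluate the classification accuracy of the prototypes decoded from the particle, exactly as in the objective function f referred to above.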
3.3 Differential Evolution for Learning PROAFTN
A new learning strategy based on the Differential Evolution (DE) algorithm, called DEPRO, was proposed for obtaining the best PROAFTN parameters. DE is an efficient metaheuristic optimization algorithm based on a simple mathematical structure that mimics a complex process of evolution. Based on results generated from a variety of public datasets, DEPRO provides excellent results, outperforming the most common classification algorithms.
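The DE scheme behind DEPRO can be illustrated with a minimal DE/rand/1/bin loop. Again, the objective and the control parameters F and CR here are illustrative assumptions rather than the published DEPRO configuration.

```python
import random

def differential_evolution(objective, dim, bounds, pop_size=20, iters=80,
                           F=0.8, CR=0.9, seed=2):
    """Minimal DE/rand/1/bin maximizing `objective` over a real vector
    (for DEPRO, the vector would be the flattened PROAFTN parameter set)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct vectors, none equal to the current target i.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            # Mutation a + F*(b - c) with binomial crossover, clamped to bounds.
            trial = [min(hi, max(lo, a[d] + F * (b[d] - c[d])))
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            if objective(trial) >= objective(pop[i]):
                pop[i] = trial  # greedy selection: keep the better vector
    return max(pop, key=objective)
```

The greedy one-to-one selection is what gives DE its simple structure: each target vector is only ever replaced by a trial vector that scores at least as well.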
3.4 A Hybrid Metaheuristic Framework for Establishing PROAFTN Parameters
4 Comparative Study with PROAFTN and Well Known Classifiers
Description of datasets used in our experiments.
Dataset  Instances  Attributes  Classes  

1  BCancer  699  9  2 
2  Blood  748  4  2 
3  Heart  270  13  2 
4  Hepatitis  155  19  2 
5  HM  306  3  2 
6  Iris  150  4  3 
7  Liver  345  6  2 
8  MM  961  5  2 
9  Pima  768  8  2 
10  STAust  690  14  2 
11  TA  151  5  3 
12  Wine  178  13  3 
Performance of all the approaches for learning PROAFTN introduced in this research, based on classification accuracy (in %). The average accuracy and average rank are also included.
Dataset  GAPRO  PSOPRO  DEPRO  PSOPRORVNS  DEPRORVNS 

BCancer  96.76  97.14  96.97  97.33  97.05 
Blood  75.43  79.25  79.59  79.46  79.61 
HM  83.85  84.27  83.74  84.36  83.81 
Heart  71.95  86.04  84.17  87.05  85.37 
Hepatitis  73.84  75.73  80.36  76.27  76.10 
Iris  96.57  96.21  96.47  96.30  96.66 
Liver  71.83  69.31  71.01  70.97  70.99 
MM  84.92  82.31  84.33  84.07  84.77 
Pima  72.19  77.47  75.37  77.42  77.23 
STAust  81.78  86.09  85.62  86.10  86.04 
TA  52.44  60.55  61.80  60.62  62.72 
Wine  97.33  96.79  96.87  96.72  97.10 
Average accuracy  79.91  82.60  83.03  83.06  83.12 
Average rank  3.58  3.33  3.08  2.58  2.42 

Best approaches: DEPRORVNS and PSOPRORVNS.

Middle approaches: DEPRO and PSOPRO.

Weakest approach: GAPRO.
Experimental results based on classification accuracy (in %) measuring the performance of the well-known classifiers on the same datasets.
Dataset  C4.5 J48  NB  SVM SMO  NN MLP  kNN Ibk, k=3  PART  RForest n = 500  GLM  Deep learning 

BCancer  94.56  95.99  96.70  95.56  97.00  97.05  97.4  97.9  97.9 
Blood  77.81  75.40  76.20  78.74  74.60  79.61  76.1  74.9  78.7 
Heart  76.60  83.70  84.10  78.10  78.89  73.33  57.6  60.4  54.9 
Hepatitis  80.00  85.81  83.87  81.94  84.52  82.58  90.1  92.6  94.8 
HM  71.90  74.83  73.52  72.87  70.26  72.55  73.1  69.2  67.2 
Iris  96.00  96.00  96.00  97.33  95.33  94.00  95.3  96.7  90.7 
Liver  68.70  56.52  58.26  71.59  61.74  63.77  71.8  73.0  74.1 
MM  82.10  78.35  79.24  82.10  77.21  82.21  80.8  84.9  84.7 
Pima  71.48  75.78  77.08  75.39  73.44  73.05  77.4  78.3  75.4 
STAust  85.22  77.25  85.51  84.93  83.62  83.62  86.7  88.9  86.8 
TA  59.60  52.98  54.30  54.30  50.33  58.28  66.1  52.3  39.6 
Wine  91.55  97.40  99.35  97.40  95.45  92.86  97.8  98.9  97.7 
Mean accuracy rankings. The algorithms developed in this chapter are marked in bold.
Algorithm  Mean rank 

DEPRORVNS  4.75 
PSOPRORVNS  4.75 
h2o GLM  5.29 
PSOPRO  5.50 
DEPRO  6.08 
RForest 500  6.25 
h2o DL  7.04 
GAPRO  8.08 
SVM SMO  8.12 
NN MLP  8.12 
NB  9.54 
PART  9.62 
C4.5  10.62 
kNN  11.21 
Summary of the well-known classifiers versus PROAFTN properties (the best rating is **** and the worst is *)
In this chapter, we have presented the implementation of machine learning and metaheuristic algorithms for the parameter training of a multicriteria classification method. We have shown that learning techniques based on metaheuristics are a successful approach for optimizing the learning of the PROAFTN classification method and thus greatly improving its performance. As has been demonstrated, every classification algorithm has its strengths and limitations; whether a method is strong or weak depends on the situation or on the problem. For instance, assume the problem at hand is a medical dataset and the aim is a classification method for medical diagnostics. Suppose the executives and experts are looking for a high level of classification accuracy and, at the same time, are keen to know more about the classification process (e.g., why a patient is classified into a given disease category). In such circumstances, classifiers such as deep learning networks, kNN, or SVM may not be an appropriate choice because of the limited interpretability of their classification models. Although deep learning networks have been successfully applied to some healthcare applications, in particular medical imaging, they suffer from limitations: the limited interpretability of their classification results; the requirement for a very large, balanced, labeled data set; and the frequent need for preprocessing or a change of input domain to bring all input data to the same scale [48]. Thus, there is a need for classifiers that can reason about their outputs and still deliver good classification accuracy, such as DTs (C4.5, ID3), NB, or PROAFTN.
Based on the experimental and comparative study presented in Table 8, the PROAFTN method with our proposed learning approaches achieves good accuracy in most instances and can deal with all types of data without sensitivity to noise. PROAFTN uses pairwise comparison, so there is no need to search for a suitable data normalization technique, as is the case for other classifiers. Furthermore, PROAFTN is a transparent and interpretable classifier: it is easy to derive classification rules from the obtained prototypes. It can use both deductive and inductive learning, allowing historical data and expert judgment to be combined when composing the classification model. To sum up, there is no complete or comprehensive classification algorithm that can handle or fit all classification problems. In response to this deficiency, the major task of this work has been to review an integration of methodologies from three major fields, MCDA, machine learning, and metaheuristic-based optimization, through the aforementioned classification method PROAFTN. The target of this study was to exploit machine learning techniques and optimization approaches to improve the performance of PROAFTN. The aim is to find a suitable, comprehensive (interpretable) classification procedure that can be applied efficiently in many applications, including ambient assisted living environments.
5 Conclusions and Future Work
The target of this chapter has been to exploit machine learning techniques and optimization approaches to improve the performance of PROAFTN, in order to obtain a suitable, comprehensive (interpretable) classification procedure that can be applied efficiently in health applications, including ambient assisted living environments. This chapter describes the ability of metaheuristics, when embedded in the classification method PROAFTN, to classify new objects. To this end, we compared the improved PROAFTN methodology with those reported previously on the same data and with the same validation technique (10-fold cross-validation). In addition to reviewing several approaches to modeling and learning the classification method PROAFTN, this chapter also presents new ideas for further research in the areas of data mining and machine learning. Below are some possible directions for future research.
 1. PROAFTN has several parameters to be obtained for each attribute and for each class, which provides more information for assigning objects to the closest class. However, in some cases this may limit the speed of learning, particularly when using metaheuristics, as presented in this chapter. Possible future solutions can be summarized as follows:

Utilizing different approaches for obtaining the weights. One possible direction is to use a feature-ranking approach based on algorithms that perform well at dimensionality reduction.

Determining the interval bounds for more than one prototype before performing the optimization. This would involve establishing the intervals' bounds a priori using clustering techniques, hence speeding up the search and improving the likelihood of finding the best solutions.

 2. As is well known, the performance of metaheuristics depends on the choice of control parameters, which varies from one application to another. In this work, however, the control parameters were fixed for all applications. Better tuning of the control parameters for the metaheuristic-based PROAFTN algorithms will be investigated.
 3. To speed up the PROAFTN learning process, a possible improvement could be made by using parallel computation. Different processors can deal with the folds of the cross-validation process independently. Parallelism can also be applied in the composition of the prototypes of each class.
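The fold-level parallelism of cross-validation can be sketched as follows. This is an illustrative sketch only: `evaluate_fold` is a hypothetical stand-in for a real PROAFTN train/test run, and threads are used for simplicity (for the CPU-bound real training, process-based workers would be substituted).

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_fold(fold_id):
    """Stand-in for training PROAFTN on the other nine folds and testing
    on fold `fold_id`; here it just returns a dummy accuracy."""
    return 0.8

def parallel_cross_validation(n_folds=10):
    """The folds are independent, so they can be evaluated concurrently;
    the mean accuracy over all folds is the cross-validation score."""
    with ThreadPoolExecutor() as pool:
        accuracies = list(pool.map(evaluate_fold, range(n_folds)))
    return sum(accuracies) / n_folds
```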
 4. In this chapter, inductive learning was presented to build the classification models for the PROAFTN method. PROAFTN can also apply deductive learning, which allows the introduction of given knowledge when setting PROAFTN parameters, such as intervals and/or weights, to build the prototypes of the classes.
References
 1. Al-Obeidat, F., Al-Taani, A.T., Belacel, N., Feltrin, L., Banerjee, N.: A fuzzy decision tree for processing satellite images and Landsat data. Procedia Comput. Sci. 52, 1192–1197 (2015)
 2. Al-Obeidat, F., Belacel, N.: Alternative approach for learning and improving the MCDA method PROAFTN. Int. J. Intell. Syst. 26(5), 444–463 (2011)
 3. Al-Obeidat, F., Belacel, N., Carretero, J.A., Mahanti, P.: Automatic parameter settings for the PROAFTN classifier using hybrid particle swarm optimization. In: Farzindar, A., Kešelj, V. (eds.) AI 2010. LNCS (LNAI), vol. 6085, pp. 184–195. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13059-5_19
 4. Al-Obeidat, F., Belacel, N., Carretero, J.A., Mahanti, P.: Differential evolution for learning the classification method PROAFTN. Knowl.-Based Syst. 23(5), 418–426 (2010)
 5. Al-Obeidat, F., Belacel, N., Carretero, J.A., Mahanti, P.: A hybrid metaheuristic framework for evolving the PROAFTN classifier. Spec. J. Issues World Acad. Sci. Eng. Technol. 64, 217–225 (2010)
 6. Al-Obeidat, F., Belacel, N., Carretero, J.A., Mahanti, P.: An evolutionary framework using particle swarm optimization for classification method PROAFTN. Appl. Soft Comput. 11(8), 4971–4980 (2011)
 7. Al-Obeidat, F., Belacel, N., Mahanti, P., Carretero, J., et al.: Discretization techniques and genetic algorithm for learning the classification method PROAFTN. In: International Conference on Machine Learning and Applications, ICMLA 2009, pp. 685–688. IEEE (2009)
 8. Asuncion, A., Newman, D.: UCI machine learning repository (2007)
 9. Ban, A., Coroianu, L.: Simplifying the search for effective ranking of fuzzy numbers. IEEE Trans. Fuzzy Syst. 23(2), 327–339 (2015). https://doi.org/10.1109/TFUZZ.2014.2312204
 10. Belacel, N.: Multicriteria classification methods: methodology and medical applications. Ph.D. thesis, Free University of Brussels, Belgium (1999)
 11. Belacel, N.: Multicriteria assignment method PROAFTN: methodology and medical application. Eur. J. Oper. Res. 125(1), 175–183 (2000)
 12. Belacel, N., Boulassel, M.: Multicriteria fuzzy assignment method: a useful tool to assist medical diagnosis. Artif. Intell. Med. 21(1–3), 201–207 (2001)
 13. Belacel, N., Raval, H., Punnen, A.: Learning multicriteria fuzzy classification method PROAFTN from data. Comput. Oper. Res. 34(7), 1885–1898 (2007)
 14. Belacel, N., Vincke, P., Scheiff, J., Boulassel, M.: Acute leukemia diagnosis aid using multicriteria fuzzy assignment methodology. Comput. Methods Programs Biomed. 64(2), 145–151 (2001). https://doi.org/10.1016/S0169-2607(00)00100-0
 15. Belacel, N., Wang, Q., Richard, R.: Web-integration of PROAFTN methodology for acute leukemia diagnosis. Telemed. J. e-Health 11(6), 652–659 (2005)
 16. Belacel, N., Al-Obeidat, F.: A learning method for developing PROAFTN classifiers and a comparative study with decision trees. In: Butz, C., Lingras, P. (eds.) AI 2011. LNCS (LNAI), vol. 6657, pp. 56–61. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21043-3_7
 17. van den Bergh, F., Engelbrecht, A.: A study of particle swarm optimization particle trajectories. Inf. Sci. 176(8), 937–971 (2006). https://doi.org/10.1016/j.ins.2005.02.003
 18. Brasil Filho, A.T., Pinheiro, P.R., Coelho, A.L.V., Costa, N.C.: Comparison of two MCDA classification methods over the diagnosis of Alzheimer's disease. In: Wen, P., Li, Y., Polkowski, L., Yao, Y., Tsumoto, S., Wang, G. (eds.) RSKT 2009. LNCS (LNAI), vol. 5589, pp. 334–341. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02962-2_42
 19. Candel, A., Parmar, V., LeDell, E., Arora, A., Lanford, J.: Deep Learning with H2O, September 2016. http://h2o.ai/resources
 20. Ching, J., Wong, A.K., Chan, K.: Class-dependent discretization for inductive learning from continuous and mixed-mode data. IEEE Trans. Pattern Anal. Mach. Intell. 17(7), 641–651 (1995)
 21. Crammer, K., Singer, Y.: On the learnability and design of output codes for multiclass problems. Mach. Learn. 47(2–3), 201–233 (2002)
 22.Doumpos, M., Zopounidis, C.: A multicriteria classification approach based on pairwise comparisons. Eur. J. Oper. Res. 158(2), 378–389 (2004)MathSciNetCrossRefGoogle Scholar
 23.Dubois, D., Prade, H., Sabbadin, R.: Decision theoritic foundations of qualitative possibility theory. Eur. J. Oper. Res. 128, 459–478 (2015)CrossRefGoogle Scholar
 24.ElAlfy, E.S.M., AlObeidat, F.N.: A multicriterion fuzzy classification method with greedy attribute selection for anomalybased intrusion detection. Procedia Comput. Sci. 34, 55–62 (2014)CrossRefGoogle Scholar
 25.ElAlfy, E.S.M., AlObeidat, F.N.: Detecting cyberattacks on wireless mobile networks using multicriterion fuzzy classifier with genetic attribute selection. Mob. Inf. Syst. 501, 585432 (2015)Google Scholar
 26.Fayyad, U., Irani, K.: Multiinterval discretization of continuousvalued attributes for classification learning. In: XIII International Joint Conference on Artificial Intelligence (IJCAI 1993), pp. 1022–1029 (1993)Google Scholar
 27.Frank, E., Hall, M.A., Witten, I.H.: The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques”, Fourth edn. Morgan Kaufmann, Burlington (2016)Google Scholar
 28.Garcia, S., Luengo, J., Saez, V., Herrera, F.: A survey of discretization techniques: taxonomy and empirical analysis in supervised learning. IEEE Trans. Knowl. Data Eng. 25(4), 734–750 (2013). https://doi.org/10.1109/TKDE.2012.35CrossRefGoogle Scholar
 29.García, S., RamírezGallego, S., Luengo, J., Benítez, J.M., Herrera, F.: Data discretization: taxonomy and big data challenge. WIREs Data Mining Knowl. Discov. 6, 5–21 (2016). https://doi.org/10.1002/widm.1173CrossRefGoogle Scholar
30. Glover, F.W., Kochenberger, G.A.: Handbook of Metaheuristics. Kluwer Academic Publishers, Norwell (2003)
31. Goebel, M., Gruenwald, L.: A survey of data mining and knowledge discovery software tools. ACM SIGKDD Explor. Newslett. 1(1), 20–33 (1999)
32. Goldberg, D.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, Boston (1989)
33. Hansen, P., Mladenovic, N.: Variable neighborhood search for the p-median. Location Sci. 5(4), 207–226 (1997)
34. Hansen, P., Mladenovic, N.: Variable neighborhood search: principles and applications. Eur. J. Oper. Res. 130(3), 449–467 (2001)
35. Ishizaka, A., Nemery, P.: Assigning machines to incomparable maintenance strategies with ELECTRE-SORT. Omega 47, 45–59 (2014). https://doi.org/10.1016/j.omega.2014.03.006
36. Ivascu, T., Cincar, K., Dinis, A., Negru, V.: Activities of daily living and falls recognition and classification from the wearable sensors data. In: E-Health and Bioengineering Conference (EHB), pp. 627–630. IEEE (2017)
37. Jung, S., Moon, B.: A hybrid genetic algorithm for the vehicle routing problem with time windows. In: GECCO, pp. 1309–1316 (2002)
38. Kim, J.P., Moon, B.R.: A hybrid genetic search for circuit bipartitioning. In: GECCO, p. 685 (2002)
39. Kotsiantis, S.: Supervised machine learning: a review of classification techniques. Informatica 31, 249–268 (2007)
40. Kotsiantis, S.B., Zaharakis, I.D., Pintelas, P.E.: Machine learning: a review of classification and combining techniques. Artif. Intell. Rev. 26(3), 159–190 (2006)
41. Law, A.: Breiman and Cutler's Random Forests for Classification and Regression, October 2015. https://cran.r-project.org/web/packages/randomForest/randomForest.pdf
42. Marchant, T.: A measurement-theoretic axiomatization of trapezoidal membership functions. IEEE Trans. Fuzzy Syst. 15(2), 238–242 (2007). https://doi.org/10.1109/TFUZZ.2006.880000
43. Monekosso, D., Florez-Revuelta, F., Remagnino, P.: Ambient assisted living [guest editors' introduction]. IEEE Intell. Syst. 30(4), 2–6 (2015). https://doi.org/10.1109/MIS.2015.63
44. Nykodym, T., Kraljevic, T., Hussami, N., Rao, A., Wang, A.: Generalized Linear Models with H2O, September 2016. http://h2o.ai/resources
45. Perny, P., Roy, B.: The use of fuzzy outranking relations in preference modelling. Fuzzy Sets Syst. 49, 33–53 (1992)
46. Quinlan, J.R.: Improved use of continuous attributes in C4.5. J. Artif. Intell. Res. 4, 77–90 (1996)
47. Ranasinghe, S., Machot, F.A., Mayr, H.C.: A review on applications of activity recognition systems with regard to performance and evaluation. Int. J. Distrib. Sens. Netw. 12(8) (2016). https://doi.org/10.1177/1550147716665520
48. Ravì, D., et al.: Deep learning for health informatics. IEEE J. Biomed. Health Inform. 21(1), 4–21 (2017). https://doi.org/10.1109/JBHI.2016.2636665
49. Reeves, C.R., Rowe, J.E.: Genetic Algorithms: Principles and Perspectives. A Guide to GA Theory. Kluwer Academic Publishers, Norwell (2002)
50. Resende Monteiro, A.L., Manso Correa Machado, A., Lewer, M., Henrique, M.: A multicriteria method for cervical tumor segmentation in positron emission tomography. In: 2014 IEEE 27th International Symposium on Computer-Based Medical Systems (CBMS), pp. 205–208. IEEE (2014)
51. Roy, B.: Multicriteria Methodology for Decision Aiding. Kluwer Academic, Norwell (1996)
52. Roy, B.: Multicriteria Methodology for Decision Aiding. Nonconvex Optimization and Its Applications. Springer, Heidelberg (2013). https://doi.org/10.1007/978-1-4757-2500-1
53. Sassi, I., Belacel, N., Bouslimani, Y.: Photonic-crystal fibre modeling using fuzzy classification approach. Int. J. Recent Trends Eng. Technol. 6(2), 100–104 (2011)
54. Schärlig, A.: Décider sur plusieurs critères: panorama de l'aide à la décision multicritère. Presses polytechniques romandes, Lausanne (1985)
55. Singh, N., Arora, H.: Network intrusion detection using feature selection and PROAFTN classification. Int. J. Sci. Eng. Res. 6(4), 466–472 (2015)
56. Sobrado, F., Pikatza, J., Larburu, I., Garcia, J., de Ipiña, D.: Towards a clinical practice guideline implementation for asthma treatment. In: Conejo, R., Urretavizcaya, M., Pérez-de-la-Cruz, J. (eds.) CAEPIA/TTIA 2003. LNCS, pp. 587–596. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-25945-9_58
57. Talbi, E.G., Rahoual, M., Mabed, M.H., Dhaenens, C.: A hybrid evolutionary approach for multicriteria optimization problems: application to the flow shop. In: Zitzler, E., Thiele, L., Deb, K., Coello Coello, C.A., Corne, D. (eds.) EMO 2001. LNCS, vol. 1993, pp. 416–428. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44719-9_29
58. Talbi, E.G.: A taxonomy of hybrid metaheuristics. J. Heuristics 8(5), 541–564 (2002). https://doi.org/10.1023/A:1016540724870
59. Vincke, P.: Multicriteria Decision-Aid. Wiley, Hoboken (1992). https://books.google.ca/books?id=H2NRAAAAMAAJ
60. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann Series in Data Management Systems. Morgan Kaufmann Publishers, San Francisco (2005)
61. Wu, X.: Fuzzy interpretation of discretized intervals. IEEE Trans. Fuzzy Syst. 7(6), 753–759 (1999)
62. Zopounidis, C., Doumpos, M.: Multicriteria preference disaggregation for classification problems with an application to global investing risk. Decis. Sci. 32(2), 333–385 (2001)
63. Zopounidis, C., Doumpos, M.: Multicriteria classification and sorting methods: a literature review. Eur. J. Oper. Res. 138(2), 229–246 (2002)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.