2.1 Introduction

This chapter is divided into two main parts. The first part reviews the literature on different types of multi-criteria decision making (MCDM) methods and on their applications in different fields. The second part reviews weighting methods: it describes the different types of weighting methods, summarizes their advantages and disadvantages, discusses the criteria used to select a suitable weighting method for a particular water resources or hydrology study, and briefly reviews applications of the weighting methods within different MCDM methods.

2.2 Decision-Making Process

In most cases, the decision-making process follows the steps shown in Fig. 2.1. In the first step, the problem at hand is clearly defined. Other important requirements on which the solution of the multi-criteria model depends are then listed. In the third step, the objectives or goals of the multi-criteria problem are established. The fourth step establishes the alternatives to be considered in the decision-making process, with the objective of choosing the best one. In Step 5, the evaluation criteria are decided; the criteria should satisfy previously fixed standards, for example, a chosen criterion may vary in space and time. The sixth step is particularly important, as it involves the selection of an appropriate multi-criteria decision making method for the problem at hand. The chosen MCDM method is then applied to the list of alternatives finalized in Step 4. The final step of the decision-making process is checking the results of the model and performing a sensitivity analysis.

Fig. 2.1
figure 1

Decision making process

It is important to note that the decision-making process normally flows from top to bottom, but it may return to any of the previous steps if new information comes to light.

Yoe (2002) describes the multi-criteria decision making process as follows:

  1. Define the multi-criteria problem and objectives explicitly.

  2. List and describe alternatives for meeting the objectives or goals.

  3. Define criteria/attributes/performance indicators to measure the performance of alternatives.

  4. Carry out studies to gather data and evaluate criteria.

  5. Prepare a decision matrix by arranging alternatives against criteria.

  6. Elicit subjective or objective weights for the criteria.

  7. Rank alternatives and communicate results to interest groups.

  8. Decision-makers make the final decision using the MCDM results and the input of interest groups.

These steps are shown in Fig. 2.2.

Fig. 2.2
figure 2

The iterative steps of MCDM (after Yoe 2002)

2.3 Multi-Criteria Decision-Making

The International Society on Multiple Criteria Decision Making defines MCDM as “the study of methods and procedures by which multiple and conflicting criteria can be incorporated into the decision process.” The development of multi-criteria decision-making began in 1971. The main objective of MCDM is to provide decision-makers with a tool that helps them solve multi-criteria decision problems in which several conflicting criteria must be taken into account.

Roy (1996) defines a multi-criteria decision problem as a situation in which, having defined a set A of actions and a family F of criteria, the decision maker wishes: to determine a subset of actions considered to be the best with respect to F (choice problem); to divide A into subsets according to some norms (sorting problem); to rank the actions of A from best to worst (ranking problem); or to describe the actions and their consequences in a formalized and systematic manner so that decision-makers can evaluate them (description problem) (Schramm and Morais 2012).

In the literature, many terms have been used for MCDM, including:

  • Multi-Criteria Decision Analysis (MCDA)

  • Multi-Objective Decision Making (MODM)

  • Multi-Attributes Decision Making (MADM)

  • Multi-Dimensions Decision-Making (MDDM)

2.4 Classification of Multi-Criteria Decision-Making Methods

The literature is rich in different types of multi-criteria decision-making methods. The following are some popular MCDM methods that researchers have frequently used to solve real-world multiple-criteria problems:

  • AHP: Analytic Hierarchy Process

  • ANP: Analytic Network Process

  • ELECTRE: Elimination Et Choix Traduisant la Realite (French)—(Elimination and Choice Translating Reality) (English)

  • GP: Goal Programming

  • MACBETH: Measuring Attractiveness by a Categorical Based Evaluation Technique

  • MAUT: Multi-Attribute Utility Theory

  • MAVT: Multi-Attribute Value Theory

  • PROMETHEE: Preference Ranking Organization Method for Enrichment Evaluation

  • TOPSIS: Technique for Order Preference by Similarity to Ideal Solution

  • WSM: Weighted Sum Model

Specialists have divided multi-criteria decision-making methods into three categories whose purpose is to group the MCDM methods according to their similarities, namely: (i) multiple attribute theory; (ii) outranking methods; (iii) interactive methods. Roy (1996) classifies them as follows: (i) the unique synthesis criterion approach, eliminating any incomparability; (ii) the outranking synthesis approach, accepting incomparability; (iii) the interactive local judgment approach, with trial-and-error interaction (Schramm and Morais 2012).

  • Unique synthesis criterion approach: It consists of aggregating the different points of view into a unique function, which is then optimized. Examples include MAUT (Multi-Attribute Utility Theory; Keeney and Raiffa 1976), the SMART (Simple Multi-Attribute Rating Technique) family (Edwards 1977; Edwards and Barron 1994) and AHP (Analytic Hierarchy Process; Saaty 1987).

  • Outranking synthesis approach: It consists of developing an outranking relationship that represents the decision-maker’s preferences; this relationship is then explored to help the decision-maker solve his/her problem. Examples: ELECTRE (Elimination and Choice Translating Reality) (Belton and Stewart 2002; Roy 1996; Vincke 1992) and PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) (Brans and Vincke 1985).

  • Interactive local judgment approach: This approach proposes methods that alternate calculation steps, giving successive compromise solutions, with dialog steps that provide an extra source of information on the decision-maker’s preferences (Vincke 1992).

Classification of MCDM methods is shown in Fig. 2.3.

Fig. 2.3
figure 3

Classification of MCDM methods

2.5 Characteristics of Different Multi-Criteria Methods

No single MCDM method is recommended for every multi-criteria decision problem. Some MCDM methods can only process quantitative data in the evaluation phase of decision-making, while others can work with both quantitative and qualitative data. MCDM methods also differ in other characteristics, e.g. transparency and cost (Table 2.1).

Table 2.1 Characteristics of different multi-criteria methods (after RPA 2004)

The type of available information will largely determine which MCDM method could be used for a particular multi-criteria problem. Most quantitative methods produce performance scores as well as a ranking. In addition to a ranking, weighted performance scores provide information on the relative performance of the alternatives. Comprehensiveness is achieved if all the information is presented to decision-makers, while presenting a final ranking, or even only one best alternative, results in maximum simplicity and possibly an oversimplification.

Graphic or other presentations of the information take an intermediate position. Although a complete ranking provides maximum simplicity, in aggregating all information into a final ranking, priorities need to be included and a decision rule needs to be selected (RPA 2004).

Transparency is low across a number of the methods, suggesting that such methods should not be used if many stakeholders are involved in or concerned with decision-making. Computation is complex in some of the methods. Since software is generally available to support the use of the methods, this is in itself not an important issue. The costs of adopting methods based on the use of value/utility functions are likely to be higher than those associated with the use of AHP and outranking methods. These additional costs result from the involvement of an expert in the assessment procedure (RPA 2004).

2.6 Strengths and Weaknesses of MCDM Methods

Multi-criteria decision-making methods have been criticized by many researchers for being prone to manipulation, which may lead to a false sense of accuracy. On the other hand, supporters of the MCDM approach claim that it provides a systematic, transparent approach that enhances objectivity and generates results that can be trusted with reasonable confidence (Janssen 2001; Macharis et al. 2004). The main elements of criticism of the MCDM approach, summarized by Mutikanga (2012), are as follows:

  1. Aggregation algorithms: Different MCDM methods yield different outcomes when applied to the same multi-criteria problem. The selection of an appropriate MCDM method from a long list of MCDM methods is often not straightforward and may itself influence the final outcome of the decision-making process.

  2. Compensatory methods: Complete aggregation methods of the additive type (e.g. AHP) allow trade-offs between good performance on one criterion and poor performance on another, and important information is often lost by such aggregation (e.g. in the PROMETHEE II complete ranking). For example, poor performance on water quality could be compensated by good performance on investment cost. The underlying value judgments of the aggregation procedure are therefore debatable and probably not acceptable from a public health and regulatory point of view. A multi-criteria problem is mathematically ill-defined since an action a may be better than an action b according to one criterion and worse according to another; complete axiomatization of multi-criteria decision theory is very difficult (Munda et al. 1994).

  3. Elicitation process: The way subjective information (weights and preference thresholds) is elicited is not trivial and is likely to influence the results.

  4. Incomparable options: Because the purpose of all MCDM is to reduce the number of incomparabilities, MCDM problems are often reduced to single-criterion problems for which an optimal solution exists, completely changing the structure of the decision problem in an unrealistic way. In addition, alternatives are often reduced to a single abstract value during data aggregation, resulting in a loss of useful information. A lay person may find it easier to understand the cost of an alternative in monetary terms than an abstract value indicating that option A is better or worse than option B by, say, 0.45.

  5. Scaling effects: Some MCDM methods derive conclusions from the scales in which evaluations are expressed, which is unacceptable. For example, if two strategy options (A and B) with the same weight (0.5) have different costs (A = 10,000, B = 18,000) and their impacts on water quality improvement are A = 0.2 and B = 0.8, their overall performance scores would be A = 5000.1 and B = 9000.4. If cost were rescaled to a 0–1 scale, the relative importance of the two criteria would be better represented.

  6. Problem structuring: Results can be manipulated by the omission or addition of relevant criteria or options. MCDM methods have been reported to suffer from rank reversals upon the introduction of new options (De Keyser and Peeters 1996; Dyer 1990).

  7. Additional required information: Depending on how much additional information is required by the different MCDM methods, “black box” effects are likely to occur, compromising the ability of the decision-maker to clearly follow the decision process and evaluate the results.

  8. Uncertainty: Results are often reported to two decimal places, which gives a false sense of accuracy considering the uncertainties in the input data and their error propagation through the model. Uncertainty is also inherent in the decision-making process, in that it is difficult to quantify and represent the performance of most options by a single value.

RPA (2004) divides multi-criteria decision-making methods into different categories, gives a brief description of each MCDM method, and discusses key issues associated with some of them (see Table 2.2).

Table 2.2 Brief summary of main categories of multi-criteria decision-making methods and key issues arising from their application (after RPA 2004)

Based on the different types of information used (e.g. information on criteria, information on alternatives), Hwang and Yoon (1981) have categorized different MCDM methods (see Fig. 2.4).

Fig. 2.4
figure 4

Grouping of multiple criteria decision making methods (after Hwang and Yoon 1981)

Hajkowicz et al. (2000) present a classification based on the type of input data (i.e. quantitative, qualitative, and mixed) (Fig. 2.5).

Fig. 2.5
figure 5

Classification of MCDM methods (after Hajkowicz et al. 2000)

2.7 How to Select an Appropriate MCDM Method

Abrishamchi et al. (2005) state that selecting an appropriate MCDM method from the long list of available methods is a multi-criteria problem in itself. No single MCDM method is superior for all decision-making problems, and different researchers hold different views on this issue: Guitouni and Martel (1998) argue that different MCDM methods will yield different recommendations, while Hajkowicz and Higgins (2008) argue that the ranking of decision alternatives is unlikely to change noticeably with a different MCDM method, provided ordinal and cardinal data are handled correctly. Guitouni and Martel (1998) have nevertheless developed guidelines that can be helpful in selecting an appropriate MCDM method. A review of MCDM for water resource planning and management has shown that MCDM is mostly used for water policy evaluation, strategic planning and infrastructure selection (Hajkowicz and Collins 2007).

2.8 The Role of Weights and Their Interpretation in MCDM Methods

Many MCDM methods (e.g. ELECTRE I and II; PROMETHEE) use criteria weights in their aggregation process. These criteria weights play an important role in measuring the overall preferences of alternatives. Because their aggregation rules differ, MCDM methods use the weights in different ways, and different weighting methods have been developed accordingly. It is very important that the decision-maker (DM) understands the true meaning of these weights. Choo et al. (1999) suggested that the questions posed to the decision-maker in the weight elicitation process must convey the correct meaning of criteria weights; the questions should be direct and simple, but without compromising the underlying theoretical validity (Choo et al. 1999). In the MCDM literature, the criteria weights $w_1, \ldots, w_p$ have been given a diverse array of plausible interpretations, associated with the following (Choo et al. 1999):

  1. marginal contribution per unit of $z_k(x)$ or $r_k(x)$,

  2. indifference trade-offs or rates of substitution,

  3. gradient of the overall value function $U(Z(x))$ or $U(R(x))$,

  4. scaling factors converting into commensurate overall value,

  5. coefficients of a linear overall value function, $U(Z(x)) = \sum\nolimits_k w_k z_k(x)$ or $U(R(x)) = \sum\nolimits_k w_k r_k(x)$,

  6. relative contribution of the average criterion-specific scores,

  7. discriminating power of the criteria on the alternatives,

  8. relative contribution of the swing from worst to best on each criterion,

  9. vote values in binary choices,

  10. relative contribution of the criteria at the optimal alternative,

  11. parameters used in interactive optimization,

  12. relative information content carried in the criteria,

  13. relative functional importance of the criteria.

Table 2.3 presents a summary of the interpretations of criteria weights in MCDM methods.

Table 2.3 Meaning of criteria weights and MCDM methods (after Choo et al. 1999)

2.9 Classification of Weighting Methods

In the literature, different weighting methods have been proposed to assign weights to criteria (Pöyhönen and Hämäläinen 2001; Stewart 1992). The simplest is the ‘equal weights’ method, which distributes weights equally among all criteria; it has been applied in many decision-making problems (Wang et al. 2009).

Assigning weights to criteria is an important step in any multi-criteria evaluation, as the final results of the multi-criteria decision-making method largely depend on these weights. Tervonen et al. (2009) state that assigning weights to criteria is the most difficult task in an MCDM approach. The main purpose of a weighting method is to attach cardinal or ordinal values to the different criteria to indicate their relative importance in a multi-criteria decision-making method. These values are then used by the MCDM method in the subsequent evaluation of the alternatives. A classification of weighting methods into internal and external types is shown in Fig. 2.6.

Fig. 2.6
figure 6

Schematic diagram of the weighting methods

Wang et al. (2009) classify weighting methods into three categories: subjective, objective and combination weighting methods. The subjective methods determine criteria weights based on the preferences of the decision-makers; they make the elicitation process more explicit and are the most used for MCDM in water resources management. They include SMART, AHP, SIMOS and the Delphi method. Objective weights are obtained by mathematical methods based on the analysis of the initial data. The objective weighting procedure is less transparent and includes methods such as least mean squares (LMS), min-max deviation, entropy, TOPSIS and multi-objective optimization. The combination or optimal weighting methods are hybrids that include multiplicative and additive synthesis.

There are also other weighting methods for assigning differential weights to decision criteria. These can be divided into two categories: ‘objective weighting methods’ and ‘subjective weighting methods.’ In objective weighting methods, weights are obtained by mathematical procedures and decision-makers have no role in determining the relative importance of criteria (Wang et al. 2009). In subjective weighting methods, the process of assigning importance to criteria depends on the preferences of decision-makers; such methods have been used more commonly in different studies (e.g. Zardari 2008). A classification of weighting methods into objective, subjective, and combined types is shown in Fig. 2.7.

Fig. 2.7
figure 7

Classification of weighting methods

2.9.1 Subjective Weighting Methods

In subjective weighting methods, criteria weights are derived from the decision-maker’s judgment on the criteria; that is, the weights are determined solely according to the preferences of decision-makers. Weights determined in this way reflect the subjective judgment of the decision-maker, and the resulting analysis or rankings of alternatives can be influenced by the decision-maker’s level of knowledge and experience in the relevant field (Ahn 2011).

2.9.2 Objective Weighting Methods

In objective weighting methods, the decision-maker’s preferences on the criteria are not involved; the criteria weights are obtained automatically from mathematical algorithms or models. Such methods therefore make use of mathematical models but neglect the subjective judgment of the decision-maker (Aalianvari et al. 2012).

2.10 Popular Subjective Weighting Methods

The most popular subjective weighting methods used in previous multi-criteria decision-making studies are listed below:

  1. Direct Rating

  2. Ranking Method

  3. Point Allocation

  4. Pairwise Comparison

  5. Ratio Method

  6. Swing Method

  7. Graphical Weighting

  8. Delphi Method

  9. Simple Multi-Attribute Rating Technique (SMART)

  10. SIMOS Method

2.10.1 Direct Rating Method

The rating technique obtains a score from a decision maker to represent the importance of each criterion. It is similar to the scales used in a Likert-scale questionnaire; often the numbers 1–5, 1–7 or 1–10 are used to indicate importance (Nijkamp et al. 1990). The rating method does not constrain the decision maker’s responses as the fixed point scoring method does: it is possible to alter the importance of one criterion without adjusting the weight of another. This is an important difference between the two approaches.

An example of a survey task designed to elicit weights using the rating technique could ask a decision maker to indicate the importance of each criterion on an ordinal scale, as shown in Table 2.4.

Table 2.4 Example of weighting using a rating scale
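One common way to turn such ratings into weights is to normalize the scores by their sum, so that the weights add up to one. The sketch below is a minimal illustration of that practice; the criteria names and the 1–7 ratings are invented, not taken from Table 2.4.

```python
# Hypothetical ratings on a 1-7 importance scale; dividing each rating by the
# sum of all ratings yields normalized weights.
ratings = {"cost": 7, "water quality": 5, "recreation": 2}
total = sum(ratings.values())
weights = {c: r / total for c, r in ratings.items()}
print(weights)  # {'cost': 0.5, 'water quality': ~0.357, 'recreation': ~0.143}
```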

2.10.2 Ranking Method

The ranking method is the simplest approach for assigning weights to criteria. Essentially, the criteria are ranked in order from most important to least important. Once this is done, there are three main methods to calculate weights:

  1. rank sum,

  2. rank reciprocal, and

  3. the rank exponent method (Malczewski 1999).

In the rank sum method, each criterion’s rank position $r_j$ is converted to a weight and then normalized by the sum over all criteria.

Rank reciprocal weights are derived from the normalized reciprocals of a criterion’s rank. The rank exponent method requires the decision maker to specify the weight of the most important criterion on a 0–1 scale; this value is then used in a numerical formula. To illustrate how the weights are calculated, Table 2.5 is provided, based on the example given by Malczewski (1999). Again, $r_j$ is the rank of the criterion and n is the number of criteria.

Table 2.5 Ranking methods to assign weights (after Malczewski 1999)

These three ranking methods are attractive because of their simplicity and provide a satisfactory starting point for deriving weights in multi-criteria analysis. However, they are limited by the number of criteria to be ranked: the approach is not appropriate for a large number of criteria, since a straight ranking becomes very difficult as a first step. Another problem is the lack of any real theoretical foundation. These techniques should therefore be considered weight approximation techniques only (Malczewski 1999).
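The three rank-based formulas can be sketched in a few lines of Python. This is a minimal sketch after Malczewski (1999); for simplicity, the rank exponent parameter p is passed in directly rather than derived from the weight of the most important criterion, and the example ranks are invented.

```python
def rank_sum_weights(ranks):
    """Rank sum: w_j = (n - r_j + 1) / sum_k (n - r_k + 1)."""
    n = len(ranks)
    raw = [n - r + 1 for r in ranks]
    return [v / sum(raw) for v in raw]

def rank_reciprocal_weights(ranks):
    """Rank reciprocal: w_j = (1 / r_j) / sum_k (1 / r_k)."""
    raw = [1.0 / r for r in ranks]
    return [v / sum(raw) for v in raw]

def rank_exponent_weights(ranks, p):
    """Rank exponent: w_j = (n - r_j + 1)**p / sum_k (n - r_k + 1)**p.
    p = 0 gives equal weights; p = 1 reduces to rank sum."""
    n = len(ranks)
    raw = [(n - r + 1) ** p for r in ranks]
    return [v / sum(raw) for v in raw]

ranks = [1, 2, 3, 4]  # criterion ranks: 1 = most important
print(rank_sum_weights(ranks))          # [0.4, 0.3, 0.2, 0.1]
print(rank_reciprocal_weights(ranks))   # [0.48, 0.24, 0.16, 0.12]
print(rank_exponent_weights(ranks, 2))  # ~[0.533, 0.300, 0.133, 0.033]
```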

2.10.3 Point Allocation

In the point allocation weighting method, the decision maker assigns numbers directly to describe the criteria weights. For example, the decision maker may be asked to divide 100 points among the criteria; the more points a criterion receives, the greater its relative importance, and the criterion weights must sum to 100. In many experiments, the analysts do not fix the total number of points and subjects are asked to give any numbers they like to reflect the weights. Point allocation is easy to normalize and very easy to use, but the weights it produces are not very precise, and the method becomes difficult when the number of criteria increases to six or more.

2.10.4 Pairwise Comparison Method

The pairwise comparison method is actually a very old psychometric technique that has been used by several generations of psychologists (Whitfield 1999). It is a well-developed method of ordering criteria. Pairwise comparisons involve the comparison of each criterion against every other criterion in pairs. It can be effective because it forces the decision maker to give thorough consideration to all elements of a decision problem. The number of comparisons can be determined by:

$$ o\; = \;\frac{m(m - 1)}{2} $$

where:

  • o = the number of comparisons; and

  • m = the number of criteria

Calculating weights using the pairwise comparison method has three main steps (see Table 2.6). The first step is to develop a matrix comparing the criteria, as shown in step one of Table 2.6. Next, the intensity values are used to fill in the matrix of comparisons. Note that not all values need to be used; for example, 1, 3, 5, 7, 9 or 1, 5, 9 could be used if the user finds it difficult to distinguish between the definitions. With three criteria (price, slope, and view), the top-right part of the matrix is filled in based on the comparisons. For example, price is moderately to strongly preferred over the slope criterion and therefore receives a value of 4. The diagonal of the matrix is always 1, and the lower-left values are the reciprocals of the upper-right values, because the matrix is assumed to be reciprocal. This completes the first step.

Table 2.6 Pairwise comparison method weight calculation (after Strager 2002)

The second step is to compute the criterion weights. This is done by summing the values in each column, dividing each element by the column total, and dividing the sum of the normalized scores for each row by the number of criteria (3 criteria in this example).

The third step is to compute a consistency ratio. Many software programs, such as Criterium Decision Plus and Expert Choice, provide the consistency ratio for users. If the consistency ratio is less than 0.10, it indicates a reasonable level of consistency in the pairwise comparisons; if it is larger than 0.10, the judgments are inconsistent.
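The three steps can be sketched as follows, assuming the common column-normalization approximation for the weights and Saaty’s random indices for the consistency ratio. The 3 × 3 matrix repeats the illustrative price/slope/view comparisons and is not taken from Table 2.6.

```python
RI = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random index for n = 3, 4, 5

def pairwise_weights(A):
    """Step 2: normalize each column, then average across each row."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

def consistency_ratio(A, w):
    """Step 3: CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
    return ((lam_max - n) / (n - 1)) / RI[n]

A = [[1,   4,   5],   # price vs. price, slope, view
     [1/4, 1,   3],   # slope (lower triangle holds the reciprocals)
     [1/5, 1/3, 1]]   # view
w = pairwise_weights(A)
print(w)                        # roughly [0.665, 0.231, 0.104]
print(consistency_ratio(A, w))  # about 0.08 < 0.10, acceptably consistent
```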

The pairwise comparison method is often criticized for asking only about the relative importance of evaluation criteria, without reference to the scales on which the criteria are measured. This fuzziness may mean that decision makers interpret the questions in different and possibly erroneous ways. Also, if many criteria are being compared, the number of individual comparisons may become cumbersome; abbreviated pairwise comparisons can deal with this problem. Advantages of pairwise comparison include that only two criteria need to be considered at one time, and that the method has been tested theoretically and empirically for a variety of decision situations, including spatial decision making (Malczewski 1999).

2.10.5 Ratio Weighting Method

The ratio method (Edwards 1977) requires the decision makers to first rank the relevant criteria according to their importance. The least important criterion is assigned a weight of 10 and all others are judged as multiples of 10. The resulting raw weights are then normalized to sum to one. The ratio method is an algebraic, decomposed, direct procedure.
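A minimal sketch of this normalization follows; the criteria and judgments are invented, with “view” least important at 10 points and the others judged as multiples of it.

```python
# Ratio method: rank criteria, give the least important 10 points, judge the
# others as multiples of 10, then normalize the raw scores to sum to one.
raw = {"view": 10, "slope": 20, "price": 40}  # hypothetical ratio judgments
total = sum(raw.values())
weights = {c: v / total for c, v in raw.items()}
print(weights)  # {'view': ~0.143, 'slope': ~0.286, 'price': ~0.571}
```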

2.10.6 Swing Weighting Method

The swing method (von Winterfeldt and Edwards 1986) starts from an alternative with the worst outcomes on all criteria or attributes. The decision maker is allowed to change one criterion at a time from its worst outcome to its best, and is asked which ‘swing’ from worst to best would result in the largest improvement, which in the second largest, and so on. The criterion with the most preferred swing is the most important and is given 100 points; the magnitudes of all other swings are expressed as percentages of the largest swing. The derived percentages are the raw weights, which are normalized to yield the final weights. The method’s strength is that it takes into account the range of each criterion, and it is a relatively simple, straightforward method. However, it does not allow participants to directly compare each criterion with every other criterion.
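The final normalization step of the swing method can be sketched as below; the criteria and swing scores are invented for illustration.

```python
# The most preferred swing gets 100 points; the others are judged as
# percentages of it, and the points are normalized into weights.
swing_points = {"water quality": 100, "cost": 60, "reliability": 40}
total = sum(swing_points.values())
weights = {c: p / total for c, p in swing_points.items()}
print(weights)  # {'water quality': 0.5, 'cost': 0.3, 'reliability': 0.2}
```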

2.10.7 Graphical Weighting Method

There are many variations on graphical weighting of criteria. One approach is to have the decision maker place a mark on a horizontal line: the importance of a criterion increases as the mark is placed further toward the right end of the line. A quantitative weight can be calculated by measuring the distance from the mark to the left end of the line, and the scores are usually normalized to obtain an overall weights vector. This approach enables decision makers to express preferences in a purely visual manner. The graphical weighting technique is sometimes criticized because it permits decision makers to be carefree in assigning weights; for example, it is easy for a decision maker to place a mark on a horizontal line without considering the implications for the criteria weights. In favor of graphical methods, however, is the ease and quickness with which they can be used; many decision makers do not have sufficient time for some of the more complex and involved approaches (Hajkowicz et al. 2000). An example of the graphical weighting method is shown in Fig. 2.8.

Fig. 2.8
figure 8

Graphical weighting example

2.10.8 Delphi Method

In the Delphi method, the weights are derived in the following three stages:

  • Stage 1: Participants are chosen, initial data are gathered, and participants present their views on the policy.

  • Stage 2: A list of possible alternatives is compiled and distributed to participants. Ideas are synthesized and a smaller number of possible policy recommendations are compiled.

  • Stage 3: An amended list of alternatives is distributed, and these policy ideas are fine-tuned by the participants.

2.10.9 Simple Multi-attribute Rating Technique (SMART)

The Simple Multi-Attribute Rating Technique (SMART) was originally described by von Winterfeldt and Edwards (1986) as the whole process of rating alternatives and weighting criteria. In this method, the decision maker is asked to rank the importance of the criteria from worst levels to best levels. Ten points are assigned to the least important criterion, and increasing numbers of points are assigned to the other criteria to express their importance relative to the least important one. The weights are calculated by normalizing the sum of the points to one. On this basis, newer versions such as SMARTS and SMARTER were presented to elicit the weights.

2.10.10 SIMOS Weighting Method

Simos (1990a, b) proposed a technique allowing any decision-maker (not necessarily familiar with multi-criteria decision aiding) to think about and express the way in which he wishes to hierarchise the different criteria of a family F in a given context. The procedure also aims to communicate to the analyst the information needed to attribute a numerical value to the weight of each criterion of F when the weights are used in an ELECTRE-type method (Roy and Mousseau 1996; Roy and Bouyssou 1993). The procedure has been applied in different real-life contexts; it proved to be very well accepted by decision-makers, and the information obtained is very significant from the decision-maker’s preference point of view. However, the way Simos recommends processing the information needs revision for two main reasons:

  1. It is based on an unrealistic assumption, owing to the lack of essential information (as already underlined by Scharlig 1996).

  2. It processes criteria having the same importance (i.e., the same weight) in a way that is not robust.

2.10.11 Revised SIMOS Weighting Method

Figueira and Roy (2002) developed a revised weighting procedure. Its main innovation is associating a “playing card” with each criterion. The procedure can be summarized in four main steps, as follows (a computational sketch is given after the list):

  1. Each decision-maker (DM) is given n colored cards, one for each of the n criteria. Each card has the criterion name and the objective of the criterion inscribed on it. A number of white (blank) cards are also provided.

  2. The DM is then asked to rank the cards from the least important to the most important. If certain criteria are perceived to be of equal importance (same weight), their cards are grouped together (same rank position).

  3. The DM is asked to insert white cards between successively ranked colored cards (or groups of cards) to express the strength of preference between criteria. The number of white cards is proportional to the difference in importance between the criteria considered.

  4. The DM is finally asked to answer the question: “How many times more important is the first-ranked criterion (or group of criteria) relative to the last-ranked criterion (or group of criteria)?”
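One common formalization of this card procedure (often referred to as SRF) converts the ranks, the numbers of white cards, and the answer z to the step-4 question into weights. The sketch below is a hedged reconstruction under that formalization, not necessarily the exact algorithm of Figueira and Roy (2002); the criteria, card counts and z value are invented.

```python
def srf_weights(ranking, blanks, z):
    """ranking: groups of criteria ordered least -> most important;
    blanks[i]: number of white cards after group i;
    z: importance ratio of the most vs. least important criterion (step 4)."""
    units = sum(b + 1 for b in blanks)  # total unit gaps between rank positions
    u = (z - 1) / units                 # value of one unit gap
    k = [1.0]                           # non-normalized weight of the lowest rank
    for b in blanks:
        k.append(k[-1] + u * (b + 1))
    total = sum(k[r] * len(group) for r, group in enumerate(ranking))
    return {c: k[r] / total for r, group in enumerate(ranking) for c in group}

ranking = [["cost"], ["quality", "reliability"], ["safety"]]  # least -> most
blanks = [1, 0]  # one white card after "cost", none after the middle group
print(srf_weights(ranking, blanks, z=5.0))
# cost 0.075, quality 0.275, reliability 0.275, safety 0.375 (sum = 1)
```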

2.10.12 Fixed Point Scoring

In this method, the decision maker is required to distribute a fixed number of points among the criteria; a higher point score indicates greater importance. Often percentages are used, as this is a measure with which many decision makers are familiar. The key advantage of fixed point scoring is that it forces decision makers to make trade-offs in a decision problem: higher importance can be ascribed to one criterion only by lowering the importance of another. This presents a difficult task requiring careful consideration of the relative importance of each criterion. Fixed point scoring is the most direct means of obtaining weighting information from the decision maker, and it requires the fewest operations to transform the information supplied into a weights vector satisfying the requirements mentioned earlier.

2.11 Popular Objective Weighting Methods

The following is a list of popular objective weighting methods:

  1. Entropy method

  2. Criteria Importance Through Inter-criteria Correlation (CRITIC)

  3. Mean Weight

  4. Standard Deviation

  5. Statistical Variance Procedure

2.11.1 Entropy Method

The entropy method measures the uncertainty in information, formulated using probability theory: a broad distribution represents more uncertainty than a sharply peaked one (Deng et al. 2000). To calculate weights by the entropy method, the information matrix is first normalized and then the following equations are used.

$$ p_{ij} = \frac{{x_{ij} }}{{\sum\nolimits_{i = 1}^{m} {x_{ij} } }}\quad i = 1, \ldots ,m;\;j = 1, \ldots ,n $$
$$ E_{j} = - \left( \sum\limits_{i = 1}^{m} p_{ij} \ln p_{ij} \right) \Big/ \ln m \quad j = 1, \ldots ,n $$
$$ w_{j} = \frac{1 - E_{j} }{\sum\nolimits_{k = 1}^{n} {(1 - E_{k} )} }\quad j = 1, \ldots ,n $$

where

  • $x_{ij}$ = original measured data

  • $E_j$ = entropy of criterion j

  • $w_j$ = entropy weight of criterion j
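A minimal sketch of these equations in Python; the 3 × 3 decision matrix is invented and is assumed to contain positive values.

```python
import math

def entropy_weights(X):
    """Entropy weights for a decision matrix X (m alternatives x n criteria,
    positive entries), following the three equations above."""
    m, n = len(X), len(X[0])
    divergence = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [x / s for x in col]                                         # p_ij
        E = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)  # E_j
        divergence.append(1 - E)
    total = sum(divergence)
    return [d / total for d in divergence]                               # w_j

X = [[7.0, 0.2, 30.0],   # invented 3 x 3 decision matrix
     [6.5, 0.8, 25.0],
     [7.2, 0.5, 40.0]]
print(entropy_weights(X))  # more dispersed criteria receive larger weights
```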

2.11.2 CRITIC Weighting Method

In addition to the entropy method, any other method of measuring divergence in performance ratings can be used to determine criteria weights. Diakoulaki et al. (1995) proposed the CRITIC (Criteria Importance Through Intercriteria Correlation) method, which uses correlation analysis to detect contrasts between criteria. First, the vector $r_j$ of the normalized matrix is generated, where $r_j$ denotes the scores of all n alternatives on criterion j.

Each vector $r_j$ is characterized by its standard deviation ($\sigma_j$), which quantifies the contrast intensity of the corresponding criterion; the standard deviation of $r_j$ is thus a measure of the value of that criterion in the decision-making process. Next, a symmetric matrix is constructed, with dimensions m × m and generic element $l_{jk}$, the linear correlation coefficient between the vectors $r_j$ and $r_k$. The more discordant the scores of the alternatives on criteria j and k, the lower the value of $l_{jk}$. In this sense, Eq. (2.1) represents a measure of the conflict created by criterion j with respect to the decision situation defined by the rest of the criteria:

$$ \sum\limits_{k = 1}^{m} {(1 - l_{jk} )} $$
(2.1)

The amount of information $C_j$ conveyed by the jth criterion can be determined by composing the measures that quantify the two notions above (contrast intensity and conflict) through the multiplicative aggregation formula (Eq. 2.2). The higher the value of $C_j$, the larger the amount of information transmitted by the corresponding criterion and the higher its relative importance for the decision-making process. Objective weights are derived by normalizing these values to unity (Eq. 2.3).

$$ C_{j} = \sigma_{j} \sum\limits_{k = 1}^{m} {(1 - l_{kj} )} $$
(2.2)
$$ w_{j} = C_{j} \left[ {\sum\limits_{k = 1}^{m} {C_{k} } } \right]^{ - 1} $$
(2.3)
  • $r_j$ = scores of all alternatives on criterion j

  • $C_j$ = amount of information conveyed by criterion j

  • $w_j$ = weight of criterion j
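A minimal sketch of Eqs. (2.1)–(2.3); the normalized matrix R is invented, and each column is assumed to have a nonzero standard deviation so that the correlation coefficients are defined.

```python
import math

def critic_weights(R):
    """CRITIC weights from a normalized decision matrix R (rows = alternatives,
    columns = criteria on a common 0-1 scale), following Eqs. (2.1)-(2.3)."""
    m, n = len(R), len(R[0])
    cols = [[R[i][j] for i in range(m)] for j in range(n)]
    means = [sum(c) / m for c in cols]
    sds = [math.sqrt(sum((x - mu) ** 2 for x in c) / m)        # sigma_j
           for c, mu in zip(cols, means)]

    def corr(j, k):
        """Linear correlation coefficient l_jk between columns j and k."""
        cov = sum((x - means[j]) * (y - means[k])
                  for x, y in zip(cols[j], cols[k])) / m
        return cov / (sds[j] * sds[k])

    C = [sds[j] * sum(1 - corr(j, k) for k in range(n))        # Eqs. (2.1)-(2.2)
         for j in range(n)]
    total = sum(C)
    return [c / total for c in C]                              # Eq. (2.3)

R = [[0.0, 1.0, 0.3],    # invented normalized matrix
     [0.5, 0.2, 1.0],
     [1.0, 0.0, 0.6]]
print(critic_weights(R))
```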

2.11.3 Mean Weight (MW)

In the mean weight method, the weights are derived objectively using the equation $w_j = 1/n$, where n is the number of criteria. This is based on the assumption that all criteria are of equal importance. Mean weights are used in MCDM when there is no information from the decision maker or the available information is not sufficient to reach a decision.

2.11.4 Standard Deviation Method

The standard deviation (SD) method is similar to the entropy method: it assigns a small weight to a criterion that has similar values across alternatives. The SD method determines the weights of criteria in terms of their standard deviations through the following equations (Jahan et al. 2012).

$$ w_{j} = \frac{\sigma_{j} }{\sum\nolimits_{k = 1}^{n} {\sigma_{k} } }\quad j = 1, \ldots ,n $$
$$ \sigma_{j} = \sqrt {\frac{\sum\nolimits_{i = 1}^{m} {(x_{ij} - \bar{x}_{j} )^{2} } }{m}} \quad j = 1, \ldots ,n $$

where

  • $w_j$ = weight of criterion j

  • $\sigma_j$ = standard deviation of criterion j
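A minimal sketch of the SD equations above with an invented matrix; note how a criterion whose values do not vary across alternatives receives zero weight.

```python
import math

def sd_weights(X):
    """SD weights: sigma_j normalized by the sum of all standard deviations."""
    m, n = len(X), len(X[0])
    sds = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        mean = sum(col) / m
        sds.append(math.sqrt(sum((x - mean) ** 2 for x in col) / m))  # sigma_j
    total = sum(sds)
    return [s / total for s in sds]                                   # w_j

X = [[0.2, 0.90, 0.5],   # invented matrix; the third criterion is constant
     [0.8, 0.85, 0.5],
     [0.5, 0.95, 0.5]]
print(sd_weights(X))  # the constant third criterion receives weight 0
```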

2.11.5 Statistical Variance Procedure

The statistical variance procedure is an objective weighting method in which the statistical variance of the information is calculated first by

$$ V_{j} = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left( x_{ij}^{*} - (x_{j}^{*} )_{mean} \right)^{2} } $$
  • $V_j$ = statistical variance of criterion j

  • $x_{ij}^{*}$ = normalized value of alternative i on criterion j; $(x_{j}^{*})_{mean}$ = mean of the normalized values for criterion j

The objective weight is then obtained by the following equation:

$$ w_{j}^{o} = \frac{V_{j} }{\sum\nolimits_{k = 1}^{n} {V_{k} } } $$
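A minimal sketch of the two equations, with an invented normalized matrix; apart from not taking the square root of the dispersion, the computation parallels the SD method.

```python
# x_star is an invented normalized matrix; V_j is the variance of column j and
# the weights are the variances normalized to sum to one.
x_star = [[0.2, 0.90],
          [0.8, 0.85],
          [0.5, 0.95]]
rows = len(x_star)
V = []
for j in range(len(x_star[0])):
    col = [row[j] for row in x_star]
    mean = sum(col) / rows
    V.append(sum((x - mean) ** 2 for x in col) / rows)  # V_j
weights = [v / sum(V) for v in V]                       # w_j^o
print(weights)
```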

2.11.6 Integrated or Combined Weighting Methods

In Integrated or Combined Weighting methods the weights are derived from both subjective and objective information on criteria weights.

Weighting is a critical task in decision making because it involves controversy and uncertainty (Chen et al. 2009), and it influences the final outcome, the ranking of alternatives. Several methods have been developed for this purpose and are reported in the literature: swing weights, ranking, rating, pairwise comparison, trade-off analysis, qualitative translation, etc. Reviews of these methods are provided by Beinat (1997), Malczewski (1999) and Sharifi et al. (2004). Crucial factors for selecting the most appropriate method for assigning weights to criteria for a certain decision problem are the number of criteria and the degree of uniqueness among them. Two factors were taken into account when choosing methods for the Evaluation module. First, the number of criteria involved in the evaluation process carried out in this model is quite small, i.e. five; this falls within the so-called ‘seven plus or minus two’ range considered the maximum number of entities that can be simultaneously processed by the human brain (Miller 1956). Second, given that certain evaluation criteria are explicit in terms of their context and meaning, it was judged that two of the most straightforward and popular methods should be provided by the Evaluation module: direct ranking and qualitative rating. The direct ranking method allows the user to enter weights when they are known a priori or developed using another method, while the qualitative rating method developed here offers users a way of developing the weights within the process. These two methods are explained in more detail below.

2.11.7 Direct Ranking

Direct ranking (or direct estimation) is the most straightforward method for assigning values to criteria (summing to 1) when the number of criteria is small and manageable. However, even for a small number of criteria, it is not straightforward when weighting values have two or more decimals. For instance, it is sometimes not easy to justify why one criterion has a weight of 0.2 and another 0.18; it is even more difficult to differentiate criteria by assigning weights of 0.125 and 0.120. Thus, weighting with this method can be reliable and accurate when values have one decimal (e.g. 0.1, 0.2) or two decimals with the last digit being 5 (e.g. 0.15, 0.25). Because these preconditions cannot always be fulfilled, the planner is provided with a modified rating method called ‘qualitative rating’, proposed in this research and explained in the next section.

2.11.8 Qualitative Rating Method

Ranking methods involve ordering the criteria from most important to least important or vice versa. Several procedures (e.g. rank sum, rank reciprocal and the rank exponent method) are then used to estimate numerical weights based on that rank order (Malczewski 1999). Although these methods are simple, they have a major disadvantage: they do not allow two or more criteria to be ranked with equal importance, which is clearly unreasonable in practice.

Kamal (2012) has presented a comprehensive review of recent literature on applications of various weighting methods in different fields of research. Tables 2.7, 2.8 and 2.9 show applications of subjective, objective, and combined weighting methods, along with the countries in which the studies were carried out.

Table 2.7 Application of subjective weighting methods in past studies (after Kamal 2012)
Table 2.8 Application of objective weighting methods in past studies (after Kamal 2012)
Table 2.9 Subjective and objective weighting methods used in past studies (after Kamal 2012)

2.12 Objective Weighting Methods Used in Past Studies

See Table 2.8.

2.13 Subjective and Objective Weighting Methods Used in Past Studies

See Table 2.9.

2.14 Selection of Weighting Method

Hobbs (1980) states that different weighting methods produce different sets of criteria weights and that the final results of multi-criteria decision-making methods are sensitive to the criteria weights. It is therefore paramount to emphasize the selection of the weighting method when solving a multi-criteria decision problem.

The selection of a particular method is highly dependent on the particular decision problem (Hobbs 1980; Zardari et al. 2010).

Hajkowicz et al. (2000) applied five weighting methods to weight six economic, environmental and social criteria. Comparisons were made between criteria weights obtained from each method and decision makers evaluated each method for its ease of use and how much it helped clarify the decision problem.

The findings of their study indicate that, in general, decision makers will assign similar weight values to the criteria with the different methods. However, even minor variations in weight vectors have the potential to cause significant changes in the subsequent ranking of alternatives. This indicates that it is undesirable to rely on any single weighting technique in an MCDM approach, as there may be bias associated with that particular technique (Fig. 2.9).

Fig. 2.9
figure 9

Number of decision makers identifying weighting methods as the ‘best’ or ‘worst’. The ‘best’ and ‘worst’ categories do not total to 55 due to non-responses from some decision makers (after Hajkowicz et al. 2000)

There are several methods for transforming experts’ judgments into relative weights. Eckenrode (1965) found no significant differences among the techniques he investigated. Since no method is clearly superior, the preferred method in any application depends on the intended use of the scale (ordinal, interval, or ratio level of measurement), the time required to use it, the subjects’ mental attitudes, their understanding of the overall problem, their perception of the instructions for weighting the criteria, and their understanding of the criteria definitions.

Methods for choosing weights have been surveyed before (Eckenrode 1965; Huber 1974) but not with an eye to the theoretical requirements imposed by weighting summation. A number of techniques are presented below, as are their applications in power plant siting. In general, methods that ask decision makers to choose weights directly do not guarantee that the weights are theoretically valid. Methods that derive weights by ensuring that the decision rule is consistent with trade-offs expressed by decision makers are more likely to yield valid weights. Those methods, however, generally are more difficult to apply.

In the absence of test/retest data, a rough indication of the reliability of the weights may be obtained by comparing the weights chosen by the two persons most likely to have similar values and knowledge. The two Maryland participants have worked together on siting studies and have similar perspectives on the siting problem. The correlation between their rating weights was 0.783, the highest between any pair of rating weight sets; their indifference trade-off weight sets had a correlation of 0.624. These two correlations can be taken as a measure of method reliability. Both are much higher than the mean “between method” correlations. Hence, the choice of weighting method appears to result in greater differences between weight sets than can be attributed to method unreliability alone.

These large differences in candidate area sets contradict assertions made by some researchers (Dawes and Corrigan 1974; Wainer 1976) to the effect that weighting is unimportant as similar rank orders will often result from very different weight sets. Choice of weights is important here because siting is concerned only with the best few alternatives, not the entire rank order. Correlations of suitability are high, but the candidate areas differ.

Nijkamp et al. (1990) suggest various methods to estimate criteria weighting. These are broadly divided into two main approaches: direct and indirect estimation.

Direct estimation of criterion weights refers to expressing the relative importance of the objectives or criteria directly, through questionnaire surveys. Respondents are asked questions through which their priority statements are conveyed in numerical terms. Respondents can be members of the design team or representatives of the client, local council and public (Seabrooke et al. 1997). This also provides an opportunity to meet the increasing demand for public participation in the decision-making process (Joubert et al. 1997).

Direct estimation techniques come in various forms:

  • The trade-off method, where the decision-maker is asked directly to place weights on all pairwise combinations of one criterion with respect to every other criterion.

  • The rating method, where the decision-maker is asked to distribute a given number of points among a set of criteria to reflect their levels of importance.

  • The ranking method, where the decision-maker is asked to rank a set of criteria in order of their importance.

  • The seven-point (or five-point) scale, which helps to transform verbal statements into numerical values.

  • The paired comparison, which, similarly to the seven-point scale, obtains the relative importance of criteria by comparing all pairs of criteria on a non-points scale.

However, all these methods run into trouble when the number of objectives becomes large (van Pelt 1993; Hobbs and Meier 2000). When this happens, objectives may have to be structured in a hierarchical model to separate objectives into different levels (Saaty 1994).

The indirect approach is based on investigating the actual past behavior of respondents. Weights are obtained by estimating previous behavior derived from rankings of alternatives, or through an interactive procedure of obtaining weights by questioning the decision-maker and other involved parties. Hypothetical weights may also be used in some projects: the analyst prepares weights to represent the opinions of specific groups in the community, and policy-makers may then comment accordingly. Each approach has restrictions and limitations in terms of accuracy and cost, and their usefulness strongly depends on the time required and the attitude of respondents (Voogd 1983; Nijkamp et al. 1990; Hobbs and Meier 2000).

Hobbs (1980) used two weighting methods, one deriving weights from trade-offs made by decision makers and the other asking decision makers to choose weights on a scale of 0–10. He found that the power plant locations picked by the two weighting methods differed noticeably. This shows that different weighting methods produce different sets of criteria weights and that the final results of multi-criteria decision-making models are sensitive to the criteria weights; it is therefore paramount to emphasize the selection of the weighting method when solving a multi-criteria decision problem.

Weighting of criteria is subjective and has a direct influence on the results of prioritizing strategy options. It is therefore critical that criteria weights are determined rationally and truthfully (Hobbs 1980).

2.15 Weighting Methods Supported by Software

Kamal (2012) presented a detailed survey of the literature and observed that a large number of software packages have been developed for multi-criteria decision-making methods, but no separate software has been developed for weighting methods alone. He noted, however, that different MCDM software packages support various weighting methods: some support only one weighting method and some support more than one. A review of software availability for the various weighting methods is given below (Kamal 2012).

2.15.1 Pairwise Comparison

Thirty-two MCDM software packages were found that support the pairwise comparison weighting method. These are:

  1. Super Decisions

  2. V.I.S.A

  3. 1000 Mind

  4. Decision Lens

  5. Make it Rational

  6. Expert Choice

  7. D-Sight

  8. Decision Plus

  9. DEMATEL

  10. Criterium Decision Plus SELECT PRO SOFTWARE LLC

  11. DEFINITE

  12. AHP (Analytic Hierarchy Process) Calculation software by CGI

  13. USAGE: Calculation (Software): Weights of AHP

  14. MULINO-DSS software

  15. HIPRE 3

  16. Web-HIPRE

  17. Joint Gain

  18. Logical Decisions Portfolio v6.2

  19. Logical Decisions for Windows v6.2

  20. Mind Decider

  21. Select Best Voll version

  22. Open Decision Maker

  23. Right Choice DSS

  24. Select Pro

  25. AHP project

  26. AHP with Qualica planning Suit

  27. AHP Decision

  28. Choice Results

  29. M-AHP

  30. Vanguard System

  31. MVLSoft Very Good Choice

  32. ANOVA

2.15.2 Point Allocation Method

Two software packages that support the point allocation weighting method were found in the literature. These are:

  1. 1000 Mind

  2. QUALIFLEX

2.15.3 Ranking Method

Four software packages were found to support the ranking method. These are:

  1. Logical Decision

  2. QUALIFLEX

  3. Select Pro

  4. RPM decision

2.15.4 Rating Method

Three software packages were found to support the rating weighting method. These are:

  1. 1000 Mind

  2. Criterium DecisionPlus

  3. MULINO-DSS software

2.15.5 SMART Weighting Method

From a detailed review of the literature and internet browsing, Kamal (2012) found seven software packages that support the SMART weighting method. These include:

  1. 1000 Mind

  2. Criterium® Decision Plus

  3. HIPRE 3

  4. Win Pre

  5. Web-HIPRE

  6. Equity3

  7. MACBETH

2.15.6 SWING Weighting Method

A large number of commercial software packages reported in the literature were found to support the SWING weighting method. These include:

  1. V.I.S.A

  2. 1000 Mind

  3. Logical Decision

  4. Win Pre

  5. Web-HIPRE

  6. Tree Top

  7. RICH Decisions

  8. Promax 2010 Standard

  9. QMms (Quantitative Methods for Management Science)

  10. Logical Decisions Portfolio v6.2

  11. Logical Decisions for Windows v6.2

  12. MindDecider

  13. IDS (Intelligent Decision System)

  14. Hiview3

  15. Equity3

  16. Analytica 4.2

  17. RISK

2.15.7 Trade-off Weighting Method

Only two software packages supporting the trade-off weighting method were found in the literature:

  1. MULINO-DSS software

  2. Criterium Decision Plus

2.15.8 Delphi Method

For the Delphi weighting method, only one software package was found in the literature:

  1. Delphi Decision Aid

2.15.9 Revised SIMOS Procedure

The revised SIMOS weighting method is supported by only one software package:

  1. SRF software

Kamal (2012) summarized the available software packages for each weighting method. Table 2.10 shows how many packages were available for each of the popular weighting methods.

Table 2.10 Distribution of number of softwares against each weighting method

Based on the number of software packages available, Kamal (2012) ranked the weighting methods by popularity as follows:

  • Pairwise Comparison > SWING > SMART > Ranking Method > Rating Method > Trade-Off = Point Allocation > Delphi Method = Revised SIMOS Procedure (where ‘>’ means ‘supported by more software packages than’).

2.16 Advantages and Disadvantages of Weighting Methods

Kamal (2012) summarizes the advantages and disadvantages of some popular weighting methods. These are presented below for each weighting method.

2.16.1 Pairwise Comparison

Advantages

  1. Pairwise comparison is useful when the decision maker is unable to rank the alternatives holistically and directly with respect to a criterion.

  2. The method is easy to calculate. The results are clear, and especially distinctive for problems involving qualitative factors used for decision making or evaluation.

  3. The pairwise comparison method is often used as an intermediate step in multi-criteria decision making, when the decision maker (DM) is unable to directly assign criteria weights or scores of alternatives.

  4. Pairwise comparison can be effective because it forces the decision makers to give thorough consideration to all elements of a decision problem (Hajkowicz et al. 2000).

  5. Pairwise comparison is commonly used to estimate preference values of finite alternatives with respect to a given criterion.

  6. In the pairwise comparison prioritization process it is also assumed that DMs are able to express the strength of their preferences by providing additional cardinal information.

  7. The methodology has been found to be the most transparent to users and the most scientifically sound for assigning weights representing the relative importance of criteria.

Disadvantages

  1. 1.

    Pairwise comparisons suffer from two major shortcomings. First, many do not allow participants to explicitly convey a sense of distance in their choices, since participants are usually asked to simply select an attribute from a pair. Second, the complexity of comparing items in pairs can be quite high for large attribute sets, usually resulting in conflicting choices and lack of transitivity.

  2. 2.

    There is inconsistency at DM’s idea in pairwise comparison and it increases either by higher number of attributes or judging the important degree.

  3. 3.

    The main difficulty is to reconcile the inevitable inconsistency of the pairwise comparison matrix elicited from the decision makers in real-world applications.

  4. 4.

    The Pairwise comparison is used to derive weights for Analytical hierarchy process. As the size (n) of the hierarchy increases, the number of Pairwise comparisons increases rapidly. The completion of n(n − 1)/2 comparisons (quite high in realistic problems) can become a very difficult task for the decision maker when applied to all levels of the hierarchy.

  5. 5.

    The pairwise comparison is seems to be insufficient and imprecise to capture the right judgments of decision-maker(s) with vagueness and uncertainty of data.

  6. 6.

    In many real world applications, human pairwise judgment is highly ambiguous and uncertain.

  7. 7.

    Pairwise comparison is an important step in AHP to be completed by the experts. However, AHP is widely criticized for such tedious process especially when a large number of criteria or alternatives are involved. Someone may doubt the expert judgments because people are very likely to feel tired and lose patience during this process and therefore, they may not make their judgments conscientiously. They may change their minds frequently in order to ascertain the acceptance of the consistency ratio (CR) value as well as shorten the whole process. To avoid such drawback, only reasonable and manageable amounts of criteria are contained in the model and the author of this study has acted as a facilitator to take over the judgment process (Lee and Chan 2008).
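As a concrete illustration of the mechanics referred to above, the following is a minimal sketch (not any specific author’s implementation) of deriving weights from a pairwise comparison matrix with the principal-eigenvector method and checking Saaty’s consistency ratio (CR). The 3 × 3 matrix is hypothetical.

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(A):
    """Weights and consistency ratio for a reciprocal comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalize weights to sum to 1
    CI = (eigvals[k].real - n) / (n - 1)   # consistency index
    CR = CI / RI[n] if RI[n] else 0.0      # consistency ratio
    return w, CR

# Hypothetical judgments: criterion 1 is 3x as important as 2, 5x as 3, etc.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(w, cr)   # CR below ~0.10 is conventionally considered acceptable
```

Note how the elicitation burden in disadvantage 4 shows up directly: a 3-criterion matrix needs only 3 judgments, but 10 criteria already require 45.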

2.16.2 Simple Multi-attribute Rating Technique (SMART)

Advantages

  1. The Simple Multi-attribute Rating Technique (SMART) can be used to quickly obtain a total weighted score (Huang 2011); a minimal sketch is given at the end of this subsection.

  2. SMART is one of the most applicable MCDM methods; since the majority of panelists were not familiar with MCDM methods, the method had to be simple (Yeh and Chang 2009).

  3. The SMART method is easy to modify when the number of impact categories increases (Yeh and Chang 2009).

  4. The SMART approach utilizes ratio scales to assess panelists’ preferences (Yeh and Chang 2009).

  5. SMART is a useful technique since it is simple, straightforward, and requires less time in decision making, which is quite important for those involved in the decision-making process (Gu et al. 2012).

  6. In SMART, changing the number of alternatives will not change the decision scores of the original alternatives, which is useful when new alternatives are added (Chen and Hou 2004; Panagopoulos et al. 2012).

  7. Using SMART for performance measures can be a better alternative than other methods (Gu et al. 2012).

  8. SMART is popular because its analysis incorporates a wide variety of quantitative and qualitative criteria (Chen and Hou 2004).

  9. SMART has been successfully applied in MCDM problems, although the approach is ineffective when dealing with the inherent imprecision of linguistic valuation in decision making (Gu et al. 2012; Chen and Hou 2004).

  10. An advantage of the SMART model is that it is independent of the alternatives (Panagopoulos et al. 2012; Afshar et al. 2011).

  11. The nontechnical participants especially felt that SMART was easier to understand than the trade-off method (Dai et al. 2012).

Disadvantages

  1. It has been stressed that comparing the importance of the attributes is meaningless if it does not also reflect the consequence ranges of the attributes (Von Winterfeldt and Edwards 1986).

  2. One limitation of this technique is that it ignores the interrelationships between parameters (Demirci et al. 2009).

  3. The ratings of alternatives are not relative; changing the number of alternatives considered will not in itself change the decision scores of the original alternatives (Valiris et al. 2005).

  4. With a large number of attributes, the SMARTS method can be too difficult to implement and defend (Benzerra et al. 2012).
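The “total weighted score” in advantage 1 amounts to a few lines of arithmetic. Below is a minimal, hypothetical sketch: criterion weights come from direct importance ratings (normalized to sum to one), and each alternative receives a 0–100 rating per criterion. All names and numbers are illustrative only.

```python
# Direct importance ratings for the criteria (hypothetical values).
importance = {"cost": 60, "reliability": 90, "impact": 30}
total = sum(importance.values())
weights = {c: v / total for c, v in importance.items()}

# 0-100 performance ratings of each alternative on each criterion.
ratings = {
    "option A": {"cost": 80, "reliability": 50, "impact": 70},
    "option B": {"cost": 40, "reliability": 90, "impact": 60},
}

# Total weighted score per alternative.
scores = {a: sum(weights[c] * r[c] for c in weights) for a, r in ratings.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```

Because each alternative is rated on an absolute scale rather than relative to the others, adding or removing alternatives leaves the existing scores untouched, which is exactly the independence property noted in advantages 6 and 10.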

2.16.3 Point Allocation Method

Advantages

  1. The key advantage of fixed point scoring/point allocation is that it forces decision makers to make trade-offs in a decision problem (Deng et al. 2000).

  2. With fixed point scoring/point allocation (PA), it is only possible to ascribe higher importance to one criterion by lowering the importance of another. Fixed point scoring is the most direct means of obtaining weighting information from the decision maker (Deng et al. 2000); a minimal sketch is given at the end of this subsection.

  3. It requires the least number of operations to transform the information supplied by the decision maker into a weight vector satisfying the requirements mentioned earlier (Deng et al. 2000).

  4. A simple PA system or similar technique probably works well with a small number of attributes (Mustajoki et al. 2004).

  5. The weights elicited by the point allocation method were more reliable than those elicited by direct rating in a test–retest situation (Von Winterfeldt and Edwards 1986).

Disadvantages

  1. The point allocation method/fixed point scoring is the more difficult task: with direct rating, it is easier to take 100 as the weight of the most important attribute and then allocate the weights of successive attributes relative to this 100 starting point, and the decision maker need not worry about the constraint that the total must equal some specified value. Since the sets of cognitive operations required by the two methods differ, different decision weights may well result (Demirci et al. 2009).

  2. Although this method of determining weights and the direct rating method would seem to be minor variants of each other, in practice they produce different profiles of decision weights (Demirci et al. 2009).

  3. It is a difficult task for the decision maker to ascribe higher importance to one criterion by lowering the importance of another, as this requires careful consideration of the relative importance of each criterion (Deng et al. 2000).
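The contrast drawn above between the two elicitation modes is easy to make concrete. The following hypothetical sketch shows both: point allocation distributes a fixed budget of 100 points, while direct rating anchors the most important criterion at 100 and normalizes afterwards. Criteria names and numbers are illustrative only.

```python
# Point allocation: a fixed budget forces explicit trade-offs.
allocation = {"water quality": 40, "cost": 35, "ecology": 25}
assert sum(allocation.values()) == 100, "the budget must be fully allocated"
pa_weights = {c: p / 100 for c, p in allocation.items()}

# Direct rating: no fixed-sum constraint; normalize after rating.
rating = {"water quality": 100, "cost": 80, "ecology": 50}
s = sum(rating.values())
dr_weights = {c: v / s for c, v in rating.items()}

print(pa_weights)   # sums to 1 by construction
print(dr_weights)   # sums to 1 after normalization, but the profile differs
```

That the two profiles differ even for the same decision maker is precisely the point made in disadvantages 1 and 2.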

2.16.4 Revised SIMOS’ Procedure

Advantages

  1. The revised SIMOS’ procedure is simple and easy to use, requiring little computational effort, which increases its applicability (Fontana et al. 2011).

  2. It is shown to be efficient for evaluating alternatives on qualitative attributes when applying an additive method (Fontana et al. 2011).

  3. The revised SIMOS’ procedure minimizes the rounding-off errors that arise when the normalized criteria weights are calculated (Fontana et al. 2011).

  4. The ‘revised SIMOS’ procedure’, the technique used to collect information on weights, proved to be well accepted by all participants (Özcan et al. 2011).

  5. SIMOS’ technique allows any DM (not necessarily familiar with multi-criteria decision aiding) to think about and express the way in which he wishes to hierarchize the different criteria of a family “F” in a given context. The procedure also aims to communicate to the analyst the information he needs in order to attribute a numerical value to the weights of each criterion of “F” (the weight calculation is sketched at the end of this subsection).

  6. The procedure has been applied in different real-life contexts; it proved to be very well accepted by DMs, and the information obtained by the procedure is very significant from the DM’s preference point of view.

  7. The software developed allows not only easy collection of different data sets but also quick processing of the information thus obtained.

  8. In multi-criteria decision aiding contexts, the new procedure and the software can also be used to adapt or convert the scale of a given criterion into an interval scale or a ratio scale.

Disadvantages

  1. In the revised SIMOS’ procedure, interval-scale evaluation is required (Fontana et al. 2011).

  2. In cases where the DM’s spontaneous response to the question ‘How many times more important is the most important PM (or group of PMs), relative to the least important PM (or group of PMs)?’ differs substantially from the total number of cards used (including blank cards), the calculated normalized weights of the PMs show a distortion of the original PM rank order expressed by the DM (Özcan et al. 2011).

  3. It is suggested to identify whether the DM understands this scale (with blank cards inserted) as a ranking order or as a ratio scale (Özcan et al. 2011).

  4. It can occasionally process criteria that have the same importance in an uncontrolled manner (Chen and Zhang 2010).

  5. As in the AHP method, no reference is made to criteria scales, and therefore certain combinations of weights may be excluded (Chen and Zhang 2010).
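The card-based weight calculation referred to in advantage 5 can be sketched in a few lines. The version below follows the revised procedure of Figueira and Roy (2002): criteria are ranked from least to most important (ties grouped together), blank cards widen the gaps between successive ranks, and z is the DM’s stated ratio between the most and least important criteria. The final rounding-off step of the revised procedure is omitted for brevity, and all inputs are hypothetical.

```python
ranks = [["ecology"], ["cost", "risk"], ["water quality"]]  # least -> most important
blanks = [1, 0]   # blank cards inserted between successive ranks
z = 6.0           # "the most important criterion is 6x the least important"

e = sum(b + 1 for b in blanks)   # total number of unit gaps
u = (z - 1) / e                  # value of one unit gap

k = [1.0]                        # non-normalized weight of the lowest rank
for b in blanks:
    k.append(k[-1] + u * (b + 1))

total = sum(k[i] * len(group) for i, group in enumerate(ranks))
weights = {c: k[i] / total for i, group in enumerate(ranks) for c in group}
print(weights)   # by construction, max(weight)/min(weight) equals z
```

With these inputs the non-normalized weights are 1.0, 4.33, and 6.0, so the stated ratio z = 6 between the extreme criteria is reproduced exactly.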

2.16.5 Trade-off Weighting Method

Advantages

  1. Its main advantage is its strong theoretical foundation (Taylor and Ryder 2003).

  2. The trade-off method does not require a person to assign weights to, or state the relative importance of, the attributes or criteria directly. Instead, it asks one to state how much compromise one is willing to make between two attributes or criteria when an ideal combination of the two is not attainable (a minimal sketch is given at the end of this subsection).

  3. Some weighting methods derive weights from the trade-offs decision makers are willing to make. Such weights are likely to be theoretically valid (Hajkowicz et al. 2000).

  4. A common feature of the AHP and SMART methods is that they rely on ratio comparisons of the “relative importance” of attributes, although the resulting weights are not explicitly linked to unit changes in the component value functions. To avoid this shortcoming, several authors have recommended the use of the trade-off method (Delgado-Galván et al. 2010).

Disadvantages

  1. In practice, the trade-off method is difficult and time-consuming to use compared with the other methods (Fatthi and Fayyaz 2010).

  2. The trade-off method was considered more difficult, and some participants had real problems understanding the underlying logic behind it (Chang et al. 2010).

  3. The elicitation of these exact weights imposes a level of precision that is often absent from people’s minds (Morais and Almeida 2010).

  4. The DM may find it difficult to give precise responses to the trade-off questions (Delgado-Galván et al. 2010).
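Under an additive value model, the indifference answers described in advantage 2 translate into weights directly. The sketch below assumes linear single-attribute value functions and a hypothetical elicitation in which, for each criterion, the DM names the level of a reference criterion that makes two constructed alternatives indifferent; every name and number is illustrative, not a prescribed protocol.

```python
def linear_value(x, worst, best):
    """Normalized linear value function on [worst, best]."""
    return (x - worst) / (best - worst)

# Reference criterion: supply reliability, ranging from 0% to 100%.
ref_range = (0.0, 100.0)

# Elicited indifference answers: (crit at best, ref at worst) is judged
# indifferent to (crit at worst, ref at the stated level). Then
# w_crit = w_ref * v_ref(level).
indifference = {"cost": 70.0, "ecology": 40.0}

raw = {"reliability": 1.0}   # reference weight before normalization
for crit, level in indifference.items():
    raw[crit] = linear_value(level, *ref_range)

total = sum(raw.values())
weights = {c: v / total for c, v in raw.items()}
print(weights)   # reliability ~0.48, cost ~0.33, ecology ~0.19
```

The precision problem in disadvantages 3 and 4 is visible here: the weights depend directly on the exact indifference levels the DM states, and in practice the value functions must also be assessed rather than assumed linear.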

2.16.6 Delphi Method

Advantages

  1. A key advantage of the approach is that it avoids direct confrontation of the experts (Afshar et al. 2011).

  2. A benefit to theory building derives from asking experts to justify their reasoning (Afshar et al. 2011).

  3. Delphi researchers employ this method primarily in cases where judgmental information is indispensable, and typically use a series of questionnaires interspersed with controlled opinion feedback (Afshar et al. 2011).

  4. The Delphi method, a consensus-building tool, is a promising process for promoting and encouraging involvement from all stakeholders during the evaluation framing process (Dai et al. 2012).

  5. The Delphi method removes geographic challenges and time boundaries, allowing all stakeholders to participate (Dai et al. 2012).

  6. The most important advantage of the Delphi method is that it leads to a group decision. Group decisions have many merits, such as the avoidance of the extreme judgments of individual assessors (Chou et al. 2008).

Disadvantages

  1. It has some limitations, including the potential of falling victim to the bandwagon effect: dominant personalities can unduly influence the face-to-face group (Anagnostopoulos and Petalas 2011).

  2. Critics have noted other limitations of the Delphi methodology: the potential for sloppy execution, crudely designed questionnaires, poor choice of panelists, unreliable result analysis, the limited value of feedback and consensus, and instability of responses across consecutive Delphi rounds.

  3. A further limitation, fatigue, occurs when there are a large number of topics or questions per Delphi topic, or when questions are difficult to understand (Peng and Zhou 2011).

  4. It is costly to conduct (Chou et al. 2008).

  5. A drawback of the Delphi method is that it is very time-consuming and expensive, since more than one round is needed (Chou et al. 2008).

2.16.7 SWING Method

Advantages

  1. The swing weight matrix was very useful for assessing, explaining, and defending weights; the swing weight matrix method provided an efficient and effective means to discuss, assess, brief, and explain the attribute weights (Von Winterfeldt and Edwards 1986).

  2. This method has four advantages over traditional weighting methods. First, it develops an explicit definition of importance. Second, it forces explicit consideration of the variation of measures. Third, it provides a framework for consistent swing weight assessments. Fourth, it provides a simple yet effective framework for presenting and justifying the weighting decisions (Von Winterfeldt and Edwards 1986).

  3. The swing method overcomes many of the problems of constant-sum ratio estimation; it is relatively simple, transparent, and easy to use; and it produces weights that are practically indistinguishable from those of indifference methods (Demirci et al. 2009).

  4. The swing weights technique is more parsimonious than techniques that involve pairwise comparisons, such as AHP, when many (>4) criteria need to be weighted (Valiris et al. 2005).

  5. The swing method uses a reference state in which all attributes are at their worst level and lets the interviewee assign points to states in which one attribute moves to its best level. The weights are then proportional to these points (Hayashi 2000); a minimal sketch is given at the end of this subsection.

  6. It is fairly fast, and interviewees readily give answers (Hayashi 2000).

  7. Another advantage of the swing method is that it does not depend on the shape of the value functions of the sub-objectives. Only the attribute ranges must be known, together with the levels of the best and worst outcomes (in most cases corresponding to the endpoints of the ranges). This makes it possible to elicit weights prior to assessment of the value functions of the sub-objectives, which can reduce the splitting bias (Hayashi 2000).

  8. The subjects of this study found the swing weighting method relatively easy to follow, although most participants indicated that they would have preferred further explanation (Hämäläinen and Alaja 2008).

  9. The SWING method is of intermediate complexity and was found by participants to be relatively easy to use, making it appropriate for use in a questionnaire survey (Hämäläinen and Alaja 2008).

Disadvantages

  1. The swing method carries the risk that people respond without thoroughly considering the consequences of their answers (Hayashi 2000).

  2. The disadvantages are that the technique is based on direct rating, that it does not include consistency checks, and that the extreme outcomes to be compared may not correspond to a realistic alternative, which makes the questions difficult to answer.

  3. In terms of external validity, assessed by comparing the participants’ weights with weights elicited externally from experts, the trade-off method performed better than swing weighting (Hämäläinen and Alaja 2008).
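The point-assignment step described in advantage 5 reduces to a normalization. Below is a minimal, hypothetical sketch: starting from the all-worst reference state, the DM gives 100 points to the most valued single-attribute swing and rates the other swings against it; criteria names and point values are illustrative only.

```python
# Points assigned to swinging each attribute alone from worst to best,
# with 100 given to the most valued swing (hypothetical answers).
swing_points = {"water quality": 100, "cost": 60, "ecology": 25}

total = sum(swing_points.values())
weights = {c: p / total for c, p in swing_points.items()}
print(weights)   # water quality ~0.54, cost ~0.32, ecology ~0.14
```

Because the DM judges swings over the actual attribute ranges, the resulting weights automatically reflect those ranges, which is what distinguishes SWING from direct importance rating.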

2.16.8 Entropy Method

Advantages

  1. The entropy method can compute unbiased relative criteria weights; the entropy approach enables determining the relative weights of criteria (w1, w2, …, wm) in a rather simple and straightforward manner (Srdjevic et al. 2004).

  2. The entropy approach has proved sufficiently reliable for identifying both the contrast intensity and the conflict of criteria and for computing their weights appropriately (Srdjevic et al. 2004).

  3. It indicates whether the available information is adequate and, if not, that additional information should be sought. In this way it brings the model, the modeller, and the decision-maker closer together (Singh 2000).

  4. It permits a quantitative assessment of efficiency and benefit/cost parameters (Singh 2000).

  5. The entropy method for determining weights adequately considers the information in the values provided by all monitoring sections so as to balance the relationships among the numerous evaluated objects. This weakens the adverse effect of abnormal values and makes the evaluation result more accurate and reasonable.

  6. The entropy method for determining weights is a very effective method for evaluating indicators.

  7. The traditional entropy method focuses on the discrimination among data to determine attribute weights: if an attribute discriminates the data more effectively, it is given a higher weight (a minimal sketch is given at the end of this subsection).

  8. The entropy method produces more divergent coefficient values for all the criteria. This can be regarded as favourable to the entropy method, as it can better resolve the inherent conflict between criteria embedded in multi-attribute decision problems (Diakoulaki et al. 1995).

Disadvantages

  1. A possible disadvantage relates to proper problem sizing, i.e., ensuring that the decision matrix contains a sufficiently large set of alternatives (Srdjevic et al. 2004).

  2. Considering weights based only on entropy values, without expert judgment, does not seem sufficient.

  3. The weights of attributes determined by the correlation coefficient and standard deviation (CCSD) method are more comprehensive and convincing than entropy weights. The former considers not only the amount of information each attribute contains but also the impact of each attribute on decision making, whereas the latter takes no account of the mutual relationships among attributes (Mustajoki et al. 2004).

  4. The entropy technique gives no scope to the designer’s preferences.

  5. General-purpose MADM techniques, such as entropy, could not effectively model public-sector university ranking decision problems; such decision problems require a new methodology to be developed.
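The discrimination idea in advantage 7 has a compact formulation: column proportions p_ij give an entropy e_j per criterion, and the weight grows with the degree of diversification d_j = 1 − e_j. The following is a minimal sketch over a hypothetical decision matrix (rows are alternatives, columns are benefit-type criteria).

```python
import numpy as np

X = np.array([[7.0, 430.0, 0.61],
              [5.0, 410.0, 0.25],
              [9.0, 395.0, 0.47]])

m = X.shape[0]                        # number of alternatives
P = X / X.sum(axis=0)                 # column-wise proportions p_ij

# Entropy per criterion, normalized by ln(m); 0 * ln(0) is taken as 0.
E = -np.where(P > 0, P * np.log(P), 0.0).sum(axis=0) / np.log(m)

d = 1.0 - E                           # degree of diversification
w = d / d.sum()                       # entropy weights
print(w)   # criteria whose values spread more get larger weights
```

Disadvantage 1 is visible in the formula: with too few alternatives the proportions p_ij, and hence the entropies, are poorly determined, so the resulting weights are unstable.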

2.16.9 Rank Order Centroid (ROC) Method

Advantages

  1. The key advantage of the ROC methodology is its simplicity in surveying.

  2. ROC is simple and easy to follow (Chang et al. 2010).

  3. ROC weights represent excellent trade-offs between ease of assessment and efficacy in selecting the best or a near-best alternative (Morais and Almeida 2010).

  4. ROC weights possess other attractive properties. The best ROC alternative has the highest average value over the entire weight simplex, and ROC is the expected value of the weight distribution consistent with the ranking information. Of greater usefulness is the fact that ROC is a specific example of centroid weights (CW), which generalize to any convex weight set specified by linear equalities in the unspecified weights (Morais and Almeida 2010).

  5. This method is a simple way of assigning weights to a number of items ranked according to their importance; decision makers can usually rank items much more easily than they can weight them.

  6. The four methods (RS, RR, ROC, and EW) have been compared in a simulation study, which reported that the ROC weights appear to perform better than the other approximate weights. It has also been shown that the ROC weights are given by the arithmetic mean of the extreme points of the ranked weights (Morais and Almeida 2010); all four schemes are sketched at the end of this subsection.

  7. A common conclusion of these studies is that ROC weights have an appealing theoretical rationale and appear to perform better than the other rank-based schemes in terms of choice accuracy.

Disadvantages

  1. The weights given by ROC are highly dispersed (Chang et al. 2010).
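For criteria ranked 1 (most important) through n, the rank-based schemes compared in advantage 6 are all closed-form. A minimal sketch of the four is given below; exact fractions are used so the formulas are easy to verify by hand.

```python
from fractions import Fraction

def roc(n):
    """Rank order centroid: w_i = (1/n) * sum_{k=i}^{n} 1/k."""
    return [sum(Fraction(1, k) for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

def rank_sum(n):
    """RS: w_i proportional to n - i + 1."""
    return [Fraction(n - i + 1, n * (n + 1) // 2) for i in range(1, n + 1)]

def rank_reciprocal(n):
    """RR: w_i proportional to 1/i."""
    s = sum(Fraction(1, k) for k in range(1, n + 1))
    return [Fraction(1, i) / s for i in range(1, n + 1)]

def equal_weights(n):
    """EW: w_i = 1/n."""
    return [Fraction(1, n)] * n

n = 4
print([float(w) for w in roc(n)])        # ~[0.521, 0.271, 0.146, 0.0625]
print([float(w) for w in rank_sum(n)])   # [0.4, 0.3, 0.2, 0.1]
```

The printout also illustrates the single listed disadvantage: for the same ranking, ROC gives the top criterion a noticeably larger share (0.52 versus 0.40 under RS), i.e., more dispersed weights.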

2.16.10 CRITIC Method

Advantages

  1. The derived weights incorporate both the contrast intensity and the conflict contained in the structure of the decision problem (Jahan et al. 2012); a minimal sketch is given at the end of this subsection.

  2. The method is based on analytical investigation of the evaluation matrix, extracting all the information contained in the evaluation criteria (Jahan et al. 2012).

  3. The method can easily be converted into algorithmic form (Jahan et al. 2012).

  4. The weights derived from the CRITIC method are found to embody the information transmitted by all the criteria participating in the multi-criteria problem. In addition, objective weights offer insight into the nature of the dilemmas created by the existence of conflicting criteria, and they enable the incorporation of interdependent criteria (Jahan et al. 2012).

Disadvantages

  1. Compared with the CRITIC method, the CCSD method has advantages such as no specific requirement on the normalization formulation and a clearer modeling mechanism (Mustajoki et al. 2004).
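Advantage 3 above is easy to demonstrate: CRITIC is fully determined by the decision matrix. In the sketch below, contrast intensity is captured by the standard deviation of each normalized criterion and conflict by one minus its correlation with the other criteria; the decision matrix is hypothetical, with benefit-type criteria only.

```python
import numpy as np

X = np.array([[7.0, 430.0, 0.61],
              [5.0, 410.0, 0.25],
              [9.0, 395.0, 0.47]])

# Normalize each (benefit-type) criterion column to [0, 1].
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

sigma = Z.std(axis=0, ddof=1)        # contrast intensity per criterion
R = np.corrcoef(Z, rowvar=False)     # criterion-criterion correlations
C = sigma * (1.0 - R).sum(axis=0)    # information content C_j
w = C / C.sum()                      # CRITIC weights
print(w)
```

A criterion receives a high weight when its values vary strongly across alternatives and when it is weakly (or negatively) correlated with the other criteria, which is exactly the combination of contrast intensity and conflict described in advantage 1.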