Background

A learning management system (LMS) is a web-based software application used to organize, implement, and evaluate education. LMS packages provide online learning material, evaluation tools, and a collaborative learning environment. A number of LMSs, such as ATutor, Claroline, and Moodle, have been produced under an open source software license. These free-licensed LMSs are extremely popular for e-learning (Awang and Darus 2011). Open source software (OSS) is software distributed without a license fee and with its source code included. OSS is a means of addressing the rising costs of campus-wide software applications while developing a learner-centered environment (van Rooij 2012; Williams van Rooij 2011).

The number of open source LMSs (OSS-LMSs) available online is continuously growing, and they are gaining considerable prominence. This repertoire of open source options makes it important for any planner who intends to adopt a learning system to evaluate and select among existing applications. In response to growing demands, software firms have been producing a variety of software packages that can be customized and tailored to meet the specific requirements of an organization (Jadhav and Sonar 2011; Cavus 2011). The evaluation and selection of inappropriate OSS-LMS packages adversely affect the business processes and functions of the organization. The task of OSS-LMS package evaluation and selection has become increasingly complex because of (1) the difficulty of selecting software appropriate for business needs from the large number of OSS-LMS packages available on the market, (2) the lack of experience and technical knowledge of the decision maker, and (3) the on-going developments in the field of information technology (Lin et al. 2007; Jadhav and Sonar 2009a, b; Cavus 2010).

The task of OSS-LMS evaluation and selection is often assigned under schedule pressure, and evaluators may not have time or experience to plan the evaluation and selection in detail. Therefore, evaluators may not use the most appropriate framework for evaluating and selecting OSS-LMS packages (Jadhav and Sonar 2011). Evaluating and selecting an OSS-LMS package that meets the specific requirements of an organization are complicated and time-consuming decision-making processes (Jadhav and Sonar 2009a, b). Therefore, researchers have been investigating an improved means of evaluating and selecting OSS-LMS packages.

The comprehensive insights into the evaluation and selection of OSS-LMS packages in this paper follow three directions: the available OSS-LMSs reported in published papers are investigated; the criteria for evaluating OSS-LMS packages are specified; and the selection methods that appear fit to solve the multi-criteria evaluation and selection problem for OSS-LMS packages are discussed. This paper is organized as follows. The “Research method” section presents the method for evaluating and selecting OSS-LMS packages. The subsequent sections discuss the limitations and research contributions of the study, respectively, and the final section concludes.

Research method

This study is based on the current and active OSS in the education field and presents a list of active OSS-LMS packages, the evaluation criteria (with descriptions) used to evaluate OSS-LMS packages, and the multi-attribute or multi-criteria decision-making (MADM/MCDM) techniques recommended as solutions for selecting the best OSS-LMS packages. This study aims to (a) provide a summary of available OSS-LMS reviews and (b) bridge gaps in the technical literature regarding the evaluation and selection of OSS-LMSs by using MADM/MCDM. The conceptual framework (Fig. 1) offers an overview of the research design.

Fig. 1
figure 1

Conceptual framework of the design and contribution of the study

The range of our research in terms of the evaluation and selection of OSS-LMSs based on MADM/MCDM techniques applies only to the LMS packages detected in the search engine databases we used. The review of technical literature was conducted in early 2014 by using three electronic databases: Elsevier’s ScienceDirect, IEEE Xplore, and Web of Science. The search query included the keywords “evaluation learning management system,” “evaluation and selection learning management system,” “e-learning system,” “open source,” and “open source software.” In the process, the title, abstract, conclusion, and methodology were also reviewed to filter the papers by using the scope and inclusion criteria.

This section analyzes the information gathered from the selected published papers. It consists of three parts: an investigation of the availability of OSS-LMS packages; a classification of the evaluation criteria related to OSS-LMSs; and recommended solutions for the software selection problem based on MADM/MCDM techniques.

Available OSS-LMS packages

We surveyed papers that featured open source LMSs. After analyzing the scope of these papers, 55 studies were selected. From the review, we created an initial taxonomy of 5 categories: 4 papers addressed the adoption of an OSS-LMS, 27 the evaluation process, 12 system-based reports, 8 the utilization of an OSS-LMS, and 4 merely mentioned OSS-LMSs (Abdullateef et al. 2015).

This survey aims to identify systems known to decision makers who intend to adopt such systems in their educational institutes. The following table lists OSS-LMSs and the papers in which they are cited.

Table 1 presents the frequency of references. The Moodle system is the most popular OSS-LMS because it is cited in over 40 papers. The Sakai system is the second most popular according to the number of mentions. The Dot LRN, Claroline, and ATutor systems are fairly equal in their number of references. The Dokeos, Online Learning and Training (OLAT), and LON-CAPA systems are not as popular as Moodle and have nine or fewer references each. The WeBWork, Spaghetti Learning, and Bodington systems are mentioned in only 2 papers. The least cited systems are Totara LMS, Open Source University Support System (OpenUSS), Online Platform for Academic Learning (OPAL), LearnSquare, LogiCampus, Ganesha LMS, eFront, Chamilo, Canvas, and Bazaar LMS. Moodle can thus be regarded as the most studied system because it has the highest number of references (Table 1).

Table 1 OSS-LMSs and frequency in papers

Table 2 lists the summaries of OSS-LMS packages. The list consists of 23 OSS-LMS packages, along with their respective websites, and a brief description by the vendor to help administrators or decision makers who intend to evaluate and select an OSS-LMS.

Table 2 Summaries of OSS-LMSs

Evaluation criteria of OSS-LMS packages

Software packages are evaluated to determine whether they suit functional, non-functional, and user requirements. By comparing the software against a well-prepared list of criteria, along with a number of realistic analyses, the evaluator can decide whether the software is appropriate for the customer (Radwan et al. 2014). According to our study of the technical literature, we selected the method for evaluating software on the basis of the following steps:

  1. Determine the availability of OSS-LMS packages from a list of possibly suitable software (Blanc and Korn 1992; Jadhav and Sonar 2009a, b; Cavus 2011; Graf and List 2005).

  2. Specify the evaluation criteria for OSS-LMS packages (Jadhav and Sonar 2011; Cavus 2011).

The available OSS-LMS packages are presented in the previous section. The specification of the criteria for software evaluation is explained in detail in the following sections.

Specification of the evaluation criteria for OSS-LMS packages

The specification of evaluation criteria for OSS-LMS packages is divided into three main parts, namely, identified evaluation criteria, established evaluation criteria, and crossover between the identified and established evaluation criteria.

Identified evaluation criteria for OSS-LMS packages

Of the papers surveyed for OSS-LMS packages, 27 belonged to the evaluation-process category. A total of 16 of these papers evaluated the learning process, whereas 11 evaluated the LMS itself. We selected these 11 papers, as well as 1 paper from the Google Scholar database that also evaluated OSS-LMSs, to obtain the evaluation criteria for OSS-LMSs. These studies are described in Table 3. The table offers a brief description of the evaluation criteria and of which LMS was used for the evaluation process.

Table 3 Brief description about the evaluation criteria and sample of LMS

Established evaluation criteria for OSS-LMS packages

From our research of technical literature, we combined and classified a collection of criteria suitable for OSS-LMS evaluation. We also defined the meaning of each evaluation criterion. The criteria are categorized into several groups such as functionality, reliability, usability, efficiency, maintainability, and portability. These criteria have been featured in several studies (Franch and Carvallo 2002, 2003; Morisio and Tsoukias 1997; Oh et al. 2003; Ossadnik and Lange 1999; Rincon et al. 2005; Welzel and Hausen 1993; Stamelos et al. 2000; Jadhav and Sonar 2009a, b). Among the ISO/IEC standards related to software quality, ISO/IEC 9126-1 specifically provides a quality model definition, which is used as a framework for software evaluation (Jadhav and Sonar 2009a, b). The rest of the criteria evaluate e-learning standards, security, privacy, vendor criteria, and learner’s communication environment. The efficiency of an LMS is evaluated by the sub-criteria of the usability group. The following subsections explain the criteria groups with their sub-criteria.

Functionality group

Functionality is the ability of the software to provide functions that meet the user’s requirements when using the software under specific conditions (Bevan 1999). Functionality is used to measure the level to which an LMS satisfies the functional requirements of an organization (Jadhav and Sonar 2011). The functionality group includes several criteria: course development, activity tracking, and assessment. Course development is a web interface for organizing the course’s materials. Activity tracking is important for students; hence, we focused on the criteria that cover the students’ progress: analysis of current data, time analysis, and sign-in data. Assessment refers to the tutor’s ability to test students through various means (Arh and Blazic 2007). Table 4 presents the criteria with their sub-criteria, along with a description and the availability of each sub-criterion.

Table 4 Functionality group

Reliability group

Reliability is the ability of the software package to run consistently without crashing under specific conditions (Jadhav and Sonar 2011). Reliability is used to assess the level of fault tolerance for the software package. Furthermore, reliability can be measured by monitoring the number of failures in a given period of execution for a specific task (Bevan 1999). Table 5 depicts the reliability group, its several sub-criteria, and the description and presence of the procedure.

Table 5 Reliability group

Usability group

Usability establishes how efficient, convenient, and easy to learn a system is (Kiah et al. 2014). The usability group and sub-criterion descriptions are indicated in Table 6, along with the presence of the procedure.

Table 6 Usability group

Maintainability

Maintainability is the ability of the software to be modified. Modifications may include corrections, improvements, or adaptation of the software to changes in the environment, requirements, and functional specifications (Bevan 1999). Maintainability metrics are difficult to measure in a limited experimental setting; they require long-term real-world evaluation. Therefore, we do not consider maintainability in our study (Kiah et al. 2014).

Portability group

Portability is the capability of software to be transferred from one environment to another (Bevan 1999; Jadhav and Sonar 2011). Table 7 lists the portability criteria and their several sub-criteria, along with their definitions and the presence of the procedure.

Table 7 Portability group

E-learning standards group

E-learning standards evaluate learning resources and provide descriptions of the learners’ profiles. E-learning standards are generally developed within the system to ensure interoperability, reusability, and portability, specifically for learning resources (Arh and Blazic 2007). Table 8 indicates the e-learning standards criteria, including several sub-criteria, the description of each sub-criterion, and the procedures.

Table 8 E-learning standards group

Security and privacy group

An overall evaluation of the security of LMS systems is beyond the scope of this research. Evaluating security requires extensive analysis of several aspects; however, we have obtained certain important criteria regarding security and privacy from Arh and Blazic (2007) and Jadhav and Sonar (2011). We use the security and privacy criteria to establish the ability of a system to safeguard personal data, to protect communication from attacks and threats on a user’s computer, and to manage the user’s level of permission (Arh and Blazic 2007). Table 9 depicts the security and privacy criteria, the sub-criteria, and the presence of the procedure.

Table 9 Security and privacy group

Vendor criteria

Vendor criteria are utilized to evaluate the vendor capabilities of software packages. The vendor criteria are important for selecting software because they offer guidance for establishing, operating, and customizing software packages (Jadhav and Sonar 2009a, b, 2011; Lee et al. 2013). Table 10 depicts the vendor criteria, with a description of each sub-criterion and the presence of the procedure.

Table 10 Vendor group

Learner’s communication group

To ensure continuous communication between teachers and students, LMSs require communication tools that use the latest technology. We use the learner’s communication criteria to evaluate continuous communication and interaction (Arh and Blazic 2007). Learner’s communication has two types, namely, synchronous and asynchronous communication (Arh and Blazic 2007). Table 11 describes the learner’s communication criteria, their sub-criteria, and the presence of the procedure.

Table 11 Learner’s communication group

Crossover between identified and established evaluation criteria for OSS-LMS packages

To determine the gap between the identified and established evaluation criteria, a crossover between the two is required. Table 12 uses x to indicate the criteria used in the papers. For each group of criteria, the percentage of papers that featured that group is provided.

Table 12 Crossover between the identified and established evaluation criteria

Table 12 indicates a gap in the OSS-LMS evaluation criteria. The existing software evaluation criteria are insufficient, and new overall quality criteria need to be established to evaluate OSS-LMS packages. The group criteria are insufficient. If we expand the analysis to calculate the criteria by using the percentage from the papers for each group, we find that the proportion of functional group criteria used is 41.8 %, which is insufficient to establish the applicability of the functional group evaluation to a programmed LMS. The same issue is also present in the other groups: the reliability group has 13.3 %, the usability group 6.6 %, the portability group 7.5 %, the e-learning standards group 33.3 %, the security and privacy group 34 %, the vendor group 6.2 %, and the learner’s communication environment group 42.5 %.

On the basis of these issues, we deduce that no group covers the complete evaluation criteria when compared with the established list. In our view, these percentages can be interpreted in two ways: first, the applicability of this type of criteria in the evaluation process is insufficient; second, these criteria do not meet international software engineering evaluation standards.

Another problem emerged when the software was evaluated by using several criteria (including functionality, reliability, usability, portability, e-learning standards support, security and privacy, vendor criteria, and learner’s communication environment). Each piece of software has several attributes, and each decision maker has different weights for these attributes. Thus, selecting the suitable software to use is difficult. On one hand, users who aim to use one kind of software may prioritize functionality, usability, and user support rather than other features, whereas users who intend to develop this software in actual education environments would probably target different attributes. On the other hand, LMS package selection (in particular, OSS) is an MADM/MCDM problem where each type of software is considered an available alternative for the decision maker. In other words, the MADM/MCDM problem refers to making preference decisions over the available alternatives that are characterized by multiple and usually conflicting attributes (Zaidan et al. 2014). The process of selecting the OSS-LMS packages involves the simultaneous consideration of multiple attributes to rank the available alternatives and select the best one. Thus, the selection process of the OSS-LMS packages can be considered a multi-criteria decision-making problem. Additional details of the fundamental terms of software selection based on multi-criteria analysis will be provided in the following section.

Recommended solution techniques based on MADM/MCDM

Useful techniques for dealing with real-world MADM/MCDM problems are presented here as recommended solutions in a collective method to help decision makers structure the problems to be solved and conduct the analysis, comparison, and ranking of the alternatives or multiple platforms. Accordingly, the selection of a suitable alternative is described in previous literature (Jadhav and Sonar 2009a, b); MADM/MCDM methods appear suitable for solving the problem of OSS-LMS package selection. The goals of MADM/MCDM are as follows: (1) help decision makers (DMs) choose the best alternative, (2) categorize the viable alternatives among a set of available alternatives, and (3) rank the alternatives in decreasing order of performance (Zaidan et al. 2015; Jadhav and Sonar 2009a, b). Each platform has its own multiple criteria that depend on a matrix known by several names: the payoff matrix, evaluation table matrix (ETM), or decision matrix (DM) (Whaiduzzaman et al. 2014). In any MADM/MCDM ranking, the fundamental terms need to be defined, including the DM or the ETM, the LMSs, and their criteria (Al-Safwani et al. 2014). The ETM, which consists of m LMSs and n criteria, must be created. With the intersection of each LMS and criterion given as \(x_{ij}\), we obtain the matrix \((x_{ij})_{m \times n}\):

$$\begin{array}{*{20}c} { \begin{array}{*{20}c}\qquad\qquad\qquad\qquad {\begin{array}{*{20}c} {C_{1} } & {C_{2} } \\ \end{array} } & {\begin{array}{*{20}c} \ldots & {C_{n} } \\ \end{array} } \\ \end{array} } \\ {DM/ETM = \begin{array}{*{20}c} {\begin{array}{*{20}c} {LMS_{1} } \\ {LMS_{2} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots \\ {LMS_{m} } \\ \end{array} } \\ \end{array} \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {x_{11} } & {x_{12} } \\ {x_{21} } & {x_{22} } \\ \end{array} } & {\begin{array}{*{20}c} \ldots & {x_{1n} } \\ \ldots & {x_{2n} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {x_{m1} } & {x_{m2} } \\ \end{array} } & {\begin{array}{*{20}c} \vdots & \vdots \\ \ldots & {x_{mn} } \\ \end{array} } \\ \end{array} } \right]} \\ \end{array} ,$$

where \(LMS_{1} , LMS_{2} , \ldots , LMS_{m}\) are the possible alternatives that decision makers must score (e.g., the Moodle platform); \(C_{1} , C_{2} , \ldots , C_{n}\) are the criteria for measuring each LMS’s performance (e.g., functionality criteria, reliability criteria, usability, etc.). Finally, \(x_{ij}\) is the rating of alternative \(LMS_{i}\) with respect to criterion \(C_{j}\). Certain processes need to be conducted to rank the alternatives, such as normalization, maximization of indicators, adding the weights, and other processes, depending on the method.
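As a concrete illustration, the construction of such a DM/ETM and a common min-max normalization step can be sketched in Python; the alternatives, criteria, and ratings below are hypothetical, not taken from this study:

```python
# Illustrative sketch (hypothetical data): building a decision/evaluation
# table matrix (DM/ETM) for m LMS alternatives and n criteria, then applying
# min-max normalization per criterion, a common preprocessing step.
import numpy as np

lms_names = ["Moodle", "Sakai", "ATutor"]                 # LMS_1 .. LMS_m
criteria = ["functionality", "reliability", "usability"]  # C_1 .. C_n

# x_ij = rating of alternative LMS_i with respect to criterion C_j
X = np.array([
    [8.0, 7.0, 9.0],
    [7.0, 8.0, 6.0],
    [6.0, 6.0, 7.0],
])

# Min-max normalization per criterion (all criteria treated as benefit-type)
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_norm)
```

After normalization, every criterion column spans [0, 1], so ratings expressed on different scales become comparable before weighting and ranking.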

For example, suppose D is the DM used to rank the performance of alternatives \(A_{i}\) (i = 1, 2, 3, 4) on the basis of criteria \(C_{j}\) (j = 1, 2, …, 6). Table 13 is an example of a multi-criteria problem reported in Hwang and Yoon (1981).

Table 13 Example of a multi-criteria problem

The data in the table are difficult to evaluate because of the large values in columns \(C_{2}\) and \(C_{3}\). See Fig. 2.

Fig. 2
figure 2

Graphic presentation of the Example in Table 13

Selecting the best software process from the software on offer is an important aspect of managing an information system. The selection process can be considered a MADM/MCDM problem that can address different and inconsistent criteria to select between predetermined decision alternatives (Oztaysi 2014). We will divide this section into two subsections. The first subsection describes the current selection methods applied for LMS selection. The second examines recent studies related to MCDM techniques applied for other applications, as shown in Fig. 3.

Fig. 3
figure 3

A review of MCDM methods

Selection techniques/tools applied on LMS

MADM/MCDM is an effective approach for addressing various types of decision-making problems. In the field of education, some papers employ MADM/MCDM techniques and tools to evaluate and select the best LMS. These techniques and tools include the decision expert shell (DEX shell) system (Arh and Blazic 2007; Pipan et al. 2007), easy way to evaluate LMS (EW-LMS) (Cavus 2011), and analytic hierarchy process (Srđević et al. 2012). Table 14 presents a brief description of these references, as well as the MADM/MCDM techniques and tools used for selecting the best LMS.

Table 14 MADM/MCDM techniques are used to select LMS

DEX shell system

Arh and Blazic (2007) and Pipan et al. (2007) used the DEX shell system for scoring, ranking, and selecting the best LMS. DEX was developed as an interactive expert system shell that offers tools to create and verify a knowledge base, evaluate choices, and explain the final results. The structure of the knowledge base and the evaluation procedures closely match the multi-criteria decision-making paradigm; however, the system’s handling of the consistency of the decision-making process and its weighted sum of the criteria are achieved with limited theoretical justification (Srđević et al. 2012).

EW-LMS

Cavus (2011) developed EW-LMS, a web-based system that can be used easily on the Internet anywhere and anytime. The system is designed as a decision support system that uses a smart algorithm derived from artificial intelligence concepts with fuzzy values. It adopts fuzzy logic values to assign the weight of each criterion and utilizes the linear weighted attribute model to select the best alternative. However, the technique used to assign the criterion weights is inaccurate because the user assigns the fuzzy logic weights within each group arbitrarily (Jadhav and Sonar 2009a, b).
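The linear weighted attribute model underlying such tools can be sketched as simple additive weighting; in this sketch the fuzzy weight-elicitation step is replaced by fixed hypothetical weights, and all scores are illustrative:

```python
# Minimal sketch (hypothetical weights and scores) of the linear weighted
# attribute model: each alternative's total is the weighted sum of its
# normalized criterion scores, and the highest total wins.
import numpy as np

weights = np.array([0.5, 0.3, 0.2])   # criterion weights, summing to 1
scores = np.array([                    # rows: LMS alternatives, cols: criteria
    [0.9, 0.6, 0.8],                   # LMS A
    [0.7, 0.9, 0.6],                   # LMS B
])

totals = scores @ weights              # weighted sum per alternative
best = int(np.argmax(totals))          # index of the best alternative
print(totals, "best index:", best)
```

The critique in the text applies to how the weights are obtained, not to the aggregation itself: once weights are fixed, the ranking is a straightforward dot product.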

LMS selection based on AHP

Srđević et al. (2012) presented an evaluation method for selecting the most appropriate LMS. The authors propose a breakdown of complex criteria into easily comprehended sub-criteria through a method called the analytic hierarchy process (AHP). AHP ranks the alternative software packages as their features are considered and modified, and removes unsuitable software from the evaluation process (Zaidan et al. 2014; Jadhav and Sonar 2009a, b).

AHP was devised by Saaty in 1980 and has become a commonly used and widely applied technique for MCDM. AHP allows the use of both qualitative and quantitative criteria at the same time. It also allows the utilization of independent variables and compares attributes in a hierarchical structure.

In a tree structure, the hierarchy begins with the goal at the top. The lower levels correspond to the criteria, sub-criteria, and so on. In this hierarchical tree, the process starts from the leaf nodes and progresses up to the top level. Each output level of the hierarchy corresponds to the weight or influence of the different branches originating from that level. Finally, the different branches are compared to select the most appropriate alternative on the basis of the attributes (Whaiduzzaman et al. 2014; San Cristóbal 2011; Zaidan et al. 2014; Oztaysi 2014; Srđević et al. 2012; Jadhav and Sonar 2011; Ngai and Chan 2005; Krylovas et al. 2014). AHP involves the following steps:

  • Step 1: Pairwise comparison between criteria;

  • Step 2: Raising the attained matrix to an arbitrarily large power;

  • Step 3: Normalizing row sums of the raised matrix using the following equation:

$${\text{w}}_{\text{i}} = \frac{{{\text{w}}_{\text{i}}^{ '} }}{{\mathop \sum \nolimits_{{{\text{j}} = 1}}^{\text{m}} {\text{w}}_{\text{j}}^{ '} }}$$
(1)

where

$$w_{i}^{'} = \mathop \sum \limits_{j = 1}^{m} a_{ij}^{'} ,\quad i = 1,2, \ldots ,m$$

and \(a_{ij}^{\prime}\) is the element in the i-th row and j-th column of the raised matrix.

  • Step 4: Rating the alternatives in terms of the criteria;

  • Step 5: Synthesizing the vectors from the last two steps to obtain the final priority vectors for the alternatives.
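Steps 2 and 3 can be sketched as follows, using a hypothetical 3×3 pairwise comparison matrix; raising the matrix to a large power and normalizing its row sums approximates the principal-eigenvector priority weights:

```python
# Sketch of AHP Steps 1-3 (hypothetical pairwise judgments): power-method
# approximation of the principal eigenvector via raised-matrix row sums.
import numpy as np

A = np.array([            # a_ij: how strongly criterion i is preferred over j
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

M = np.linalg.matrix_power(A, 16)   # Step 2: "arbitrarily large" power
w_prime = M.sum(axis=1)             # w'_i = sum_j a'_ij of the raised matrix
w = w_prime / w_prime.sum()         # Step 3: normalized priority weights
print(w)
```

For this matrix, the weights come out near (0.63, 0.26, 0.11), matching the intuition that the first criterion dominates the pairwise judgments.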

Selection techniques applied for other applications

Decision-making theories have been applied successfully in different fields over the past few decades. The variety and diversity of MADM/MCDM applications have helped decision makers. MADM/MCDM can allow the application of multiple conflicting criteria. One main objective of this study is to introduce a critical assessment of available MADM/MCDM approaches and describe how these approaches are used in OSS-LMS selection (Wang et al. 2013). We selected recent studies related to decision-making selection techniques, which are listed as follows according to Refs. (Wang et al. 2013; Jadhav and Sonar 2011; Triantaphyllou 2000; Triantaphyllou et al. 1998; Al-Safwani et al. 2014).

Analytic network process (ANP)

ANP is defined as a mathematical theory that can systematically handle all types of dependencies and can be used in numerous fields. ANP, developed by Saaty (Wu and Lee 2007), is a multi-criteria decision-making method that compares different alternatives to select the best one.

The ANP technique allows extra relevant criteria, whether tangible or intangible, to be added to existing ones, which significantly influences the decision-making process. Furthermore, ANP considers the interdependencies among different levels of the set criteria. Finally, ANP permits the analysis of both quantitative and qualitative features, thus making it a preferred technique in many real-world situations (Yazgan et al. 2009). ANP is composed of four major steps:

  • Step 1: Model construction and problem structuring

  • Step 2: Pairwise comparison matrices and priority vectors

  • Step 3: Supermatrix formation

  • Step 4: Selection of the best alternatives
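The supermatrix of Step 3 and its limit can be sketched as follows, with hypothetical local priority vectors; raising a column-stochastic supermatrix to a large power yields the limiting (global) priorities:

```python
# Sketch of ANP Step 3 (hypothetical values): the columns of the
# supermatrix are local priority vectors; its high powers converge to a
# rank-1 limit matrix whose identical columns are the global priorities.
import numpy as np

W = np.array([             # column-stochastic supermatrix (columns sum to 1)
    [0.0, 0.6, 0.3],
    [0.5, 0.0, 0.7],
    [0.5, 0.4, 0.0],
])

L = np.linalg.matrix_power(W, 50)   # limit supermatrix
limit = L[:, 0]                      # any column approximates the global priorities
print(limit)
```

Convergence here relies on the supermatrix being irreducible and aperiodic; real ANP models add cluster weighting to make the supermatrix stochastic before taking the limit.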

Elimination and choice expressing reality

Roy and his colleagues at the SEMA consultancy company developed Elimination and Choice Expressing Reality (ELECTRE) in the mid-1960s. Since then, several variations of the method have been developed, such as ELECTRE I, ELECTRE II, ELECTRE III, ELECTRE IV, ELECTRE IS, and ELECTRE TRI. All of these variations consist of two sets of parameters: veto thresholds and the importance coefficients (Mohammadshahi 2013; Whaiduzzaman et al. 2014).

The method is classified as an outranking MCDM method. Compared with the previous methods, this approach is computationally complex; even the simplest variant of ELECTRE is reported to involve up to 10 steps. The mechanics of the method allow it to compare alternatives to determine their outranking relationships. These relationships are utilized to identify and/or eliminate the alternatives dominated by others, thus subsequently reducing the number of available alternatives.

Another feature of ELECTRE is its ability to handle both qualitative and quantitative criteria, which provides a basis for a complete ordering of the different options. The preferred alternatives are weighed against one another through concordance indices, and the associated thresholds allow the drafting of graphs that can later be used to obtain the ranking of alternatives (Rehman et al. 2012; San Cristóbal 2011; Whaiduzzaman et al. 2014).

The IF ELECTRE method includes eight steps (Wu and Chen 2009):

Step 1: Determine the DM

$${\text{Let}}\;X_{ij} = \left( {\mu_{ij} , \, \nu_{ij} , \, \pi_{ij} } \right),$$

\(\mu_{ij}\) is the degree of membership of the i-th alternative with respect to the j-th attribute, \(\nu_{ij}\) is the degree of non-membership of the i-th alternative with respect to the j-th attribute, and \(\pi_{ij}\) is the intuitionistic index of the i-th alternative with respect to the j-th attribute. M is an intuitionistic fuzzy DM, where

$$0 \le \mu_{ij} + \nu_{ij} \le 1,\quad i \, = 1,2, \ldots ,m, \;\; j \, = 1,2, \ldots ,n$$
(2)
$$\pi_{ij} = 1 - \mu_{ij} - \nu_{ij}$$
(3)
$$M = \begin{array}{*{20}c} {A_{1} } \\ \vdots \\ {A_{m} } \\ \end{array} \left[ {\begin{array}{*{20}c} {x_{11} } & \cdots & {x_{1n} } \\ \vdots & \ddots & \vdots \\ {x_{m1} } & \cdots & {x_{mn} } \\ \end{array} } \right]$$

In the DM M, we have m alternatives (\(A_{1}\) to \(A_{m}\)) and n attributes. The subjective importance of the attributes, W, is given by the decision maker(s).
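Step 1 can be sketched as follows, with hypothetical (μ, ν) entries; the sketch checks the constraint of Eq. (2) and computes the hesitation index of Eq. (3):

```python
# Sketch of Step 1 (hypothetical entries): an intuitionistic fuzzy decision
# matrix stores a (mu, nu) pair per alternative/attribute. Eq. (2) requires
# 0 <= mu + nu <= 1, and Eq. (3) defines the hesitation index pi.
membership = [
    [(0.7, 0.2), (0.5, 0.3)],   # alternative A1 over attributes 1..2
    [(0.4, 0.5), (0.8, 0.1)],   # alternative A2
]

def hesitation(mu, nu):
    assert 0.0 <= mu + nu <= 1.0     # Eq. (2): validity of the IFS pair
    return 1.0 - mu - nu             # Eq. (3): pi = 1 - mu - nu

pi = [[hesitation(mu, nu) for (mu, nu) in row] for row in membership]
print(pi)
```

A large π signals hesitation: the decision maker commits strongly neither to membership nor to non-membership for that alternative/attribute pair.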

Step 2: Determine the concordance and discordance sets

The method uses the concept of intuitionistic fuzzy set (IFS) relations to determine the concordance and discordance sets.

The strong concordance set \(C_{\text{kl}}\) of \(A_{\text{k}}\) and \(A_{\text{l}}\) is composed of all criteria where \(A_{\text{k}}\) is preferred to \(A_{\text{l}}\). In other words, the strong concordance set \(C_{\text{kl}}\) can be formulated as

$$C_{kl} = \left\{ {j|\mu_{kj} \ge \mu_{lj} ,v_{kj} < v_{lj} \quad and\quad \pi_{kj} < \pi_{lj} } \right\}$$
(4)

The moderate concordance set \(C_{\text{kl}}^{'}\) is defined as

$$C_{kl}^{'} = \left\{ {j|\mu_{kj} \ge \mu_{lj} ,v_{kj} < v_{lj} \quad and \quad \pi_{kj} \ge \pi_{lj} } \right\}$$
(5)

The weak concordance set \(C_{\text{kl}}^{{''}}\) is defined as

$$C_{kl}^{''} = \left\{ {j|\mu_{kj} \ge \mu_{lj} \quad and\quad v_{kj} \ge v_{lj} } \right\}$$
(6)

The strong discordance set \(D_{\text{kl}}\) is composed of all criteria where \(A_{\text{k}}\) is not preferred to \(A_{\text{l}}\). The strong discordance set \(D_{\text{kl}}\) can be formulated as

$$D_{kl} = \left\{ {j |\mu_{kj} < \mu_{lj} ,v_{kj} \ge v_{lj} \quad and\quad \pi_{kj} \ge \pi_{lj} } \right\}$$
(7)

The moderate discordance set \(D_{kl}^{{\prime }}\) is defined as

$$D_{kl}^{'} = \left\{ {j |\mu_{kj} < \mu_{lj} ,v_{kj} \ge v_{lj} \quad and\quad \pi_{kj} < \pi_{lj} } \right\}$$
(8)

The weak discordance set \(D_{kl}^{{''}}\) is defined as

$$D_{kl}^{''} = \left\{ {j |\mu_{kj} < \mu_{lj} \quad and \quad v_{kj} < v_{lj} } \right\}$$
(9)

The decision maker(s) provide the weights for the different sets.

Step 3: Calculate the concordance matrix

The relative value of the concordance sets is measured by using the concordance index. The concordance index is equal to the sum of the weights associated with these criteria and relations that are contained in the concordance sets.

Thus, the concordance index \(c_{kl}\) between \(A_{k}\) and \(A_{l}\) is defined as follows:

$$c_{kl} = w_{c} \times \mathop \sum \limits_{{j \in C_{kl} }} w_{j} + w_{c'} \times \mathop \sum \limits_{{j \in C'_{kl} }} w_{j} + w_{c''} \times \mathop \sum \limits_{{j \in C''_{kl} }} w_{j}$$
(10)

where \(w_{C}\), \(w_{C'}\), and \(w_{C''}\) are the weights of the different sets defined in Step 2, and \(w_{j}\) is the weight of attribute j identified in Step 1.

Step 4: Calculate the discordance matrix

The discordance index \(d_{kl}\) is defined as follows:

$$d_{kl} = \frac{{\mathop {\hbox{max} }\limits_{{j \in D_{kl} }} w_{D*} \times dis\left( {x_{kj} , x_{lj} } \right)}}{{\mathop {\hbox{max} }\limits_{j \in J} dis\left( {x_{kj} , x_{lj} } \right)}}$$
(11)
$$dis\left( {x_{kj} , x_{lj} } \right) = \sqrt {\frac{1}{2}\left( {(\mu_{kj} - \mu_{lj} )^{2} + (v_{kj} - v_{lj} )^{2} + (\pi_{kj} - \pi_{lj} )^{2} } \right)}$$
(12)

where \(w_{D^{*}}\) is equal to \(w_{D}\), \(w_{D'}\), or \(w_{D''}\), depending on the type of discordance set defined in Step 2.

Step 5: Determine the concordance dominance matrix

This matrix can be calculated by adopting a threshold value for the concordance index. \(A_{k}\) can only dominate \(A_{l}\) if its corresponding concordance index \(c_{kl}\) exceeds a certain threshold value \(c^{ - }\), i.e., \(c_{kl} \ge c^{ - }\), and

$$c^{ - } = \frac{{\mathop \sum \nolimits_{k = 1,k \ne l}^{m} \mathop \sum \nolimits_{l = 1,l \ne k}^{m} c_{kl} }}{{m \times \left( {m - 1} \right)}}$$
(13)

On the basis of the threshold value, a Boolean matrix F can be constructed, the elements of which are defined as

$$f_{kl} = 1,\quad if \; c_{kl} \ge c^{ - } ;\quad f_{kl} = 0, \quad if \; c_{kl} < c^{ - } .$$

Each element of 1 in matrix F represents the dominance of one alternative over another.

Step 6: Determine the discordance dominance matrix

This matrix is constructed analogously to the F matrix on the basis of a threshold value \(d^{ - }\) for the discordance indices. The elements \(g_{kl}\) of the discordance dominance matrix G are calculated as follows:

$$d^{ - } = \frac{{\mathop \sum \nolimits_{k = 1,k \ne l}^{m} \mathop \sum \nolimits_{l = 1,l \ne k}^{m} d_{kl} }}{{m \times \left( {m - 1} \right)}}$$
(14)
$$g_{kl} = 1,\quad if \; d_{kl} \le d^{ - } ; g_{kl} = 0, \quad if\; d_{kl} > d^{ - }$$

The unit elements in the G matrix also represent the dominance relationships between any two alternatives.

Step 7: Determine the aggregate dominance matrix

This step involves the calculation of the intersection of the concordance dominance matrix F and the discordance dominance matrix G. The resulting matrix, which is called the aggregate dominance matrix E, is defined by its typical elements \(e_{kl}\) as follows:

$$e_{kl} = f_{kl} .g_{kl}$$
(15)

Step 8: Eliminate the less favorable alternatives

The aggregate dominance matrix E provides the partial-preference ordering of the alternatives. If \(e_{kl} = 1\), \(A_{k}\) is preferred to \(A_{l}\) for both the concordance and discordance criteria. However, \(A_{k}\) may still be dominated by other alternatives. Hence, \(A_{k}\) is not dominated in ELECTRE when the following is obtained:

$$e_{kl} = 1,\;{\text{for}}\;{\text{at}}\;{\text{least}}\;{\text{one}}\;l,\quad l = 1,2, \ldots ,m,\;k \ne l;$$
$$\left( {e_{ik} = 0,\;{\text{for}}\;{\text{all}}\;i,\quad i = 1,2, \ldots ,m,\;i \ne l,\;i \ne k;} \right)$$

This condition appears difficult to apply. However, the dominated alternatives can be easily identified in the E matrix. If any column of the E matrix has at least one element equal to 1, this column is “ELECTREcally” dominated by the corresponding row(s). Hence, we simply eliminate any column(s) containing an element equal to 1.
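
As an illustration, Steps 5 through 8 can be sketched in a short Python helper. This is a minimal sketch, not the authors' implementation: it assumes the concordance matrix C (Step 3) and the discordance matrix D (Step 4) are already available as square NumPy arrays, and the name `electre_dominance` is hypothetical.

```python
import numpy as np

def electre_dominance(C, D):
    """Steps 5-8: derive the dominance matrices from the concordance
    matrix C and the discordance matrix D (both m x m, diagonals unused)."""
    m = C.shape[0]
    off = ~np.eye(m, dtype=bool)                 # off-diagonal entries only
    c_bar = C[off].sum() / (m * (m - 1))         # threshold, Eq. (13)
    d_bar = D[off].sum() / (m * (m - 1))         # threshold, Eq. (14)
    F = ((C >= c_bar) & off).astype(int)         # concordance dominance matrix
    G = ((D <= d_bar) & off).astype(int)         # discordance dominance matrix
    E = F * G                                    # aggregate dominance, Eq. (15)
    dominated = [l for l in range(m) if E[:, l].any()]  # columns with a 1
    return E, dominated
```

The returned list flags the "ELECTREcally" dominated alternatives (columns of E that contain a 1); the remaining alternatives form the preferred set.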

Fuzzy

Fuzzy theory was introduced by Zadeh in 1965. It is an extensive theory applied to the uncertainties in human judgment. The theory can also rectify doubts associated with the available data and information in multiple criteria decision making.

An MCDM model based on fuzzy theory can be used to evaluate and choose, from a pool of options, a specific alternative that matches the criteria set by the decision maker. Linguistic values represented by fuzzy numbers are used to rate the alternatives and to weigh their importance. Instead of the crisp truth values of Boolean logic, intervals of membership degrees are used in the decision-making process (Alabool and Mahmood 2013; Whaiduzzaman et al. 2014).

Let X be the universe of discourse, and X = {x1, x2,…, xn}. A* is a fuzzy set of X that represents a set of ordered pairs {(x1, µA*(x1)), (x2, µA*(x2)),…, (xn, µA*(xn))}, where µA*: X → [0,1] is the membership function of A*, and µA*(xi) stands for the membership degree of xi in A*.

A fuzzy number represents a fuzzy subset of the universe of discourse X that is both convex and normal. The triangular, trapezoidal, and bell-shaped fuzzy numbers are common types of membership functions; this study adopts the triangular type. A triangular fuzzy number is a fuzzy number represented by three points (p1, p2, p3) with p1 < p2 < p3. The membership function µA* of the fuzzy number A* is:

$$\mu_{A*} \left( x \right) = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {x < p_{1} } \hfill \\ {\frac{{x - p_{1} }}{{p_{2} - p_{1} }},} \hfill & {p_{1} \le x \le p_{2} } \hfill \\ {\frac{{p_{3} - x}}{{p_{3} - p_{2} }},} \hfill & {p_{2} \le x \le p_{3} } \hfill \\ {0,} \hfill & {x > p_{3} } \hfill \\ \end{array} } \right.$$
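
For illustration, the standard triangular membership function can be written as a short Python helper (the function name is hypothetical; the three points p1 < p2 < p3 follow the definition in the text):

```python
def triangular_membership(x, p1, p2, p3):
    """Membership degree of x in the triangular fuzzy number (p1, p2, p3)."""
    if x < p1 or x > p3:
        return 0.0                       # outside the support
    if x <= p2:
        return (x - p1) / (p2 - p1)      # rising edge
    return (p3 - x) / (p3 - p2)          # falling edge
```

For example, with the fuzzy number (1, 2, 4), the membership degree peaks at 1 for x = 2 and falls linearly to 0 at x = 1 and x = 4.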

Technique for order of preferences by similarity to ideal solution

The Technique for Order of Preferences by Similarity to Ideal Solution (TOPSIS) presents a preference index of similarity for ideal solutions. Thus, this approach can reach the closest possible solution to the ideal one and drive the solution as far away as possible from the anti-ideal solution at the same time.

A DM (decision matrix) is first needed for this technique. The matrix is normalized using vectors, followed by the identification of both the ideal and anti-ideal solutions within the normalized DM. The technique was invented by Hwang and Yoon in 1981. It chooses the alternative with the shortest distance from the ideal solution and the farthest distance from the anti-ideal solution. This technique is adopted to select a solution from a finite set of options.

Ideally, the optimal solution has the shortest distance from the ideal solution and the farthest possible distance from the anti-ideal solution at the same time. The aggregate function produced by the TOPSIS technique drives the solution as near as possible to the ideal solution and as far as possible from the anti-ideal solution; however, a reference point must be set near the ideal solution (ur Rehman et al. 2012; San Cristóbal 2011; Whaiduzzaman et al. 2014; Oztaysi 2014; Cui-yun et al. 2009). The TOPSIS method includes the following steps:

Step 1: Construct the normalized DM

This process transforms the various attribute dimensions into non-dimensional attributes, which allows comparison across attributes. The matrix \(\left( {{\text{x}}_{\text{ij}} } \right)_{\text{m*n}}\) is normalized to the matrix \({\text{R}} = \left( {{\text{r}}_{\text{ij}} } \right)_{\text{m*n}}\) by using the normalization method:

$$\varvec{r}_{{\varvec{ij}}} = \varvec{x}_{{\varvec{ij}}} /\sqrt {\mathop \sum \limits_{{\varvec{i} = 1}}^{\varvec{m}} \varvec{x}_{{\varvec{ij}}}^{2} }$$
(16)

This process results in a new matrix R:

$$\varvec{R} = \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\varvec{r}_{11} } & {\varvec{r}_{12} } \\ {\varvec{r}_{21} } & {\varvec{r}_{22} } \\ \end{array} } & {\begin{array}{*{20}c} \ldots & {\varvec{r}_{{1\varvec{n}}} } \\ \ldots & {\varvec{r}_{{2\varvec{n}}} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {\varvec{r}_{{\varvec{m}1}} } & {\varvec{r}_{{\varvec{m}2}} } \\ \end{array} } & {\begin{array}{*{20}c} \vdots & \vdots \\ \ldots & {\varvec{r}_{{\varvec{mn}}} } \\ \end{array} } \\ \end{array} } \right]\varvec{ }$$
(17)

Step 2: Construct the weighted normalized DM

In this process, a set of weights \(w = w_{1} , w_{2} , w_{3 } , \cdots ,w_{j} , \cdots , w_{n}\) from the decision maker is applied to the normalized DM. The resulting matrix is calculated by multiplying each column of the normalized DM (R) by its associated weight \(w_{j}\). It should be noted that the weights sum to one:

$$\mathop \sum \limits_{{\varvec{j} = 1}}^{\varvec{n}} \varvec{w}_{\varvec{j}} = 1$$
(18)

This process results in a new matrix V:

$${\mathbf{V}} = \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\varvec{v}_{11} } & {\varvec{v}_{12} } \\ {\varvec{v}_{21} } & {\varvec{v}_{22} } \\ \end{array} } & {\begin{array}{*{20}c} \ldots & {\varvec{v}_{{1\varvec{n}}} } \\ \ldots & {\varvec{v}_{{2\varvec{n}}} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {\varvec{v}_{{\varvec{m}1}} } & {\varvec{v}_{{\varvec{m}2}} } \\ \end{array} } & {\begin{array}{*{20}c} \vdots & \vdots \\ \ldots & {\varvec{v}_{{\varvec{mn}}} } \\ \end{array} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\varvec{w}_{1} \varvec{r}_{11} } & {\varvec{w}_{2} \varvec{r}_{12} } \\ {\varvec{w}_{1} \varvec{r}_{21} } & {\varvec{w}_{2} \varvec{r}_{22} } \\ \end{array} } & {\begin{array}{*{20}c} \ldots & {\varvec{w}_{\varvec{n}} \varvec{r}_{{1\varvec{n}}} } \\ \ldots & {\varvec{w}_{\varvec{n}} \varvec{r}_{{2\varvec{n}}} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {\varvec{w}_{1} \varvec{r}_{{\varvec{m}1}} } & {\varvec{w}_{2} \varvec{r}_{{\varvec{m}2}} } \\ \end{array} } & {\begin{array}{*{20}c} \vdots & \vdots \\ \ldots & {\varvec{w}_{\varvec{n}} \varvec{r}_{{\varvec{mn}}} } \\ \end{array} } \\ \end{array} } \right]$$
(19)

Step 3: Determine the ideal and negative ideal solutions

In this process, two artificial alternatives \(A^{*}\) (ideal alternative) and \(A^{ - }\) (negative ideal alternative) are defined as follows:

$$\begin{aligned} \varvec{A}^{*} & = \left\{ {\left( {\left( {\mathop {{\mathbf{max}}}\limits_{i} \varvec{v}_{ij} |\varvec{j} \in \varvec{J}} \right), \left( {\mathop {{\mathbf{min}}}\limits_{i} \varvec{v}_{ij} |\varvec{j} \in \varvec{J}^{ - } } \right)|\varvec{i} = 1,2, \ldots ,\varvec{m}} \right)} \right\} \\ & \quad = \left\{ {v_{1}^{*} , v_{2}^{*} , \ldots , v_{j}^{*} , \ldots v_{n}^{*} } \right\} \\ \end{aligned}$$
(20)
$$\begin{aligned} \varvec{A}^{ - } & = \left\{ {\left( {\left( {\mathop {\varvec{m}{\mathbf{in}}}\limits_{\varvec{i}} \varvec{v}_{{\varvec{ij}}} |\varvec{j} \in \varvec{J}} \right), \left( {\mathop {\hbox{max} }\limits_{\varvec{i}} \varvec{v}_{{\varvec{ij}}} |\varvec{j} \in \varvec{J}^{ - } } \right) |\varvec{i} = 1,2, \ldots ,\varvec{m}} \right)} \right\} \\ & \quad = \left\{ {\varvec{v}_{1}^{ - } ,\varvec{ v}_{2}^{ - } , \ldots ,\varvec{ v}_{\varvec{j}}^{ - } , \ldots \varvec{v}_{\varvec{n}}^{ - } } \right\} \\ \end{aligned}$$
(21)

J is a subset of \(\left\{ {j = 1,2, \ldots ,n} \right\}\) that represents the benefit attributes, whereas \(J^{ - }\) is the complement set of J (which can be noted as \(J^{c}\)), i.e., the set of cost attributes.

Step 4: Separation measurement calculation based on the Euclidean distance

In this process, the separation measurement is conducted by calculating the distance between each alternative in V and the ideal vector \(A^{*}\) by using the Euclidean distance, which is expressed as follows:

$$\varvec{S}_{{\varvec{i}^{*} }} = \sqrt {\mathop \sum \limits_{{\varvec{j} = 1}}^{\varvec{n}} \left( {\varvec{v}_{{\varvec{ij}}} - \varvec{v}_{\varvec{j}}^{*} } \right)^{2} } , \quad \varvec{i} = \left( {1,2, \ldots \varvec{m}} \right)$$
(22)

Similarly, the separation measurement for each alternative in V from the negative ideal \(A^{ - }\) is given by the following:

$$\varvec{S}_{{\varvec{i}^{ - } }} = \sqrt {\mathop \sum \limits_{{\varvec{j} = 1}}^{\varvec{n}} \left( {\varvec{v}_{{\varvec{ij}}} - \varvec{v}_{\varvec{j}}^{ - } } \right)^{2} } , \quad \varvec{i} = \left( {1,2, \ldots \varvec{m}} \right)$$
(23)

At the end of Step 4, two values, namely, \(S_{{i^{*} }}\) and \(S_{{i^{ - } }}\), are computed for each alternative. These two values represent the distance between each alternative and the ideal and negative ideal solutions, respectively.

Step 5: Closeness to the ideal solution calculation

In this process, the closeness of \(A_{i}\) to the ideal solution \(A^{*}\) is defined as follows:

$$\varvec{C}_{{\varvec{i}^{*} }} = \varvec{S}_{{\varvec{i}^{ - } }} /\left( {\varvec{S}_{{\varvec{i}^{ - } }} + \varvec{S}_{{\varvec{i}^{*} }} } \right), \quad 0 < \varvec{C}_{{\varvec{i}^{*} }} < 1, \; \varvec{i} = \left( {1,2, \ldots \varvec{m}} \right)$$
(24)

Evidently, \(C_{{i^{*} }} = 1\) if and only if \(A_{i} = A^{*}\). Similarly, \(C_{{i^{*} }} = 0\) if and only if \(A_{i} = A^{ - }\).

Step 6: Ranking the alternative according to the closeness to the ideal solution

The set of alternatives \(A_{i}\) can now be ranked in descending order of \(C_{{i^{*} }}\). The alternative with the highest value has the best performance.
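
Steps 1 through 6 can be sketched as a single Python function. This is a minimal NumPy sketch under the assumptions stated in the text (vector normalization, Eqs. 16–24); the name `topsis` and the boolean `benefit` marker for distinguishing benefit from cost criteria are conventions chosen here, not part of the source.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank m alternatives on n criteria. X: m x n decision matrix,
    w: weights summing to one, benefit: True for benefit criteria,
    False for cost criteria."""
    R = X / np.sqrt((X ** 2).sum(axis=0))                     # Step 1, Eq. (16)
    V = R * w                                                 # Step 2, Eq. (19)
    A_star = np.where(benefit, V.max(axis=0), V.min(axis=0))  # Step 3, Eq. (20)
    A_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))   # Step 3, Eq. (21)
    S_star = np.sqrt(((V - A_star) ** 2).sum(axis=1))         # Step 4, Eq. (22)
    S_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))           # Step 4, Eq. (23)
    C = S_neg / (S_neg + S_star)                              # Step 5, Eq. (24)
    return C, np.argsort(-C)                                  # Step 6: descending
```

For instance, if one alternative dominates all others on every benefit criterion, its closeness coefficient is 1 and it is ranked first.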

VIKOR

The compromise ranking method, also known as VIKOR, is an effective technique for decision making with more than one criterion. The acronym is derived from “Vise Kriterijumska Optimizacija I Kompromisno Resenje.” The multi-criteria ranking index is developed on the basis of measurements of proximity to the ideal solution (usually in the form of distance). This technique was introduced by Opricovic in 2004 to optimize complicated and dynamic processes through compromise. The technique uses linear normalization; however, the values do not depend on the evaluation of a single criterion alone. VIKOR also uses an aggregate function to balance the distance between the ideal solution and its opposite, which helps the decision maker choose from a set of conflicting solutions (Alabool and Mahmood 2013; Whaiduzzaman et al. 2014; San Cristóbal 2011).

The VIKOR steps are as follows:

Step 1: Calculate \(x_{i}^{*}\) and \(x_{i}^{ - }\)

$$x_{i}^{*} = { \hbox{max} }[\left( {x_{ij} } \right)|j = 1,2, \ldots ,m]$$
(25)
$$x_{i}^{ - } = { \hbox{min} }[\left( {x_{ij} } \right)|j = 1,2, \ldots ,m]$$
(26)

where \(x_{ij}\) is the value of the ith criterion function for the jth alternative.

Step 2: Compute the values of \(S_{j}\) and \(R_{j}\)

$$S_{j} = \mathop \sum \limits_{i = 1}^{n} w_{i} \frac{{x_{i}^{*} - x_{ij} }}{{x_{i}^{*} - x_{i}^{ - } }}$$
(27)
$$R_{j} = \hbox{max} \left[ {w_{i} \left( {\frac{{x_{i}^{*} - x_{ij} }}{{x_{i}^{*} - x_{i}^{ - } }}} \right)} \right]\quad i = 1,2, \ldots ,n$$
(28)

where \(S_{j}\) and \(R_{j}\) denote the utility measure and the regret measure for the jth alternative, respectively. Furthermore, \(w_{i}\) is the weight of each criterion.

Step 3: Compute the values of \(S^{*}\), \(S^{ - }\), \(R^{*}\), and \(R^{ - }\)

$$S^{*} = \hbox{min} \left( {S_{j} } \right), \quad S^{ - } = \hbox{max} \left( {S_{j} } \right), \quad j = 1,2, \ldots ,m$$
(29)
$$R^{*} = \hbox{min} \left( {R_{j} } \right), \quad R^{ - } = \hbox{max} \left( {R_{j} } \right), \quad j = 1,2, \ldots ,m$$
(30)

Step 4: Determine the value of \(Q_{j}\) for \(j = 1,2, \ldots ,m\) and rank the alternatives by the values of \(Q_{j}\)

$$Q_{j} = v\left( {\frac{{S_{j} - S^{*} }}{{S^{ - } - S^{*} }}} \right) + \left( {1 - v} \right)\left( {\frac{{R_{j} - R^{*} }}{{R^{ - } - R^{*} }}} \right),$$
(31)

where v is the weight to maximize group utility and (1 − v) is the weight of the individual regret. Usually, v = 0.5; when v > 0.5, the index \(Q_{j}\) will tend to show majority agreement. When v < 0.5, the index \(Q_{j}\) will indicate a dominantly negative attitude.
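
Steps 1 through 4 can be sketched as a Python function. This is a minimal sketch with simplifying assumptions not stated in the source: all criteria are benefit-type (so Eqs. 25–26 use max and min directly) and the denominators in Eqs. 27, 28, and 31 are nonzero; the name `vikor` is chosen here for illustration.

```python
import numpy as np

def vikor(X, w, v=0.5):
    """VIKOR compromise ranking. X: n x m matrix (row i = criterion,
    column j = alternative, all benefit-type), w: criterion weights,
    v: group-utility weight. A lower Q means a better compromise."""
    x_star = X.max(axis=1, keepdims=True)        # Eq. (25), best per criterion
    x_neg = X.min(axis=1, keepdims=True)         # Eq. (26), worst per criterion
    d = (x_star - X) / (x_star - x_neg)          # normalized distance to best
    S = (w[:, None] * d).sum(axis=0)             # Eq. (27), utility measure
    R = (w[:, None] * d).max(axis=0)             # Eq. (28), regret measure
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())   # Eq. (31)
    return Q, np.argsort(Q)                      # ascending Q: best first
```

The best compromise alternative is the one with the smallest Q, subject to VIKOR's acceptability conditions, which are not reproduced in this sketch.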

Weighted scoring method

The weighted scoring method (WSM) is a technique used to evaluate and select software packages. Ease of use is the main advantage of this technique. Suppose m alternatives A1, A2,…, Am are evaluated against n criteria C1, C2,…, Cn.

The alternatives are fully characterized by the decision matrix scores Sij. Suppose that the weights W1, W2,…, Wn represent the importance values of the criteria. The most suitable alternative has the highest score. To calculate the final score for alternative Ai, the following equation is employed (Jadhav and Sonar 2009a, 2011):

$$S\left( {A_{i} } \right) = \sum W_{j} S_{ij}$$
(32)

where Wj is the importance value of the jth criterion; Sij is the score that measures how well alternative Ai performs on criterion Cj.
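
Eq. (32) reduces to a one-line weighted sum; a minimal Python sketch (the function name is chosen here for illustration):

```python
def weighted_score(weights, scores):
    """Eq. (32): final WSM score of one alternative, i.e., the sum of
    weight * score over all criteria."""
    return sum(w * s for w, s in zip(weights, scores))
```

For example, weights (0.5, 0.3, 0.2) and per-criterion scores (4, 2, 1) yield a final score of 0.5·4 + 0.3·2 + 0.2·1 = 2.8.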

According to Zaidan et al. (2015), Jadhav and Sonar (2011), Triantaphyllou and Lin (1996), and Whaiduzzaman et al. (2014), the characteristics of the above MCDM techniques can be summarized as follows. The WSM technique is easy to use and understandable. However, the weights of the attributes are assigned arbitrarily; thus, the task becomes difficult when the number of criteria is high. In Silva et al. (2013) and Jadhav and Sonar (2009a, b), the AHP approach was utilized for software selection because it is a flexible and powerful tool for handling both qualitative and quantitative multi-criteria problems. Furthermore, AHP procedures are applicable to individual and group decision making. However, AHP is time consuming because the mathematical calculations and the number of pairwise comparisons increase with the number of alternatives and criteria. Another problem is that decision makers need to re-evaluate the alternatives when the number of criteria or alternatives changes. Moreover, the ranking of the alternatives depends on the alternatives considered for evaluation; thus, adding or deleting alternatives can change the final rank (the rank reversal problem). The ELECTRE technique can handle both qualitative and quantitative criteria and provides a basis for a complete ordering of the different options. The VIKOR technique uses linear normalization; the values do not depend on the evaluation of a single criterion alone but also on an aggregate function that balances the distance between the ideal solution and its opposite. TOPSIS is functionally associated with problems of discrete alternatives and is one of the most practical techniques for solving real-world problems. The relative advantage of TOPSIS is its ability to identify the best alternative quickly; its major weakness is that it does not provide weight elicitation or consistency checking for judgments. From this viewpoint, TOPSIS meets the requirement of paired comparisons, and the capacity limitation may not significantly dominate the process. Hence, this method is suitable for cases with a large number of criteria and alternatives, particularly for objective or quantitative data. In a fuzzy-based approach, decision makers can use linguistic terms to evaluate alternatives, which improves the decision-making procedure by accommodating the vagueness and ambiguity in human decision making. However, computing the fuzzy appropriateness index values and ranking values for all alternatives is difficult.

The limitations of the study in this report are multifaceted. We covered the subject by reviewing the technical literature and recognized several limitations. First, the work in this paper applies only to the OSS-LMSs found in search engine databases. The list was selected in January 2014 by using several databases, including ScienceDirect, IEEE Xplore, and Web of Science. The keywords used in the search included “open source software”/“learning management system” and “open source software”/“e-learning system”, among others. The list of included software is not comprehensive but represents the active and popular projects at the time of the study, thereby supporting a manageable and valid software sample. Second, in the open source world, considerable change can be expected in the span of one-and-a-half years, including the rise and fall of projects. Moreover, more studies are required to identify the current evaluation criteria because many OSS-LMSs may be updated and/or added over the coming years.

The contributions of this paper are as follows:

  • Outlined samples of selected and active OSS-LMS packages in education with brief descriptions

  • Specified the criteria for evaluating OSS-LMS packages on the basis of two aspects, and established a crossover between them to highlight the gaps in the evaluation criteria used for OSS-LMS package evaluation and selection problems

  • Discussed the suitability of MADM/MCDM methods as a recommended future solution to the multi-criteria evaluation and selection problem of OSS-LMS packages and to the selection of the best OSS-LMS packages

Conclusions

Several aspects related to OSS-LMS evaluation and selection were explored and investigated. In this paper, comprehensive insights are discussed on the basis of the following directions: ascertaining the available OSS-LMSs from published papers; specifying the criteria for evaluating OSS-LMS packages on the basis of two aspects; and identifying and establishing a crossover between them to highlight the gaps in the evaluation criteria used for OSS-LMS package evaluation and selection problems. The suitability of selection methods for solving the multi-criteria evaluation and selection problem of OSS-LMS packages is discussed with the aim of selecting the best OSS-LMS packages. The outcomes from these directions are presented in a list of 23 active OSS-LMSs. The open issues and challenges for evaluation and selection are highlighted. Other research directions include coverage and the MADM/MCDM techniques related to the recommended solutions, which can be discussed on the basis of researchers’ opinions of the problem design and the adoption of each technique. This research direction is significant because it will help administrators and decision makers in the field of education to select the most suitable and appropriate open source LMS for their needs.