1 Introduction

Facility layout planning plays an important role in modern production systems and strongly affects the competitiveness of a firm. Most firms have to update the arrangement of their production cells because of changes in production volume, mixed-model production, and short product life cycles, as in the case of a cell phone. Moreover, innovation in production technologies often forces the facilities to change. An efficient arrangement reduces material handling costs by streamlining the material flow and decreasing the distances between cells, which in turn lowers production costs and increases the competitiveness of the organization. An effective arrangement also helps the other operations related to workflow perform better. According to the nature of the material flow data, layout problems are classified into two categories: static and dynamic. A layout in which the flow of materials between departments does not change over time is known as the static facility layout problem (SFLP). When the flow of material between departments varies over the planning horizon, it becomes the dynamic facility layout problem (DFLP). McKendall and Hakobyan [1] stated that factors such as design changes to produce new products, the addition or removal of products, the replacement and relocation of existing production tools, short product life cycles, and changes in production volume and production planning may change the material flow. The solution of a DFLP is a layout plan, which can be expressed as a series of layouts, one for each period. The layout plan is therefore determined over the planning horizon so as to minimize the total material handling cost and the rearrangement cost of the units in consecutive periods.

The basic question in the two-objective DFLP is how to choose the layout of the machines and how to assign transporters to the equipment so as to minimize the total operating cost, including the cost of material handling between production cells and the cost of moving production cells. The purpose of this study is to provide a model that answers this question.

Koopmans and Beckmann [2] were the first to define the static facility layout problem as a practical industrial problem and stated that the aim of this kind of production cell layout problem is to minimize the cost of material handling between units. The static facility layout problem has been studied extensively for more than five decades, whereas the DFLP was first studied by Rosenblatt [3] in 1986.

In general, the approaches proposed in the literature for solving dynamic facility layout problems can be classified into four groups (Kulturel-Konak [4]):

  1. Exact methods

  2. Heuristic approaches

  3. Metaheuristic approaches

  4. Hybrid approaches.

Rosenblatt [3] and Lacksonen and Enscore [5] used exact methods, while Urban [6] used a heuristic method. Conway and Venkataramanan [7] were the first to apply genetic algorithms to the DFLP. Kochhar and Heragu [8] solved the multi-level facility layout problem with genetic algorithms and tested the proposed algorithm on problems with 6, 15 and 30 departments over 5 and 10 periods. Burkard and Rendl [9] were the first to use simulated annealing (SA) for facility layout problems. Several comparative studies in the literature report that genetic algorithms perform worse than simulated annealing on these problems.

Balakrishnan et al. [10], Lee and Lee [11] and Dunker et al. [12] used hybrid genetic algorithms. Kaku and Mazzola [13], Sahin and Turkbey [14] and McKendall and Liu [15] used tabu search algorithms. Baykasoglu and Gindy [16] and McKendall et al. [17] used simulated annealing algorithms.

Azimi [18] used simulation as an efficient tool for the salesman problem and showed how simulation can solve such problems efficiently. Chen [19] provided a new solution-representation scheme; the proposed representation can be used in any metaheuristic algorithm and enhances its search capability. The performance of this representation was examined with an ant colony algorithm and led to better solutions in less computation time. Xu and Song [20] expressed the movement cost between machines in the form of phase parameters, considered the facilities as two-dimensional shapes, and optimized the resulting problem with a PSO algorithm; the efficiency of their algorithm was evaluated and confirmed on a case study.

Evans et al. [21], Grobelny [22], Raoot and Rakshit [23], Gen et al. [24], Dweiri and Meier [25], Aiello and Enea [26] as well as Deb and Bhattacharyya [27] used fuzzy data.

Chen and Rogers presented a multi-objective dynamic model to explore several aspects of facility layout planning, including time, a distance-based objective and a neighborhood-based objective, and applied ant colony optimization to solve the problem. Although both our model and that of Chen and Rogers are multi-objective, Chen and Rogers used Urban's method to develop their model and ultimately combined everything into a single total objective function.

Although a great deal of research has been done in this field, the planning methods still need improvement. According to the literature review, unequal-area facilities and budget constraints have been considered in previous studies, but the type of transportation vehicles, their limited numbers and their fixed costs have not been considered in any of them. Moreover, in almost all previous research the material handling cost and the cost of moving machinery are combined into a single objective, although each of these costs may have a different importance for decision-makers. It is therefore better to consider them as two separate objective functions; this formulation enables decision-makers to impose their own preferences. Furthermore, because of the single-objective formulation, most researchers used GA, TS and SA methods or hybrid algorithms for solving.

Therefore, in this paper we formulate the DFLP as a multi-objective problem, include the cost of transporters in the objective functions, and simultaneously consider budget limits and the type of transportation vehicles as constraints. Despite the fame of the quadratic assignment problem, it is difficult to solve with traditional optimization algorithms (Garey and Rogers [28]). As Francis et al. [29] stated, the quadratic assignment problem is computationally intractable for more than 15 to 20 departments. To find the optimal solution of a DFLP with N production cells and T planning periods, \((N!)^{T}\) layout plans must be evaluated; for example, even a small instance with N = 6 and T = 10 already has \((6!)^{10} \approx 3.7 \times 10^{28}\) candidate plans. Therefore, we design and apply a multi-objective metaheuristic method in this study.

2 Problem statement and modeling

Facility layout problems are often modeled as a quadratic assignment problem.

The dynamic facility layout problem arises when the material flow changes, and these changes are predicted to occur in the future. The future is divided into a number of time periods, and the problem is to plan the arrangement for these periods, so a solution takes the form of a layout plan. A layout plan for the DFLP can be expressed as a series of layouts, each associated with one period. Layout planning is therefore carried out over a planning horizon and minimizes the total material handling cost and the rearrangement cost in consecutive periods. When the facility layout is re-planned in consecutive periods and production cells and machinery are moved from one location to another, a rearrangement cost arises. Rearrangement can also generate further costs; for example, special equipment may be required, and workers may have to be hired or laid off.

2.1 Decision variables and parameters

The decision variables, symbols and parameters of the model are as follows:

2.1.1 Symbols and parameters of the problem


(A) Indices:

  • i, k = facility index; i, k = 1, 2, …, N

  • j, l = location index; j, l = 1, 2, …, N

  • m = transporter index; m = 1, 2, …, M

  • t = period index; t = 1, 2, …, T

  • N = number of facilities and locations

  • T = number of planning periods

  • M = number of transporter types

(B) Parameters:

  • F_{t,i,k} = flow of materials between facility i and facility k at period t

  • C_{t,i,k,m} = material handling cost between facility i and facility k by transporter m at period t

  • FC_{t,m} = fixed cost of transporter m at period t

  • A_{t,i,j,l} = cost of moving facility i from location j to location l at period t

  • D_{j,l} = distance between location j and location l

  • LB_t = budget remaining from period t to period t + 1

  • B_t = available budget at period t

  • AB_t = budget allocated to period t

  • A_TR_{t,m} = maximum number of available transporters of type m at period t

2.1.2 Decision variables

$$X_{t,i,j} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{if}}\;{\text{facility}}\;i\;{\text{is}}\;{\text{assigned}}\;{\text{to}}\;{\text{location}}\;j\;{\text{at}}\;{\text{period}}\;t} \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$
$$Y_{t,i,k,m} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{if}}\;{\text{transporter}}\;m\;{\text{is}}\;{\text{selected}}\;{\text{to}}\;{\text{move}}\;{\text{material}}\;{\text{from}}\;{\text{facility}}\;i\;{\text{to}}\;{\text{facility}}\;k\;{\text{at}}\;{\text{period}}\;t} \hfill \\ 0 \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$

The assumptions of this research are as follows:

  1. Each location can hold only one facility.

  2. Each facility can occupy only one location.

  3. The amount of material flow is deterministic and known in advance for all periods.

  4. The movement cost of every facility is deterministic and known in each period.

  5. The numbers of facilities and locations are equal.

  6. Each facility can be placed in any location.

  7. The available transporters are predetermined.

2.2 Modeling

$$Min\;Z_{1} = \mathop \sum \limits_{t = 1}^{T} \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{j = 1}^{N} \mathop \sum \limits_{k = 1}^{N} \mathop \sum \limits_{l = 1}^{N} \mathop \sum \limits_{m = 1}^{M} D_{j,l} *F_{t,i,k} *X_{t,i,j} *X_{t,k,l} *Y_{t,i,k,m} *C_{t,i,k,m}$$
(1)
$$Min Z_{2} = \mathop \sum \limits_{t = 2}^{T} \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{j = 1}^{N} \mathop \sum \limits_{l = 1}^{N} A_{tijl} *X_{t - 1,ij} *X_{til} + \mathop \sum \limits_{t = 1}^{T} \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{k = 1}^{N} \mathop \sum \limits_{m = 1}^{M} Y_{t,i,k,m} *FC_{tm}$$
(2)

Subject to:

$$\mathop \sum \limits_{j = 1}^{N} X_{t,i,j} = 1\quad \begin{array}{*{20}l} {i = 1,2, \ldots ,N;} \hfill \\ {t = 1,2, \ldots ,T} \hfill \\ \end{array}$$
(3)
$$\mathop \sum \limits_{i = 1}^{N} X_{t,i,j} = 1\quad \begin{array}{*{20}l} {j = 1,2, \ldots ,N;} \hfill \\ {t = 1,2, \ldots ,T } \hfill \\ \end{array}$$
(4)
$$\begin{aligned} & \mathop \sum \limits_{m = 1}^{M} Y_{t,i,k,m} = R_{t,i,k} \\ & R_{t,i,k} = \left\{ {\begin{array}{*{20}l} {1\quad {\text{if}}\;F_{t,i,k} > 0 } \hfill \\ {0\quad {\text{otherwise}}} \hfill \\ \end{array} } \right.\quad \begin{array}{*{20}l} {i = 1,2, \ldots ,N - 1;} \hfill \\ {k = i + 1, \ldots ,N;} \hfill \\ {t = 1,2, \ldots ,T} \hfill \\ \end{array} \\ \end{aligned}$$
(5)
$$\mathop \sum \limits_{i = 1}^{N - 1} \mathop \sum \limits_{k = i + 1}^{N} Y_{t,i,k,m} \le A \_ TR_{t,m} \quad \begin{array}{*{20}l} {m = 1,2, \ldots ,M} \hfill \\ { t = 1,2, \ldots ,T} \hfill \\ \end{array}$$
(6)
$$Y_{t,i,k,m} = Y_{t,k, i,m} \quad \begin{array}{*{20}l} {i,k = 1,2, \ldots ,N} \hfill \\ {t = 1,2, \ldots ,T} \hfill \\ {m = 1,2, \ldots ,M} \hfill \\ \end{array}$$
(7)
$$LB_{t} = B_{t} - \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{j = 1}^{N} \mathop \sum \limits_{l = 1}^{N} A_{tijl} *X_{t - 1,ij} *X_{til} - \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{k = 1}^{N} \mathop \sum \limits_{m = 1}^{M} Y_{t,i,k,m} *FC_{tm}, \quad t = 1,2, \ldots ,T$$
(8)
$$B_{t} = AB_{t} + LB_{t - 1} ,\quad t = 1,2, \ldots ,T$$
(9)
$$\mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{j = 1}^{N} \mathop \sum \limits_{l = 1}^{N} A_{tijl} *X_{t - 1,ij} *X_{til} + \mathop \sum \limits_{i = 1}^{N} \mathop \sum \limits_{k = 1}^{N} \mathop \sum \limits_{m = 1}^{M} Y_{t,i,k,m} *FC_{tm} \le B_{t} ,\quad t = 1,2, \ldots ,T$$
(10)
$$X_{t,i,j} \in \left\{ {0,1} \right\}\quad \begin{array}{*{20}l} {i,j = 1,2, \ldots ,N} \hfill \\ {t = 1,2, \ldots ,T} \hfill \\ \end{array}$$
(11)
$$Y_{t,i,k,m} \in \left\{ {0,1} \right\}\quad \begin{array}{*{20}l} {i,k = 1,2, \ldots ,N} \hfill \\ {m = 1,2, \ldots ,M} \hfill \\ {t = 1,2, \ldots ,T} \hfill \\ \end{array}$$
(12)
$$LB_{t} ,B_{t} ,AB_{t} \ge 0,\quad t = 1,2, \ldots ,T$$
(13)

The first objective function (1) minimizes the total cost of carrying material between cells. The second objective function (2) is associated with the costs incurred at the beginning of each period and minimizes the costs of machinery movement together with the fixed costs of transporters. Constraint (3) ensures that every facility is placed in exactly one location, and constraint (4) ensures that each location holds exactly one facility. Constraint (5) states that if there is a material flow between a pair of machines, exactly one transporter must be assigned to it. Constraint (6) controls the number of transporters of each type used in each period. Constraint (7) ensures that the same transporter type serves the flow in both directions between each pair of machines. Constraint (8) states that the budget transferred to the next period equals the current budget minus the relocation and transportation costs of the current period. Constraint (9) shows that the available budget of every period equals the budget allocated to that period plus the budget remaining from the previous period. Constraint (10) imposes the budget ceiling for each period. Constraints (11), (12) and (13) define the domains of the decision variables. A sketch of how the two objectives can be evaluated for a candidate layout plan is given below.
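To make the two objective functions concrete, the following minimal sketch (Python/NumPy; the array names and the permutation encoding are illustrative assumptions, not part of the original formulation) evaluates Eqs. (1) and (2) for a candidate layout plan:

```python
import numpy as np

def evaluate_plan(perm, Y, D, F, C, FC, A):
    """Evaluate the two objectives (Eqs. 1-2) for a candidate layout plan.

    perm[t, i]    : location assigned to facility i in period t (the X variables)
    Y[t, i, k, m] : 1 if transporter m carries the flow from facility i to k in period t
    D[j, l]       : distance between locations j and l
    F[t, i, k]    : material flow between facilities i and k in period t
    C[t, i, k, m] : unit handling cost between i and k with transporter m in period t
    FC[t, m]      : fixed cost of transporter m in period t
    A[t, i, j, l] : cost of moving facility i from location j to l at period t
                    (assumed 0 when j == l, i.e. the facility stays put)
    """
    T, N = perm.shape
    M = FC.shape[1]

    # Objective 1: material handling cost (Eq. 1)
    z1 = 0.0
    for t in range(T):
        for i in range(N):
            for k in range(N):
                for m in range(M):
                    z1 += (D[perm[t, i], perm[t, k]] * F[t, i, k]
                           * Y[t, i, k, m] * C[t, i, k, m])

    # Objective 2: rearrangement cost + fixed transporter cost (Eq. 2)
    z2 = 0.0
    for t in range(1, T):
        for i in range(N):
            z2 += A[t, i, perm[t - 1, i], perm[t, i]]
    z2 += float(np.sum(Y * FC[:, None, None, :]))

    return z1, z2
```

With this encoding, constraints (3) and (4) are satisfied by construction, because the layout of each period is a permutation of the locations.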

After modeling, the validity of the model must be examined. For this purpose, a small example was solved with Lingo software (version 8). The model ran without errors, the solution was reasonable, and none of the constraints was violated. The example comprised 6 departments, 6 locations, 3 types of transporter and a planning horizon of two periods.

In this paper, the maximum time allowed for solving a problem exactly was set to 1000 s, and the examples were solved with an exact algorithm. It turned out that for a problem with 7 departments and 3 periods no solution could be obtained within 1000 s. Therefore, to solve problems with more than 7 departments and 3 periods in an acceptable time, heuristic and metaheuristic methods must be developed. Accordingly, a new metaheuristic algorithm is proposed for the dynamic facility layout problem; the proposed algorithm solves the model efficiently by combining the principle of metal annealing with cloud theory.

3 Solution approaches

The optimization algorithms presented in this paper are the following:

  1. Non-dominated Sorting Genetic Algorithm (NSGA-II)

  2. Non-dominated Ranked Genetic Algorithm (NRGA)

  3. Multi-Objective Cloud Simulated Annealing Algorithm (MOCSA).

In this research, a binary tournament selection strategy is used to select two parents from the population. For each parent, two members of the population are first chosen at random and their ranks are compared; the member with the lower rank is selected. If they have the same rank, they are compared by crowding distance and the one with the larger crowding distance is chosen as a parent. A minimal sketch of this selection rule is given below.
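As a minimal sketch of this rule (assuming each individual already carries its non-domination rank and crowding distance under the illustrative attribute names `rank` and `crowding`):

```python
import random

def binary_tournament(population):
    """Pick one parent by binary tournament on (rank, crowding distance)."""
    a, b = random.sample(population, 2)
    if a.rank != b.rank:
        return a if a.rank < b.rank else b
    # same front: prefer the less crowded (larger crowding distance) solution
    return a if a.crowding > b.crowding else b
```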

The population is ranked in the following two steps:

  1. Fast non-dominated sorting.

  2. Crowding distance: a smaller crowding distance indicates that the solutions are more densely packed around that point. A computation sketch is given after this list.
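The crowding-distance computation itself can be sketched as follows (a generic NSGA-II-style routine, not code from the paper; a front is given as a list of objective-value tuples):

```python
def crowding_distance(front):
    """Crowding distance of the solutions in one front; boundary solutions get infinity."""
    n = len(front)
    if n == 0:
        return []
    num_obj = len(front[0])
    dist = [0.0] * n
    for m in range(num_obj):
        order = sorted(range(n), key=lambda idx: front[idx][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_max == f_min:
            continue
        for pos in range(1, n - 1):
            dist[order[pos]] += (front[order[pos + 1]][m]
                                 - front[order[pos - 1]][m]) / (f_max - f_min)
    return dist
```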

The operators used to produce the next generation are:

  1. Crossover operator

  2. Mutation operator.

3.1 Non-dominated sorting genetic algorithm (NSGA-II)

This algorithm was presented by Deb et al. in 2002. To perform NSGA-II, the following steps are carried out [30]; a compact sketch of the environmental-selection step (items 1–4) is given after the list:

  1. The parent population (Pt) and the offspring population (Qt) are merged into a combined population (Rt). The N best solutions of Rt must then be selected to form the new parent population Pt+1.

  2. Non-dominated sorting is performed on Rt and its fronts (Fi: i = 1, 2, …) are identified.

  3. The new population is filled starting with the solutions of the first front, F1. Whole fronts are added as long as the population size remains below N.

  4. If adding the i-th front would exceed N, i.e. |Fi| + |Pt+1| > N, the crowding distance of every solution in Fi is calculated and the (N − |Pt+1|) solutions with the largest crowding distance (lowest density) are added to the population.

  5. The new offspring population (Qt+1) is created by applying the selection, crossover and mutation operators to Pt+1.
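The environmental-selection step described in items 1–4 can be sketched as follows (the helper `fast_non_dominated_sort` and the `objectives`/`crowding` attributes are hypothetical names introduced only for illustration, not the authors' implementation):

```python
def environmental_selection(parents, offspring, pop_size):
    """One NSGA-II survival step: merge, sort into fronts, fill by crowding."""
    combined = parents + offspring                      # R_t = P_t U Q_t
    fronts = fast_non_dominated_sort(combined)          # assumed helper: list of fronts
    next_pop = []
    for front in fronts:
        dists = crowding_distance([ind.objectives for ind in front])
        for ind, d in zip(front, dists):
            ind.crowding = d
        if len(next_pop) + len(front) <= pop_size:
            next_pop.extend(front)                      # take the whole front
        else:
            # truncate the last front: keep the solutions with largest crowding distance
            front.sort(key=lambda ind: ind.crowding, reverse=True)
            next_pop.extend(front[: pop_size - len(next_pop)])
            break
    return next_pop
```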

In this research, the chromosome structure is designed so that every generated solution automatically satisfies the layout and transporter constraints. The budget constraints, however, require a different treatment: when the chromosomes are evaluated, violations are penalized, so the objective function is turned into an appropriate fitness function by the linear transformation:

$${\text{Z}}_{\text{i}} = {\text{Z}}_{\text{i}} + {\text{a}}*{\text{b}}$$
(14)

In this research, the stopping criterion of the algorithm is reaching a predefined maximum number of iterations. This number is determined in the parameter-tuning step using the Taguchi design of experiments.

3.2 Non-dominated ranked genetic algorithm (NRGA)

Al Jadaan et al. [31] used a modified roulette-wheel selection in which each solution is assigned a value equal to its rank in the population. NRGA differs from NSGA-II in its selection strategy, in the way the population is sorted and in the selection for the next generation.

First, the population is sorted into non-dominated fronts so that the first front contains the best solutions. If, for example, the population has 5 non-dominated fronts, the first front receives score 5 and the fifth front receives score 1; a higher score therefore indicates a better front. After ranking the fronts, the solutions within each front are ranked by crowding distance: the solution with the largest crowding distance gets the highest rank and the solution with the smallest crowding distance gets rank 1. In the selection step, a rank-based roulette wheel is used, which selects better members with higher probability for reproducing the next generation.

First, the roulette wheel is defined on the two intervals [0, S1] and [0, S2], with \(S_{1} = \sum\nolimits_{i = 1}^{n} {p_{i} }\) and \(S_{2} = \sum\nolimits_{j = 1}^{m} {p_{0j} }\). The fronts and the solutions within the selected front occupy portions of these two intervals in proportion to their selection probabilities. Two random numbers are then drawn: the first is used to select a front on [0, S1] and the second to select one of the solutions of that front on [0, S2]. A sketch of this two-stage wheel is given below.
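A sketch of this two-stage, rank-based roulette wheel (assuming the fronts are already ordered from best to worst and each solution carries an illustrative `crowding` attribute) could look like this:

```python
import random

def rank_roulette(items, scores):
    """Spin a roulette wheel whose slices are proportional to the given scores."""
    total = sum(scores)
    pick = random.uniform(0.0, total)
    acc = 0.0
    for item, s in zip(items, scores):
        acc += s
        if pick <= acc:
            return item
    return items[-1]

def nrga_select(fronts):
    """Two-stage rank-based selection: first a front, then a solution in it."""
    n_fronts = len(fronts)
    front_scores = [n_fronts - idx for idx in range(n_fronts)]   # best front -> highest score
    front = rank_roulette(fronts, front_scores)
    ranked = sorted(front, key=lambda ind: ind.crowding)          # smallest crowding -> rank 1
    sol_scores = list(range(1, len(ranked) + 1))
    return rank_roulette(ranked, sol_scores)
```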

3.3 Multi-objective cloud simulated annealing algorithm (MOCSA)

In SA, in addition to accepting solutions that improve the objective function, worse solutions are accepted with a certain probability. This probability function is shown in Eq. (15):

$$P_{SA} = e^{{\frac{ - \Delta }{T}}} ;\quad \Delta = \frac{{f(w_{n} ) - f(w)}}{{f(w_{n} )}} \times 100$$
(15)

where ∆ is the relative change of the objective function and T is the temperature. If PSA is larger than a random number between zero and one, the worse solution is accepted. In general, each iteration of the SA algorithm creates a neighbouring state s′ of the current state s and, based on a probability, either moves from s to s′ or stays in s. This step is repeated until a near-optimal solution is reached or the maximum number of iterations has been performed. In the multi-objective case, several initial solutions are first generated and these solutions search the solution space in parallel. At the end of each iteration, all solutions are pooled and the dominated solutions are removed; the remaining non-dominated solutions are used as the initial solutions of the next iteration.

Cloud theory is an extension of the membership function of fuzzy theory, obtained from the transformation of uncertainty between quantity and quality based on the time-value concept. In physical annealing, as the temperature drops, the molecules move randomly from large-scale to small-scale motion; this behaviour is easy to describe linguistically but difficult to simulate on a computer. Because cloud theory can express such a qualitative concept and present it in numerical form, it is used here to guide the search; in particular, it is used to generate a continuous annealing temperature. Cloud theory has a random character but tends towards stability: although the annealing temperature varies randomly, its fluctuations reduce the variability of the search. Under these conditions, when the system is in equilibrium, the generated temperatures accumulate around a single point.

The solution representation of the proposed MOCSA algorithm is similar to that of the proposed NSGA-II.

The neighbourhood-structure generator produces the next states, and the movement of the algorithm is determined by comparing the cost of the current point with that of the next point. The move generator changes the solution from the current state to a neighbouring state. This is done by a 2-opt swap: first, one substring representing the layout plan of one period is selected; then two locations in it are chosen at random and the facilities assigned to them are exchanged. A sketch of this move is given below.
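A minimal sketch of this neighbourhood move, assuming a layout plan is encoded as one permutation of facilities over locations per period (an illustrative encoding consistent with the description above):

```python
import random

def neighbor(plan):
    """2-opt style move: swap the facilities of two random locations in one random period.

    plan[t][j] is the facility placed at location j in period t.
    """
    new_plan = [list(period) for period in plan]       # copy, keep the original intact
    t = random.randrange(len(new_plan))                # select the substring of one period
    j, l = random.sample(range(len(new_plan[t])), 2)   # two locations chosen at random
    new_plan[t][j], new_plan[t][l] = new_plan[t][l], new_plan[t][j]
    return new_plan
```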

The components of the cooling schedule are:

  A. Initial temperature: a large number of random solutions are generated and their objective values computed; the standard deviation of these values is then used to set the initial temperature. In the proposed algorithm, 1.5 times the standard deviation of the initial solutions is used as the initial temperature. Note that a separate temperature is maintained for each objective.

  B. Final temperature.

  C. Temperature reduction at each stage: the temperature reduction can usually be achieved by a simple linear relation:

    $$T_{k} = \alpha T_{k - 1 }$$
    (16)

    The parameter α is determined through experimental analysis.

    Of course, as stated earlier, in the SA algorithm based on cloud theory the temperature at each level is not constant and should undergo small changes. In this paper the following relation is used to perturb the temperature around its base value; a code sketch of this update follows the list:

    $$T_{A} = T_{k} + RAND\left[ { - 0.1*T_{k} , 0.1*T_{k} } \right]$$
    (17)

    In this equation, \(T_{k}\) is the base temperature at a given equilibrium level and \(T_{A}\) is the temperature used in the acceptance function. The RAND[a, b] function returns a random number in the range a to b.

  D. Iterations at each temperature: the number of iterations at each temperature is determined through experimental analysis.
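Putting Eqs. (16) and (17) together, the temperature update used at each equilibrium level can be sketched as follows (the value of alpha is only an illustrative default; in the paper it is tuned experimentally):

```python
import random

def next_temperature(t_prev, alpha=0.9):
    """Cooling step of Eqs. (16)-(17): geometric cooling plus a cloud-style
    random perturbation of +/-10% around the base temperature."""
    t_base = alpha * t_prev                                        # Eq. (16)
    t_used = t_base + random.uniform(-0.1 * t_base, 0.1 * t_base)  # Eq. (17)
    return t_base, t_used
```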

The probability of accepting a worsening move is calculated by Eq. (18):

$$P_{1} = \exp \left( {\frac{{ - \Delta C_{1} }}{{t_{1} }}} \right),\quad P_{2} = \exp \left( {\frac{{ - \Delta C_{2} }}{{t_{2} }}} \right),\quad P = \frac{{P_{1} + P_{2} }}{2}$$
(18)

where ΔC1 and ΔC2 are the differences between the current solution and the neighbouring solution in the first and second objective functions, t1 and t2 are the current temperatures of the system for each objective, r is a random number between zero and one, and P is the probability of moving to the new solution.

The move to the new solution is made if the new solution is better than the current one or if the value of the movement probability function is greater than a random number in the range (0, 1]; otherwise, another new solution is generated and evaluated. The process continues step by step until the stopping criterion is met. As in the proposed NSGA-II, the stopping criterion is a predefined number of iterations. A sketch of the acceptance test is given below.
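A sketch of this acceptance test for a worsening move, following Eq. (18) (improving moves are assumed to be accepted before this test is reached):

```python
import math
import random

def accept(delta_c1, delta_c2, t1, t2):
    """Two-objective acceptance test of Eq. (18).

    delta_c1, delta_c2: objective-value increases of the worsening move;
    t1, t2: current temperatures for the two objectives.
    """
    p1 = math.exp(-delta_c1 / t1)
    p2 = math.exp(-delta_c2 / t2)
    p = (p1 + p2) / 2.0          # combined acceptance probability
    return p > random.random()   # accept the worse neighbour with probability p
```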

To increase the efficiency of the three proposed algorithms for the proposed model, their input parameters are set to their best values using a parameter-tuning method.

3.4 Setting parameters and calculation results

For parameter tuning, the controllable factors, uncontrollable factors, performance-evaluation criterion and the levels of every factor are first determined; then an appropriate orthogonal array is selected, the optimal levels are determined, and the data are analysed using the signal-to-noise ratio.

Table 1 shows the search ranges of the input-parameter levels of the three algorithms.

Table 1 Controllable factors and their levels

A full factorial design for the above four factors requires 81 tests (treatment combinations). Such a design is not economical in terms of cost and time and, statistically, it is not necessary to test all combinations of factor levels, so a fractional design is used instead. To select an appropriate orthogonal array, the required number of degrees of freedom must be calculated: one degree of freedom for the overall mean and two degrees of freedom for each of the four three-level factors, giving a total of 9 degrees of freedom, for which the L27 orthogonal array is appropriate. The designed arrays and the experimental results were computed for the NSGA-II, NRGA and MOCSA algorithms. The response variable used for the proposed model is the number of solutions on the first Pareto front of each algorithm (NOS).

The aim of these experiments is to find the optimal combination of control-factor levels. To this end, the performance measure proposed by Taguchi, the S/N ratio, is used as the response variable. Since the response variable should be as large as possible, the "larger is better" form of the S/N ratio is used:

$$\eta = \left( {\frac{S}{N}} \right) = - 10\log \left( {\frac{1}{n}\mathop \sum \limits_{i} \frac{1}{{y_{i}^{2} }}} \right)$$
(19)
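For illustration, Eq. (19) corresponds to the following small routine (the sample values in the comment are invented, not experimental data):

```python
import math

def sn_larger_is_better(values):
    """Taguchi 'larger is better' signal-to-noise ratio (Eq. 19).

    `values` are the responses (here, NOS) observed for one factor-level
    combination across its replications.
    """
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y ** 2) for y in values) / n)

# example call with illustrative numbers only: sn_larger_is_better([12, 15, 14])
```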

In the tables produced by the software, the way the S/N values change at the different levels of each algorithm parameter is studied, and the levels at which S/N reaches its maximum are selected as the optimal levels; their values are given in Table 2.

Table 2 Ideal levels of factors for the proposed algorithms

In this study, because of the multi-objective structure of the proposed model, two curves are used to illustrate the convergence of the proposed solution approaches. The first curve shows the convergence of the Pareto front of the best approach (MOCSA) over the prescribed iterations. The second curve compares the convergence of all solution methods in terms of optimality: since both objectives are minimized, the method that finds the solutions with the lowest costs is preferable. For this purpose, a sample problem is generated with the following data:

  • 6 Departments and 10 periods;

  • Distance matrix (rows and columns indexed by locations 1–6)

$$D = \begin{pmatrix} 0 & 1 & 2 & 1 & 2 & 3 \\ 1 & 0 & 1 & 2 & 1 & 2 \\ 2 & 1 & 0 & 3 & 2 & 1 \\ 1 & 2 & 3 & 0 & 1 & 2 \\ 2 & 1 & 2 & 1 & 0 & 1 \\ 3 & 2 & 1 & 2 & 1 & 0 \\ \end{pmatrix}$$

  • Flow of materials matrices (ten 6 × 6 matrices, one per period, shown two per block; rows and columns indexed by facilities 1–6)

```
         1     2     3     4     5     6 |     1     2     3     4     5     6
  1      0    90   689   194   165   494 |     0   257  1632   330   117   285
  2    668     0  1324   811   241   206 |   159     0  1309   297   803   404
  3    631   387     0   125   281   375 |    98    82     0   271   222   383
  4     80   495   615     0   222   221 |   110   404  1174     0   750   386
  5    276   204  1127   490     0   676 |    73   507  1679   190     0   107
  6    109   409  1780   394   200     0 |   152   487   355   646   315     0

  1      0  1348   490   447   186   169 |     0   159  1103   218   297    95
  2    625     0    74   307   777   326 |   631     0  1618    95   253   109
  3    114  1645     0   288   975    68 |   552   213     0   432   397   141
  4    156   578   447     0   554   212 |   418   122   797     0   108   495
  5    353   732   118   373     0   283 |   115   154  1610   425     0   158
  6    328  1071   387   352   199     0 |   167   214  2092   471   323     0

  1      0   315   456  2340   187    73 |     0   375   319   558   745   183
  2    581     0   195  2370   162   207 |   703     0   209   789   428   502
  3    431   179     0  1090   233   248 |   496   234     0   481   109   508
  4    301    58    56     0   124   170 |   237  1008   439     0   508   451
  5    173   286   396   575     0   189 |   533   848   394   570     0    96
  6    123   159   143  1753   411     0 |   288   202   386   729   653     0

  1      0  1112   505   422   414   132 |     0   191  1623   264   433    90
  2    627     0   560    99   227    86 |   422     0   455   240   101   418
  3    373  2007     0   235   384   205 |   269   127     0   131   339   584
  4    482  1638   262     0   233   129 |   275   272   834     0   477   551
  5    223  1196   520    55     0    75 |   434   326  1526   810     0   569
  6    200   782   271   292   235     0 |   276   327  1040   245   331     0

  1      0   191   390   239   215   107 |     0   379   141   116   321    39
  2   1868     0   126   448   271   108 |  1167     0   194   186   434   224
  3   1870   121     0   116   256    19 |  1399    99     0   498   247   205
  4    517   249   574     0   168   111 |  1718   289   308     0   281    86
  5   1701   172   249   457     0    91 |  2474   127   122   180     0    51
  6   1761   317   482   471   318     0 |  1466   117   142   568   404     0
```

  • Handling cost = [898 911 627 538 738 977];

  • Number of handling system types: 3;

  • Number of handling system 1: 14;

  • Number of handling system 2: 9;

  • Number of handling system 3: 4;

  • Variable cost = rand (0–10);

  • Fixed cost = 500 + rand (0–100)

  • Budget = 5000 + rand (0–1000);

Figure 1 shows the convergence of the best solution approach (MOCSA) to the minimal Pareto front after the prescribed iterations. Three Pareto fronts are illustrated in three colours: the blue front is the first, the green front is the second and the red front is the third. After the required iterations, the Pareto front converges to the red front, since the model is of minimization type; the convergence of the method is thus validated.

Fig. 1
figure 1

The Pareto solutions of MOCSA algorithm for a test problem

In addition, the three proposed solution approaches are compared in terms of convergence to the optimal front. Figure 2 shows the Pareto fronts of the proposed methods; the Pareto front found by MOCSA performs better than those of the other methods because it reaches the front with the lowest costs.

Fig. 2
figure 2

Comparison of Pareto front in problem number 1

4 Results analysis

In this section we analyse the computational results of the presented solution methods. To this end, 32 sample problems were solved. Four comparison criteria are then introduced to evaluate the performance of the algorithms, and the proposed algorithms are compared with each other on these indices. The spacing, maximum spread, running time and number of Pareto solutions measures are used for the analysis and comparison.

4.1 Comparison criteria

  1. Spacing criterion: an algorithm whose final non-dominated solutions have a smaller spacing is better; a computation sketch of this metric and the maximum-diversity metric follows the list.

    $$S = \sqrt {\frac{1}{n - 1}\sum\nolimits_{i = 1}^{n} {\left( {d_{i} - \bar{d}} \right)^{2} } }$$
    (20)
  2. Maximum diversity (maximum spread): in the bi-objective case, this criterion equals the Euclidean distance between the two extreme solutions of the front in objective space. The larger this criterion, the better.

    $$D = \sqrt {\sum\nolimits_{j = 1}^{m} {(\hbox{max} f_{i}^{j} - \hbox{min} f_{i}^{j} )^{2} } }$$
    (21)
  3. Number of Pareto solutions (NOS): the NOS criterion counts the Pareto-optimal solutions found by each algorithm.

  4. Algorithm running time criterion.
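As a minimal sketch of the two distance-based metrics (Eqs. 20 and 21), assuming d_i is the minimum Manhattan distance from solution i to any other member of the front, a common convention that the paper does not state explicitly:

```python
import math

def spacing(front):
    """Spacing metric of Eq. (20); `front` is a list of objective tuples."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / n
    return math.sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))

def max_spread(front):
    """Maximum diversity of Eq. (21): Euclidean distance between the extreme
    objective values of the front."""
    num_obj = len(front[0])
    return math.sqrt(sum((max(p[m] for p in front) - min(p[m] for p in front)) ** 2
                         for m in range(num_obj)))
```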

These criteria were calculated for the Pareto-based multi-objective algorithms on all generated test problems; the results are shown in Tables 3, 4, 5 and 6 and Figs. 3, 4, 5 and 6. To evaluate the algorithms, all results were first normalized using the relative percentage deviation (RPD), which measures the distance of each algorithm's result from the best result obtained for that problem.

Table 3 Results and RPD values for spacing criteria
Table 4 Results and RPD values for maximum diversity criteria
Table 5 Results and RPD values for number of Pareto’s solution criteria
Table 6 Results and RPD values for running time criteria
Fig. 3
figure 3

Efficiency comparison of the proposed algorithms in a spacing criterion

Fig. 4
figure 4

Efficiency comparison of the proposed algorithms in the maximum spread criterion

Fig. 5
figure 5

Efficiency comparison of the proposed algorithms in the number of Pareto’s solutions criterion

Fig. 6
figure 6

Efficiency comparison of run time of proposed algorithms

The RPD is calculated according to the following formula:

$$RPD_{ij} = \left| {\left( {sol_{ij} {-}sol_{j,best} } \right)} \right|/sol_{j,best}$$
(22)

where i is the algorithm index and j the problem index. Tables 3, 4, 5 and 6 show the RPD values for all algorithms.
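For illustration, Eq. (22) can be applied to the results of the three algorithms on one problem as follows (a sketch; the choice of the reference value sol_best depends on whether the criterion is of the "larger is better" or "smaller is better" type):

```python
def rpd_values(results, larger_is_better=False):
    """RPD values of Eq. (22) for several algorithms on one problem.

    `results` maps an algorithm name to its raw metric value; the reference
    sol_best is the largest value for 'larger is better' metrics (NOS, maximum
    spread) and the smallest value otherwise (spacing, running time).
    """
    best = max(results.values()) if larger_is_better else min(results.values())
    return {alg: abs(val - best) / abs(best) for alg, val in results.items()}
```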

Figure 3 shows the performance of the algorithms on the spacing index. Since this criterion should be as small as possible, the NSGA-II algorithm has the best performance, although in some cases MOCSA also produced high-quality results. Figure 4 shows the efficiency of the algorithms on the maximum spread index; on this index none of the algorithms dominates the others. Figure 5 shows the performance of the proposed algorithms on the NOS index: in almost all samples NSGA-II performs better and finds the largest number of solutions on the first Pareto front. Figure 6 shows the superiority of the proposed algorithm (MOCSA) over NSGA-II and NRGA in running time: for small problems MOCSA performs the same as the other two algorithms, but as the problem size grows its advantage increases markedly, especially with respect to NSGA-II.

For a more precise evaluation and comparison, statistical analyses, in particular analysis of variance, were used: for every criterion the algorithms were compared by an ANOVA performed in the software and the results were analysed. A p value of less than 0.05 indicates a significant difference between the responses of two algorithms on that criterion; otherwise, there is no significant difference between their performances. According to the software output of the analysis of variance, the p values are less than 0.05 for all criteria, so there are significant differences between the algorithms on the spacing, diversity, NOS and running-time criteria.

To determine the relative efficiency of the algorithms on the indices that show a significant difference, 95% confidence-interval tests were also used; the results are shown as confidence intervals in Figs. 7, 8, 9 and 10. In interpreting these charts, note that an algorithm performs better on an index when its RPD is smaller, i.e. closer to zero.

Fig. 7
figure 7

The results of variance analysis for the spacing indicator

Fig. 8
figure 8

The results of variance analysis for the maximum spread

Fig. 9
figure 9

Results of variance analysis to Pareto’s solution number index

Fig. 10
figure 10

The results of variance analysis for run time index

As shown in Fig. 7, the mean results of the NSGA-II and MOCSA algorithms are close to each other but far from the mean results of NRGA. According to this analysis, NSGA-II and MOCSA therefore perform better on the spacing index.

According to Fig. 8, the 95% confidence-interval test for the maximum spread criterion shows that the proposed MOCSA algorithm performs better than the other two algorithms.

Also, according to Fig. 9, NSGA-II performs much better than the other two algorithms in the number of solutions on the optimal Pareto front, and the proposed algorithm performs slightly better than NRGA.

According to the analysis-of-variance results in Fig. 10, the proposed algorithm performs much better than the other two algorithms on the running-time index.

The applicability of the research in this paper can be discussed from two points of view, the model and the methodology, as follows:

The proposed model Facility layout problems are often modeled as a quadratic assignment problem (QAP), which is used not only for planning the layout of a firm but also for planning the layout of hospitals and universities, and even keyboard layouts. According to Francis et al. [29], QAP computation is intractable for problems with more than 15 to 20 departments. Demand changes from period to period, so the model has to be developed for the dynamic case, in which the flow of materials is predicted for each period (Balakrishnan and Cheng [10]) and the transportation costs are determined accordingly. To determine these costs exactly, the different types of handling machines must be considered. In this model we create the conditions for decision-makers to have all the options in front of them so that they can easily make their own choices based on their goals. Changing a layout requires a large budget, so including the budget in the modeling and in the solution procedure is one of the realistic aspects of the problem. Moreover, the rearrangement cost and the material handling cost are different matters and should be treated separately in managerial decisions, so we consider them as two distinct objective functions in the proposed model. If these two types of cost were combined while not having the same monetary scale, the larger cost would overshadow the smaller one and the latter would have no significant impact on the decision-makers' view. In addition, with this model decision-makers can choose the best carriers by analysing the variable and fixed costs of the handling systems over the planning horizon. For example, a robot may have a higher fixed cost than a forklift but a much lower variable cost, so managers and decision-makers can decide easily according to their plans and priorities.

The proposed methodology The DFLP is an intractable problem, so developing a metaheuristic algorithm is necessary. Simulated annealing is a local search algorithm that can escape local optima. This metaheuristic has become more widely used in recent years because it is easy to apply and can climb out of local optima through hill-climbing moves; it is also applicable to discrete problems and to some continuous problems. Although the annealing temperature changes randomly, and this discontinuity reduces the variety of the search, cloud theory is used to generate a continuous annealing temperature. Cloud theory has a random character and tends towards stability, so in real-world settings this solution method yields better solutions.

In today’s manufacturing environment, the facility layout must be sufficiently adaptable to changes in product design, process design and flexible scheduling in order to stay competitive. We therefore believe that this model and method are not only applicable to real problems but also highly flexible, and we hope the method proves useful in real-world settings.

5 Conclusion

As we have seen, the main objective of this research was to develop a mathematical model of the dynamic facility layout problem as a two-objective problem that includes the transportation costs in the objective functions and considers budget and transporter-type limitations simultaneously, and to present a solution method for it. Since exact methods are usually unsuitable for large problems, heuristic and metaheuristic approaches were developed. The proposed algorithm combines simulated annealing with cloud theory. Its performance was compared with algorithms from the literature and the results indicate its effectiveness. For this comparison, the parameters of the proposed algorithm were first set to their best values using the Taguchi method so that the algorithm would have maximum efficiency; random sample problems were then generated and solved by the proposed algorithm, and the results were compared with those of the two other algorithms. The results show that NSGA-II performs significantly better than the other two algorithms on the number-of-Pareto-solutions criterion, while MOCSA and NRGA are very close to each other. NSGA-II and MOCSA also perform equally well on the spacing criterion, whereas NRGA is weaker than the other two. On the maximum spread criterion, MOCSA performs significantly better than NRGA, while NSGA-II and NRGA do not differ significantly; moreover, although NSGA-II and MOCSA are statistically equivalent on this criterion, MOCSA can be said to perform somewhat better. On the running-time criterion, MOCSA performs far better than the other two algorithms.

Finally, although this study deals directly with production sites, its results can also be applied to other arrangement problems, especially office layout, because the approach is generic.

This research covers only a small portion of the combined problems related to facility layout, so further research can be done in this field, for example:

  • Considering unequal facility sizes and solving the resulting problem with the proposed algorithm

  • Taking into account intracellular flows and the entry and exit points of each cell

  • Taking into account random parameters of the layout problem in the dynamic case

  • Considering the time value of money in the various periods as well as fuzzy costs.