
Clustered Queuing Model for Task Scheduling in Cloud Environment

  • Sridevi S.
  • Rhymend Uthariaraj V.
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 645)

Abstract

With the advent of big data and the Internet of Things (IoT), the problem of optimal task scheduling for heterogeneous multi-core virtual machines (VMs) in the cloud environment has garnered greater attention from researchers around the globe. The queuing model used for optimal task scheduling must be tuned according to the interactions of tasks with the cloud resources and the availability of cloud processing entities. Well-known queue disciplines such as First-In First-Out, Last-In First-Out, Selection In Random Order, Priority Queuing, Shortest Job First, and Shortest Remaining Processing Time have all been applied to this problem. We propose a novel queue discipline based on k-means clustering, called the clustered queue discipline (CQD), to tackle the above-mentioned problem. Results show that CQD performs better than the FIFO and priority queue models under high demand for resources. The study shows that, in all cases, approximations to the CQD policies perform better than the other disciplines; randomized policies perform fairly close to the proposed one; and the performance gain of the proposed policy over the other simulated policies increases as the mean task resource requirement increases and as the number of VMs in the system increases. It is also observed that the time complexity of the clustering and scheduling policies is not optimal and hence needs to be improved.

Keywords

Cloud computing · Load balancing · Clustering · Queuing discipline

1 Introduction

The task scheduling problem in the cloud environment is a well-known NP-hard problem in which the queuing strategy adopted to schedule tasks plays a vital role. The existing queue disciplines follow strategies that are not well suited to cloud environments. Based on the extensive study carried out, it is found that a queue discipline suitable for the cloud environment should possess the following characteristics: multiple queues whose tasks can be quickly forwarded to appropriate nodes [1]; a single entry point for tasks; multiple VM buffer points from which tasks exit the system after processing; computable joint probability distributions; and fair queuing with minimum task waiting time and maximum utilization of virtual machines (VMs).

The primary contribution of this paper is a novel queuing model proposed to further improve task scheduling performance in cloud data centers. The steps to derive the joint probability distribution based on the proposed CQD model are given in detail. The model is compared with the FIFO and priority queuing disciplines to reveal the improved performance obtained by adopting CQD. Probability distribution measures are rarely computed for such complex real-time queuing systems [2].

2 Literature Study

A few of the well-known service disciplines in queuing models are First-In First-Out (FIFO), Last-In First-Out (LIFO), Selection In Random Order (SIRO), Priority, Shortest Job First (SJF), and Shortest Remaining Processing Time (SRPT). In cloud systems, job priority, job length, job interdependencies, processing capacity, and the current load on the VMs are the factors that decide the next task to be scheduled [3], whereas the above disciplines do not consider them.

In [4], the authors modeled the cloud center as a classic open network and obtained the distribution of response time under the assumption of exponential inter-arrival and service time distributions. The response time distribution revealed the relationship between the maximal number of tasks and minimal resource requirements against the required level of service. Generally, the literature in this area elaborately discusses M/G/m queuing systems, as outlined in [5, 6, 7]. The response time distributions and queue length analysis for M/G/m systems are insufficient for the cloud environment, as it requires two stages of processing in which workload characterization and characterized task allocation are a must. As solutions for the distribution of response time and queue length in M/G/m systems cannot be obtained in closed form, suitable approximations were sought in these papers. However, most of the above models yield proper estimates of mean response time only under the constraint that few VMs are present [8]. Approximation errors are particularly large when the offered load is small and the number of servers m is large [9]. Hence, these results are not directly applicable to performance analysis of cloud computing environments, where the number of VMs is generally large and the service and arrival distributions are not known.

3 Clustered Queuing Model

As observed, the existing queuing models do not directly fit the cloud environment [10]. Hence, a suitable queuing model that best depicts the cloud scenario is developed. In M/G/∞ networks, the analysis of waiting time and response time distributions is well established, but determining the joint distribution of the queue lengths at the various servers at the arrival epochs of a submitted task presents an important problem. This paper is devoted to that problem. The following subsection discusses the proposed queuing model in detail.

3.1 Statistical Model

The focus is to derive a queuing model with the above characteristics in order to effectively schedule incoming task requests in the cloud environment. Jockeying is allowed when the load imbalance among VMs becomes high. For simplicity, balking and reneging scenarios are not considered.

The queuing model shown in Fig. 1 involves a global queue and several local queues. The global queue is the one-way entry into the system, and all tasks submitted to the cloud environment pass through it. The arrival rate of incoming requests is taken as λ, and μ1 and μ2 are the clustering and scheduling service rates, respectively. Assuming that the departure process of each queue is again a Poisson process, the following discussion is put forth.
Fig. 1

Illustration of the queuing model

The clustering mechanism is based on each task's resource requirement rate; workload characterization of incoming task requests is done here. Task requests are buffered for Δt units of time and then clustered according to their resource requirements. The queuing model prescribed for this problem is given in Kendall's notation as,
$$\left( {{\text{M}}/{\text{G}}_{2} /{\text{m}}} \right):\left( {{\infty }/{\text{CQD}}} \right)$$

It represents that arrivals follow a Markovian arrival distribution (M) and service follows a two-stage general service distribution (G2) with m VMs in an infinite-capacity system (∞). The model follows the clustered queue discipline (CQD), which is the proposed work. Here, a general distribution means an arbitrary distribution with known E(X) and E(X²), and the service times are independent and identically distributed (IID) [11].
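As an illustration of the clustering step of the first service stage, the task requests buffered during one Δt window can be grouped with a one-dimensional k-means pass. The sketch below is our own: the choice of k, the iteration count, and the quantile-based initialisation are assumptions, not taken from the paper.

```python
import numpy as np

def cluster_tasks(requirements, k=3, iters=20):
    """One-dimensional Lloyd's k-means over task resource-requirement rates."""
    reqs = np.asarray(requirements, dtype=float)
    # Deterministic initialisation: spread centroids across the requirement range.
    centroids = np.quantile(reqs, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign every task to its nearest centroid.
        labels = np.argmin(np.abs(reqs[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its cluster (skip empty clusters).
        for j in range(k):
            members = reqs[labels == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids, labels

# Task requests buffered during one Δt window (resource-requirement rates).
buffered = [1.0, 1.2, 0.9, 5.0, 5.5, 4.8, 10.0, 9.7, 10.3]
centroids, labels = cluster_tasks(buffered, k=3)
```

Each resulting cluster would then feed the class of VMs matching its centroid's requirement level.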

3.2 Performance Parameters

Generally, performance parameters involve terms such as server utilization, throughput, waiting time, response time, queue length, and number of customers in the system at any given time [2]. Here, λ is the arrival rate; μ1 and μ2 are clustering and scheduling service rates.

The server utilization factor for the clustering server Uc with Poisson arrival and general service distribution is given by,
$${\text{U}}_{\text{c}} =\uplambda/\upmu_{1}$$
(1)
The server utilization factor for the scheduling server Us with Poisson arrival and general service distribution with m VMs in the system is given by,
$$U_{s} =\uplambda/{\text{m}}\upmu_{2}$$
(2)
The throughput of the system is defined as the mean number of requests serviced during a time unit. λ(i) denotes the arrival rate of tasks at parallel queue i of stage II, with i ranging from 1 to m. Throughput is denoted by ψ and is given by,
$$\uppsi = {\text{U}}_{\text{c}}\upmu_{1} + {\text{U}}_{\text{s}}\upmu_{2} \left( {\sum\uplambda_{{({\text{i}})}} /{\text{m}}\upmu_{{2({\text{i}})}} } \right)$$
(3)
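Equations (1)–(3) can be evaluated directly. In the sketch below, splitting λ evenly across the m stage-II queues when per-queue rates are not supplied is our simplifying assumption.

```python
def utilisation_clustering(lam, mu1):
    """Eq. (1): U_c = lambda / mu1."""
    return lam / mu1

def utilisation_scheduling(lam, mu2, m):
    """Eq. (2): U_s = lambda / (m * mu2)."""
    return lam / (m * mu2)

def throughput(lam, mu1, mu2, m, lam_i=None, mu2_i=None):
    """Eq. (3): psi = U_c*mu1 + U_s*mu2 * sum_i lambda(i) / (m * mu2(i)).

    If per-queue rates lam_i, mu2_i are not given, the load is assumed
    to split evenly over the m queues (our assumption, not the paper's).
    """
    lam_i = lam_i if lam_i is not None else [lam / m] * m
    mu2_i = mu2_i if mu2_i is not None else [mu2] * m
    u_c = utilisation_clustering(lam, mu1)
    u_s = utilisation_scheduling(lam, mu2, m)
    return u_c * mu1 + u_s * mu2 * sum(l / (m * mu) for l, mu in zip(lam_i, mu2_i))

# Example: lambda = 8, mu1 = 10, mu2 = 2, m = 10 VMs.
psi = throughput(8.0, 10.0, 2.0, 10)
```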

3.3 Joint Probability Distribution for Tandem Queues in Parallel

We approach the model as two queuing phases in series. The first phase follows a single-server single-queue model with infinite service capacity, whereas the second phase involves tandem queues in parallel with a single global scheduler as the server with infinite capacity. As the first phase is a well-known single-server single-queue model, it does not require further investigation. The second phase of the model, with tandem queues in parallel, is of major concern.

The following notation, adapted from [12], is used to model the queuing discipline:

LQ1, LQ2, …, LQk are k queues in parallel; tasks arrive at the queues in Poisson fashion with arrival rate λ, and the service times at LQ1, LQ2, …, LQk are independent, identically distributed stochastic variables with distributions B1(.), B2(.), …, Bk(.) and first moments β1, β2, …, βk. In the following, it is assumed that Bi(0+) = 0 and βi < ∞ for i = 1, …, k.

On deriving the queue characteristics of the second phase of the model, we shall compound the results with the already known parameters of M/M/1 model [13].
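The first-phase quantities referred to here are the textbook M/M/1 results; a minimal sketch (assuming λ < μ for stability):

```python
def mm1_metrics(lam, mu):
    """Textbook M/M/1 measures for the first (clustering) phase:
    utilisation rho, mean number in system L, mean sojourn time W,
    and mean waiting time in queue Wq."""
    assert lam < mu, "stability requires lambda < mu"
    rho = lam / mu
    L = rho / (1 - rho)      # mean number in system
    W = 1 / (mu - lam)       # mean response (sojourn) time
    Wq = rho / (mu - lam)    # mean waiting time before service starts
    return rho, L, W, Wq     # Little's law holds: L = lam * W

rho, L, W, Wq = mm1_metrics(lam=3.0, mu=5.0)
```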

Now, the derivation steps for tandem queues in parallel are discussed below. Our approach initially considers 2 queues in parallel and then extends the results to k queues in parallel.

The proposed model is derived by transplanting the tandem-queues-in-series derivation [1] to queues in parallel. The following three steps outline the derivation method, adopted from the approach followed by O.J. Boxma:
  • Determine a product-form expression of the type determining joint stationary queue length distribution of a submitted task at its arrival epochs at two queues of a general network of M/G/∞ queues;

  • Apply the PASTA property which states that ‘Poisson Arrivals See Time Averages’ [14];

  • Decompose the queue length term Xm(t) into independent terms corresponding to the position of a task at time instant 0.
    Fig. 2

Two M/G/m queues in parallel

In Fig. 2, let x1(t) and x2(t) denote the queue lengths of LQ1 and LQ2 at time t, such that x1(t) = l1 and x2(t) = l2. Let \(\sigma_{1}^{(1)}, \sigma_{2}^{(1)}, \ldots, \sigma_{l_{1}}^{(1)}, \sigma_{1}^{(2)}, \sigma_{2}^{(2)}, \ldots, \sigma_{l_{2}}^{(2)}\) denote the residual service times of the tasks in service, i.e., the remaining service time required by each task to complete. Hence, \((x_{1}(t), x_{2}(t), \sigma_{1}^{(1)}, \sigma_{2}^{(1)}, \ldots, \sigma_{l_{1}}^{(1)}, \sigma_{1}^{(2)}, \sigma_{2}^{(2)}, \ldots, \sigma_{l_{2}}^{(2)})\) is evidently a Markov process.

Theorem

At equilibrium, the joint stationary distribution of the queue lengths \(l_{1}\) and \(l_{2}\) when a particular task arrives at either LQ1 or LQ2 at its arrival epoch is given by Eq. 4
$$\begin{aligned} { \Pr }\{ x_{1} \left( t \right) & = l_{1} ,x_{2} \left( t \right) = l_{2} , \sigma_{1}^{(1)} \le x_{1} ,\sigma_{2}^{(1)} \le x_{2} , \ldots ,\sigma_{l_{1}}^{(1)} \le x_{l_{1}} ,\sigma_{1}^{(2)} \le y_{1} ,\sigma_{2}^{(2)} \\ & \le y_{2} , \ldots ,\sigma_{l_{2}}^{(2)} \le y_{l_{2}} / x_{1} \left( 0 \right) = 0,x_{2} \left( 0 \right) = 0\} \\ & = \exp \left( { - \lambda \int_{0}^{t} {\left( {1 - B_{1} \left( x \right)* B_{2} \left( x \right)} \right) \cdot dx } } \right. \\ & + \frac{{\lambda^{l_{1}} }}{{l_{1} !}}\mathop \prod \limits_{i = 1}^{l_{1}} \left\{ {\int_{0}^{t} {\left( {B_{1} \left( {x + x_{i} } \right) - B_{1} \left( x \right)} \right) \cdot dx} } \right\} \\ & + \left. {\frac{{\lambda^{l_{2}} }}{{l_{2} !}}\mathop \prod \limits_{j = 1}^{l_{2}} \left\{ {\int_{0}^{t} {\left( {B_{2} \left( {x + y_{j} } \right) - B_{2} \left( x \right)} \right) \cdot dx} } \right\}} \right), \quad t \ge 0,\; l_{1} ,l_{2} \ge 0 \\ \end{aligned}$$
(4)

Proof

Assuming that in the interval (0, t), \(n\) tasks arrive where \(n \ge l_{1} + l_{2}\). It is trivial that in a Poisson process of arrival between (0, t) the joint probability distribution of the epochs of these arrivals agrees with the joint distribution of \(n\) independent random points distributed uniformly in (0, t).
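This order-statistics property can be checked numerically: conditioning a simulated Poisson process on N(t) = n, the arrival epochs should look like n i.i.d. Uniform(0, t) points, so their mean should be t/2. A Monte Carlo sketch (the parameter values below are arbitrary):

```python
import random

def conditional_epoch_mean(lam, t, n_target, trials=20000, seed=1):
    """Simulate Poisson(lam) arrivals over (0, t); keep only realisations
    with exactly n_target arrivals and return the mean arrival epoch."""
    rng = random.Random(seed)
    epochs = []
    for _ in range(trials):
        s, arrivals = 0.0, []
        while True:
            s += rng.expovariate(lam)   # exponential inter-arrival times
            if s > t:
                break
            arrivals.append(s)
        if len(arrivals) == n_target:   # condition on N(t) = n_target
            epochs.extend(arrivals)
    return sum(epochs) / len(epochs)

# With lam = 2 and t = 5, E[N(t)] = 10; conditioned epochs should average t/2 = 2.5.
mean_epoch = conditional_epoch_mean(lam=2.0, t=5.0, n_target=10)
```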

As shown in Fig. 3, if a task arrives at epoch (t − x), the task is clustered into either LQ1 or LQ2. Then \(B_{1} \left( {x + x_{i} } \right) - B_{1} \left( x \right)\) is the distribution at LQ1 with residual service time at most xi, and \(B_{2} \left( {x + y_{j} } \right) - B_{2} \left( x \right)\) is the distribution at LQ2 with residual service time at most yj. If the task has left the local queue, the distribution is \(B_{1} \left( x \right)* B_{2} \left( x \right)\). Now, the LHS can be written as,
Fig. 3

Timeframe ∆t

$$\begin{aligned} {\text{LHS}} & = \mathop \sum \limits_{{n = l_{1} + l_{2} }}^{\infty } \left( {e^{ - \lambda t} \frac{{(\lambda {\text{t}})^{n} }}{{{\text{n}}!}} \frac{{{\text{n}}!}}{{l_{1} !l_{2} !\left( {n - l_{1} - l_{2} } \right)!}} \mathop \prod \limits_{i = 1}^{{l_{1} }} \left\{ {\frac{1}{t} \int_{0}^{t} {\left( {B_{1} \left( {x + x_{i} } \right)} \right.} } \right.} \right. \\ & \left. {\left. { - B_{1} \left( x \right)} \right) \cdot dx} \right\} + \mathop \prod \limits_{j = 1}^{{l_{2} }} \left\{ {\frac{1}{t} \int_{0}^{t} {\left( {B_{2} \left( {x + y_{j} } \right) - B_{2} \left( x \right)} \right) \cdot dx} } \right\} \\ & \left. { + \mathop \prod \limits_{k = 1}^{{n - l_{1} - l_{2} }} \left\{ {\frac{1}{t}\int_{0}^{t} {\left( {B_{1} \left( x \right)* B_{2} \left( x \right)} \right) \cdot dx} } \right\}} \right) \\ \end{aligned}$$
(5)

Letting \(t \to \infty\), we obtain Eq. 6 after rearranging the integrals. The argument can be extended to show that the limiting distribution is independent of the initial distribution.

If \(\upbeta_{1} < \infty\), \(\upbeta_{2} < {\infty }\), then,
$$\begin{aligned} \Pr \{ x_{1} \left( t \right) = l_{1} ,x_{2} \left( t \right) & = l_{2} , \sigma_{1}^{\left( 1 \right)} \le x_{1} ,\sigma_{2}^{\left( 1 \right)} \le x_{2} , \ldots ,\sigma_{l_{1}}^{\left( 1 \right)} \le x_{l_{1}} ,\sigma_{1}^{\left( 2 \right)} \\ & \le y_{1} ,\sigma_{2}^{\left( 2 \right)} \le y_{2} , \ldots ,\sigma_{l_{2}}^{\left( 2 \right)} \le y_{l_{2}} \} \\ & = e^{{ - \lambda\upbeta_{1} }} \frac{{\left( {\lambda\upbeta_{1} } \right)^{{l_{1} }} }}{{l_{1} !}} \mathop \prod \limits_{i = 1}^{{l_{1} }} \left\{ {\int_{0}^{{{\text{x}}_{i} }} {\frac{{1 - B_{1} \left( x \right)}}{{\upbeta_{1} }}\; \cdot \;dx} } \right\} \\ & + e^{{ - \lambda\upbeta_{2} }} \frac{{\left( {\lambda\upbeta_{2} } \right)^{{l_{2} }} }}{{l_{2} !}} \mathop \prod \limits_{j = 1}^{{l_{2} }} \left\{ {\int_{0}^{{{\text{y}}_{j} }} {\frac{{1 - B_{2} \left( y \right)}}{{\upbeta_{2} }}\; \cdot \;dy} } \right\} \\ \end{aligned}$$
(6)
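The marginal implied by Eq. 6, namely that each local queue behaves as an M/G/∞ queue whose stationary number in service is Poisson with mean λβ, can be checked by simulation. The sketch below uses a Uniform(0, 3) service time (so β = 1.5) purely as an example of a general service distribution:

```python
import random

def mg_inf_mean_in_service(lam, T, service_sampler, trials=4000, seed=2):
    """Simulate an M/G/inf queue and return the average number of tasks
    still in service at observation time T (T well past the warm-up)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        s, in_service = 0.0, 0
        while True:
            s += rng.expovariate(lam)          # Poisson arrivals
            if s > T:
                break
            if s + service_sampler(rng) > T:   # task still busy at time T
                in_service += 1
        counts.append(in_service)
    return sum(counts) / len(counts)

# lam = 2, beta = E[Uniform(0, 3)] = 1.5, so the stationary mean is lam*beta = 3.
mean_in_system = mg_inf_mean_in_service(
    lam=2.0, T=50.0, service_sampler=lambda r: r.uniform(0.0, 3.0))
```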

Remarks

The above result is a limiting case of one of the models proposed by Cohen [15] for the processor sharing discipline. Equation 6 yields the well-known joint probability distribution of two stationary parallel queues. On applying the PASTA property, it follows that the joint stationary distribution of the queue lengths and residual service times just before the arrival of the tagged task at the LQs is given by the following. The generating function of the joint stationary distribution of the queue lengths \(l_{1}\) and \(l_{2}\) is given by,
$$\begin{aligned} & {\text{E }}\left[ {{\text{z}}_{1}^{{l_{1} }} } \right] + {\text{E }}\left[ {{\text{z}}_{2}^{{l_{2} }} } \right] \\ & = \int_{t = 0}^{\infty } {dB_{1} \left( t \right)} \sum\limits_{{l_{1} = 0}}^{\infty } {{\text{z}}_{1}^{{l_{1} }} } \sum\limits_{{l_{2} = 0}}^{\infty } {{\text{z}}_{2}^{{l_{2} }} } \left\{ {\int_{{y_{n2} = 0}}^{\infty } {{\text{Pr\{ }}x_{2} \left( t \right) = l_{2} /x_{1} \left( 0 \right) = l_{1} ,x_{2} \left( 0 \right) = n_{2} ,\tau } } \right. \\ & = t, \sigma_{1}^{\left( 1 \right)} = x_{1} , \ldots ,\sigma_{l1}^{\left( 1 \right)} = x_{l1} ,\sigma_{1}^{\left( 2 \right)} = y_{1} , \ldots ,\sigma_{n2}^{\left( 2 \right)} \\ & = y_{n2} \} e^{{ - \lambda\upbeta_{1} }} \frac{{\left( {\lambda\upbeta_{1} } \right)^{{l_{1} }} }}{{l_{1} !}} \prod\limits_{i = 1}^{{l_{1} }} {\left\{ {\frac{{1 - B_{1} \left( {x_{i} } \right)}}{{\upbeta_{1} }}} \right\}} \\ & + e^{{ - \lambda\upbeta_{2} }} \frac{{\left( {\lambda\upbeta_{2} } \right)^{{n_{2} }} }}{{n_{2} !}} \prod\limits_{j = 1}^{{n_{2} }} {\left\{ {\frac{{1 - B_{2} \left( {y_{j} } \right)}}{{\upbeta_{2} }}} \right\}} \; \cdot \;dx_{l1} \; \cdot \;dy_{n2} \\ & { + }\int_{{x_{n1} = 0}}^{\infty } {{\text{Pr\{ }}x_{1} \left( t \right) = l_{1} /x_{1} \left( 0 \right)} \\ & = n_{1} ,x_{2} \left( 0 \right) = l_{2} ,\tau = t, \sigma_{1}^{\left( 1 \right)} = x_{1} , \ldots ,\sigma_{n1}^{\left( 1 \right)} = x_{n1} ,\sigma_{1}^{\left( 2 \right)} = y_{1} , \ldots ,\sigma_{l2}^{\left( 2 \right)} \\ & = y_{l2} \} \,e^{{ - \lambda\upbeta_{1} }} \frac{{\left( {\lambda\upbeta_{1} } \right)^{{n_{1} }} }}{{n_{1} !}} \prod\limits_{i = 1}^{{n_{1} }} {\left\{ {\frac{{1 - B_{1} \left( {x_{i} } \right)}}{{\upbeta_{1} }}} \right\}} \\ & \left. { + e^{{ - \lambda\upbeta_{2} }} \frac{{\left( {\lambda\upbeta_{2} } \right)^{{l_{2} }} }}{{l_{2} !}} \prod\limits_{j = 1}^{{l_{2} }} {\left\{ {\frac{{1 - B_{2} \left( {y_{j} } \right)}}{{\upbeta_{2} }}} \right\}} \; \cdot \;dx_{n1} \; \cdot \;dy_{l2} } \right\} \quad \left| {{\text{z}}_{1} } \right| \le 1, \left| {{\text{z}}_{2} } \right| \le 1 \\ \end{aligned}$$
(7)
Now, independent terms for \(x_{1} \left( t \right)\) and \(x_{2} \left( t \right)\) for both the local queues are to be derived following term decomposition technique given by Cohen [15]. Combining the independent terms for both the queues gives the following equation.
$$\begin{aligned} {\text{E }}\left[ {{\text{z}}_{1}^{{l_{1} }} } \right] + {\text{E }}\left[ {{\text{z}}_{2}^{{l_{2} }} } \right] = & e^{{ - \lambda\upbeta_{1} \left( {{\text{z}}_{1} - 1} \right)}} \int_{0}^{\infty } {e^{{ - \lambda\upbeta_{1} {\text{z}}_{1} {\text{p}}_{0} \left( {1 - {\text{z}}_{1} } \right)}} \; \cdot \;dB_{1} \left( t \right)} \\ + & e^{{ - \lambda\upbeta_{2} \left( {{\text{z}}_{2} - 1} \right)}} \int_{0}^{\infty } {e^{{ - \lambda\upbeta_{2} \left( {1 - {\text{p}}_{2} } \right)}} \; \cdot \;dB_{2} \left( t \right) \left| {{\text{z}}_{1} } \right| \le 1, \left| {{\text{z}}_{2} } \right| \le 1} \\ \end{aligned}$$
(8)

The above equation gives the joint stationary distribution of the two local queues considered. On extending the above argument to n arbitrary local queues, we can arrive at the final joint distribution. The above scenario of n different classes of parallel queues is simulated, and analysis is given in the following section.

4 Experimental Analysis

The CQD policy is analyzed against other well-known policies, namely the priority and FIFO mechanisms. Here, M/G/m parallel queues are considered for experimentation. Existing literature [16, 17] deals with performance analysis of queuing systems based mainly on mean response time, which is highly critical in the cloud environment for providing the necessary quality of service (QoS) [18].

4.1 Performance Metrics

The mean task response time for tasks queued under the FIFO, priority, and CQD disciplines is analyzed as follows. The number of VMs is fixed at 10 in this case. For the FIFO and priority disciplines, the mean response time increases steeply as the resource requirement increases, whereas CQD exhibits a smooth incremental change. This smooth increase is due to the clustering of the appropriate class of tasks to the appropriate VM, thereby reducing excess waiting time (Fig. 4).
Fig. 4

Mean task response time analysis based on processing requirements
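The qualitative effect reported above can be reproduced with a deliberately small toy model. The sketch below is not the paper's simulation: it assumes all tasks arrive in one batch at time 0, uses blind round-robin dispatch as a stand-in for a FIFO dispatcher, and takes two VMs of unequal speed; the CQD-style dispatcher sends the long-task cluster to the fast VM.

```python
def simulate(tasks, speeds, assign):
    """All tasks arrive at time 0; assign(i, size) maps a task to a VM index.
    Each VM serves its own queue in arrival order; returns mean response time."""
    free_at = [0.0] * len(speeds)
    responses = []
    for i, size in enumerate(tasks):
        vm = assign(i, size)
        finish = free_at[vm] + size / speeds[vm]
        free_at[vm] = finish
        responses.append(finish)  # arrival at t = 0, so response = finish time
    return sum(responses) / len(responses)

# Two heterogeneous VMs (speeds 1 and 10) and alternating long/short tasks.
tasks = [10.0, 1.0, 10.0, 1.0]
speeds = [1.0, 10.0]

# Baseline: blind round-robin dispatch, ignoring task class and VM speed.
rr_mean = simulate(tasks, speeds, lambda i, size: i % 2)
# CQD-style dispatch: the long-task cluster goes to the fast VM.
cqd_mean = simulate(tasks, speeds, lambda i, size: 1 if size > 5 else 0)
```

In this toy instance the clustered dispatch gives a mean response time of 1.5 against 7.575 for round-robin; the gap comes entirely from matching the task class to the VM class, which is the mechanism CQD exploits.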

The trend in Fig. 5 shows that, as the number of VMs increases, CQD's performance improves sharply compared to FIFO and priority. Queuing based on priority and CQD is almost similar in performance, as evident from Fig. 5: as the number of VMs increases, the mean task response time decreases in both in a similar fashion. Hence, the model works on par with the priority queue discipline for a high number of VMs and a small number of task requests. But as the number of task requests increases, CQD significantly outperforms the priority and FIFO queue mechanisms.
Fig. 5

Mean task response time analysis based on number of VMs with number of tasks fixed as 500

4.2 Analysis and Discussion

The space complexity depends on the buffer space in each VM, which has to be planned to use optimum buffer space. The average waiting time of a task [19] in this type of queuing system comprises the waiting times in stage 1 and stage 2, the time order of the clustering algorithm, and the time order of the scheduling policy adopted by the global scheduler. It is given as,
$$WT_{i} = W_{s1} + O\left( {clustering} \right) + W_{s2} + O\left( {scheduling} \right)$$
(9)

Amortized analysis. The n tasks do not all take the same amount of time. The basic idea in CQD is to do substantial ‘prework’ by clustering a priori. This pays off: as a result of the prework, the scheduling operation can be carried out so fast that a total time of O(g(n)) is not exceeded, where g(n) is a sub-linear function of the n tasks. The investment in the prework thus amortizes itself.
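The amortization argument can be made concrete with a hypothetical operation-count model (the constants k, iters, and the unit dispatch cost below are our assumptions, not measurements): clustering prework is paid once per batch of n tasks, after which each dispatch is a constant-time table lookup, whereas a naive scheduler scans all m VMs for every task.

```python
def amortized_cost_per_task(n, k=3, iters=20):
    """Operation-count model: one k-means pass over the batch (prework),
    then an O(1) cluster-to-VM table lookup per task."""
    prework = k * n * iters      # clustering cost, paid once per batch
    dispatch = n                 # one constant-time lookup per task
    return (prework + dispatch) / n

def naive_cost_per_task(n, m):
    """Without prework, every task scans all m VMs to find a match."""
    return m

# For large m, the amortised per-task cost is independent of m.
```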

5 Conclusion and Future Work

This paper outlines the need for an efficient queuing model best suited to cloud computing, and a novel method involving a clustering technique is proposed. The queuing model derivation steps are outlined and validated against the existing queues-in-series derivation. Analytical discussion demonstrates the efficiency of the method. The proposed work is found to perform better than existing disciplines such as FIFO and priority under high resource requirements and when a large number of VMs is present. Future work shall be devoted to applying the model in real time and to mathematically deriving and analyzing its efficiency in terms of energy complexity.

Acknowledgements

We acknowledge Visvesvaraya PhD scheme for Electronics and IT, DeitY, Ministry of Communications and IT, Government of India’s fellowship grant through Anna University, Chennai for their support throughout the working of this paper.

References

  1. Boxma, O.J.: M/G/∞ tandem queues. Stoch. Process. Appl. 18, 153–164 (1984)
  2. Sztrik, J.: Basic Queueing Theory. University of Debrecen (2012)
  3. Buyya, R., Sukumar, K.: Platforms for Building and Deploying Applications for Cloud Computing, pp. 6–11. CSI Communication (2011)
  4. Xiong, K., Perros, H.: Service performance and analysis in cloud computing. In: Proceedings of the 2009 Congress on Services—I, Los Alamitos, CA, USA, pp. 693–700 (2009)
  5. Ma, B.N.W., Mark, J.W.: Approximation of the mean queue length of an M/G/c queueing system. Oper. Res. 43, 158–165 (1998)
  6. Miyazawa, M.: Approximation of the queue-length distribution of an M/GI/s queue by the basic equations. J. Appl. Probab. 23, 443–458 (1986)
  7. Yao, D.D.: Refining the diffusion approximation for the M/G/m queue. Oper. Res. 33, 1266–1277 (1985)
  8. Tijms, H.C., Hoorn, M.H.V., Federgruen, A.: Approximations for the steady-state probabilities in the M/G/c queue. Adv. Appl. Probab. 13, 186–206 (1981)
  9. Kimura, T.: Diffusion approximation for an M/G/m queue. Oper. Res. 31, 304–321 (1983)
  10. Vilaplana, J., Solsona, F., Teixidó, I., Mateo, J., Abella, F., Rius, J.: A queuing theory model for cloud computing. J. Supercomput. 69(1), 492–507 (2014)
  11. Boxma, O.J., Cohen, J.W., Huffels, N.: Approximations of the mean waiting time in an M/G/s queueing system. Oper. Res. 27, 1115–1127 (1979)
  12. Kleinrock, L.: Queueing Systems: Theory, vol. 1. Wiley-Interscience, New York (1975)
  13. Adan, I.J.B.F., Boxma, O.J., Resing, J.A.C.: Queueing models with multiple waiting lines. Queueing Syst. Theory Appl. 37(1), 65–98 (2011)
  14. Wolff, R.W.: Poisson arrivals see time averages. Oper. Res. 30, 223–231 (1982)
  15. Cohen, J.W.: The multiple phase service network with generalized processor sharing. Acta Informatica 12, 245–284 (1979)
  16. Khazaei, H., Misic, J., Misic, V.: Performance analysis of cloud computing centers using M/G/m/m + r queuing systems. IEEE Trans. Parallel Distrib. Syst. 23(5) (2012)
  17. Slothouber, L.: A model of web server performance. In: Proceedings of the Fifth International World Wide Web Conference (1996)
  18. Yang, B., Tan, F., Dai, Y., Guo, S.: Performance evaluation of cloud service considering fault recovery. In: Proceedings of the First International Conference on Cloud Computing (CloudCom’09), pp. 571–576 (2009)
  19. Borst, S., Boxma, O.J., Hegde, N.: Sojourn times in finite-capacity processor-sharing queues. In: Next Generation Internet Networks, IEEE, pp. 55–60 (2005)

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Anna University, Chennai, India
