
1 Introduction

A traditional Internet server processes requests in a FIFO manner. During a high-load period, each task has to wait in a queue for a long time before being serviced. Overhead from tasks competing for limited resources, such as open connections and network bandwidth, increases. Retries from impatient clients worsen the load situation and cause a snowball effect. More elaborate resource allocation schemes, rather than the best-effort service model, need to be adopted to provide predictable services during high-load periods. The existing best-effort service of Internet servers, with FIFO scheduling and dropping of tasks (requests) when the queue is full, leads to misallocation of scarce and expensive network and CPU resources during heavy-load periods, causing unpredictable response delays.

The next generation Internet will demand differentiated services from Internet servers. The most important function of the future Internet is to support the provisioning of reliable real-time services on a wide scale. In order to achieve this goal, per-aggregate resource management is nowadays regarded as a mandatory choice. A classical example of the modern approach to servicing user tasks is to divide the service process among several separate servers (for example, as a mechanism for resolving certain security problems), each of which processes an independent part of the incoming tasks. In this paper, we analyze two different tandem configurations, each consisting of two separate servers connected to each other in a specific way. At each server, FIFO multiplexing is in place, meaning that all tasks traversing the server are buffered in a single queue and served First-Come-First-Served. Tandem networks have been studied extensively and applied in the evaluation of various systems, such as the design, capacity planning and performance evaluation of computer and communication systems, call centers, flexible manufacturing systems, etc. Some examples of their application in real systems (two-transmitter communication networks with Dynamic Bandwidth Allocation, a service facility with front and back room operations) can be found in [28] and [1], respectively. The behavior of various systems, including communication and computer systems, as well as production and manufacturing procedures, can be represented and analyzed through queuing network models to evaluate their performance [11,12,13, 29, 30]. System performance analysis usually includes the queue length distribution and various performance indicators such as response time, throughput and utilization [2, 6, 7, 10].

The theory behind tandem queues is well developed, see, e.g., [3,4,5, 14, 26, 27]. However, there is still great interest in more complicated setups involving blocking phenomena as well as different mechanisms for offering services. An excellent survey may be found in the well-known books of Perros [26] and Balsamo [3]. Over the years, high-quality research has appeared in diverse journals and conference proceedings in the fields of computer science, traffic engineering and communication engineering [4, 14, 26]. In particular, a two-node tandem queuing model with a Batch Markovian Arrival Process input flow and a non-exponential service time distribution is described in [8]. Additionally, systems with finite capacity queues under various blocking mechanisms and scheduling constraints are analyzed by the author in [15,16,17,18,19,20,21,22,23,24,25]. In [15, 16], closed-type, multi-center computer networks with different blocking strategies are investigated and measures of effectiveness based on Quality of Service (QoS) are studied. Markovian and semi-Markovian approaches for the analysis of open tandem networks with blocking are presented in [17, 19, 21, 23, 24, 25]. Some two-stage tandem queues with blocking and an optional feedback are presented in [20, 22]. In such systems, feedback means that a task may return, with a fixed probability, to the first server of the tandem immediately after service at the second one [8]. Tandems with feedback are usually more complex than those without, and they are mostly investigated under a stationary Poisson arrival process and exponential service time distributions [1, 28]. Blocking and deadlocking phenomena in an open, linked-in-series network model with HOL (head-of-line) priority feedback service were investigated and presented by the author in [18].

The remainder of this paper is organized as follows: Sect. 2 presents and explains the models' specification and description. Section 3 analyzes a tandem as an open, linked-in-series three-station network with blocking. Section 4 analyzes a tandem as a rerouting two-server network, also with blocking. Section 5 describes the numerical results obtained using our solution technique, followed by the concluding remarks in Sect. 6.

2 Models Specification and Description

In this research, two different configurations of an Internet tandem are presented. Each of these kinds of tandem networks has a single service line at the main server and another (additional) server with a single service line. Between these servers there is a common waiting buffer with finite capacity, say equal to m2. When this buffer is full, the accumulation of new tasks from the main server is temporarily suspended and a phenomenon called blocking occurs until the queue empties enough to allow new insertions. Similarly, if the first buffer (with capacity m1) ahead of the main server is full, then the Internet source node (station) is blocked. This is the classical mechanism for controlling the intensity of the arriving task stream that comes from the Internet users to the servicing tandem. In this kind of network configuration, no more than m1 + m2 + 2 tasks can be processed simultaneously, and the Internet tandem network becomes idle if there are no tasks in either server.

Let us consider the Internet two-server tandems with blocking, in both configurations, as shown in Figs. 1 and 2. In these figures we present models of generalized Internet tandem networks and analyze the feasibility of service with blocking and rerouting. These networks consist of four major logical components:

Fig. 1. Tandem configuration as open linked series servers.

Fig. 2. Tandem configuration as a rerouting network.

  1. Main server, for the first-stage (or first- and second-stage) task processing, with a task initiator, a task controller (realizing the ON-OFF strategy for incoming requests) and a task dispatcher.

  2. Second server – for the second-stage task processing.

  3. Communication channels.

  4. Internet source, where λ represents the request arrival process from the clients.

After processing, the responses are sent back to the clients through the communication channels. To simplify the model, we assume that the tandem of servers connects to the clients through a high-speed network. We have ignored the channel delay and flow control on the client side, as they are beyond the scope of the proposed study.

The input task stream comes from the Internet source to the main server. This server has a finite capacity buffer and can accept only m1 + 1 tasks. A new task that arrives when the main server buffer is full is forced to wait at the Internet source station and blocks it. For the linked-in-series configuration of the tandem (see Fig. 1), each task at the main server is processed on the service line and, upon service completion, sent to the second server. If there is a free service line on this server, the service process starts immediately; if not, the task must wait in the buffer. If the buffer is full, a task that completes service at the main server is forced to wait there and blocks the main server. In the case of the tandem with rerouting configuration (see Fig. 2), after service completion at the main server the task proceeds to the second server with probability 1 − σ, and with probability σ the task departs from the tandem. Tasks leaving the second server are always fed back to the main server. If the main server buffer is full then, similarly to the previous configuration, such tasks block the second server.
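This routing logic can be summarized in a small routine. The fragment below is a minimal C sketch of the decision taken after a main-server completion in both configurations; the function names, the enum and the random draw are our own illustration under the assumptions above, not part of the author's software.

#include <stdio.h>
#include <stdlib.h>

/* Possible destinations of a task that completes service at the main server. */
typedef enum { TO_SECOND, DEPART, BLOCKS_MAIN } next_step;

/* Series configuration (Fig. 1): the task always proceeds to the second
 * server unless that server is full (1 in service + m2 in the buffer),
 * in which case it stays and blocks the main server. */
next_step after_main_series(int tasks_at_second, int m2) {
    return (tasks_at_second < m2 + 1) ? TO_SECOND : BLOCKS_MAIN;
}

/* Rerouting configuration (Fig. 2): with probability sigma the task departs
 * from the tandem, otherwise it behaves as in the series configuration. */
next_step after_main_rerouting(double sigma, int tasks_at_second, int m2) {
    if (rand() / (double)RAND_MAX < sigma)
        return DEPART;
    return after_main_series(tasks_at_second, m2);
}

int main(void) {
    printf("series, second server full:    %d\n", after_main_series(31, 30));
    printf("rerouting, second server free: %d\n", after_main_rerouting(0.5, 5, 30));
    return 0;
}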

The general assumptions for these two-server tandem models are:

  1. the Internet task stream arriving at the main server is assumed to be a Poisson stream with rate λ = 1/a, where a is the mean inter-arrival time,

  2. a single service line is on the main server,

  3. a single service line is available on the second server,

  4. in both servers the service times are exponentially distributed random variables, with means sA = 1/μA and sB = 1/μB, where μA and μB are the mean service rates,

  5. the buffer capacities are finite, equal to m1 and m2 for the main and the second server respectively,

  6. the service strategy forbids dropping (truncating) tasks when the buffers are full (the network drop-tail strategy is forbidden), so no tasks are lost.

In this special type of multi-stage network with blocking, a deadlock may occur. We assume that a deadlock is detected instantaneously and resolved without any delay by simultaneously exchanging both blocked tasks [25].

Generally, the blocking phenomenon is a very important mechanism for controlling and regulating the intensity of tasks arriving from the Internet source (users) to the Internet tandem servers. The blocking strategy at the main server is realized by the controller mechanism: the controller temporarily suspends and resumes (ON-OFF strategy) the transfer process from the users, because the requirement that the network drop-tail strategy is forbidden must be satisfied. The arrival rate to the main server therefore depends on the state of the tandem and on a blocking factor that reduces the rate at which users send tasks to the tandem.

3 Tandem as Open Linked Series Servers

Let us consider the Internet two-server tandem shown in Fig. 1 as a three-station queuing network with blocking. Each queuing system can, in principle, be mapped onto an instance of a Markov process and then mathematically evaluated in terms of this process. Under the general assumptions above, the tandem network can be represented by a continuous-time homogeneous Markov chain. The queuing network model reaches a steady-state condition and the underlying Markov chain has a stationary state distribution. Moreover, such a queuing network with finite capacity queues has a finite state space. The solution of the Markov chain representation may then be computed, and the desired performance characteristics, such as the queue length distribution, utilizations and throughputs, obtained directly from the stationary probability vector.

In theory, any Markov model can be solved numerically. In particular, the solution algorithm for Markov queuing networks with blocking is a five-step procedure (steps 3 and 4 are summarized in matrix form after the list):

  1. Definition of the series network state space (choosing a state space representation).

  2. Enumeration of all the transitions that can possibly occur among the states.

  3. Definition of the transition rate matrix Q that describes the network evolution (generating the transition rates).

  4. Solution of the linear system of global balance equations to derive the stationary state distribution vector (computing the appropriate probability vector).

  5. Computation of the average performance indices from the probability vector.
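Steps 3 and 4 can be written compactly. Denoting by π the stationary probability vector and by Q the transition rate matrix, the global balance equations together with the normalizing condition take the standard matrix form (this is the classical formulation, recalled here only as a reminder):

$$ \pi {\cdot}Q = 0,\quad \quad \sum\limits_{(i,j)} {\pi_{i,j} } = 1 $$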

In this type of open series network we may denote its state by the pair (i,j), where i represents the number of tasks at the main server and j denotes the number at the second server, including all tasks in service and in blocking. Physically, some blocked tasks are located at the Internet source station or at the main server, but the nature of the service process in both servers allows one to treat them as occupying additional places in the buffers belonging to the main or the second server. In this case, there can be a maximum of m1 + 2 tasks assigned to the main server, including a task blocked at the Internet source. Similarly, there can be a maximum of m2 + 2 tasks assigned to the second server, including a task blocked at the main server. For feasible non-negative integer values of i and j, (i,j) represents a state of this queuing network, and p i,j denotes the probability of that state in equilibrium. These states and the possible transitions among them are shown in Fig. 3. This state diagram of the series server network contains all possible non-blocked states (marked by ovals) as well as the blocking states (marked by rectangles).
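To make the state space concrete, the following fragment is a minimal C sketch of step 1; it is our own illustration, not the author's package. It enumerates the feasible pairs (i, j), where i ranges up to m1 + 2 to account for a task blocked at the Internet source and j up to m2 + 2 to account for a task blocking the main server; judging from the state diagram and the balance equations below, the only excluded combination is (m1 + 2, m2 + 2), since simultaneous blocking is represented by (m1 + 1, m2 + 2).

#include <stdio.h>

#define M1 40   /* main server buffer capacity  (value used in Sect. 5) */
#define M2 30   /* second server buffer capacity (value used in Sect. 5) */

/* i ranges over 0..M1+2 (i = M1+2: source blocked), j over 0..M2+2
 * (j = M2+2: main server blocked); the pair (M1+2, M2+2) is not feasible,
 * since simultaneous blocking is represented by (M1+1, M2+2). */
static int feasible(int i, int j) {
    return !(i == M1 + 2 && j == M2 + 2);
}

/* Map a feasible state (i, j) onto a linear index for the probability
 * vector and the transition rate matrix Q. */
static int state_index(int i, int j) {
    return i * (M2 + 3) + j;   /* the single infeasible state leaves one unused slot */
}

int main(void) {
    int count = 0;
    for (int i = 0; i <= M1 + 2; i++)
        for (int j = 0; j <= M2 + 2; j++)
            if (feasible(i, j)) count++;
    printf("feasible states: %d of %d slots (last index %d)\n",
           count, (M1 + 3) * (M2 + 3), state_index(M1 + 2, M2 + 1));
    return 0;
}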

Fig. 3. State transition diagram for the linked series servers

Based on an analysis of the state space diagram, the process of constructing the corresponding differential-difference equations (differential in t and difference in the state (i,j)) can be divided into several independent steps, which describe similar, repeatable schemas (see Fig. 3); a small code sketch of the resulting transition rates is given after the equations.

These equations are:

(a)

    for states without blocking factor:

    $$ \begin{array}{*{20}l} {p^{{\prime }}_{0,0} \left( t \right) = - \lambda {\cdot}p_{0,0} \left( t \right) + \mu^{B} {\cdot}p_{0,1} \left( t \right)} \hfill \\ {p^{{\prime }}_{0,j} \left( t \right) = - \left( {\lambda + \mu^{B} } \right){\cdot}p_{0,j} \left( t \right) + \mu^{A} {\cdot}p_{1,j - 1} \left( t \right) + \mu^{B} {\cdot}p_{0,j + 1} \left( t \right)\quad {\text{for}}\,\,j = 1, \ldots ,m2 + 1} \hfill \\ {p^{{\prime }}_{i,0} \left( t \right) = - \left( {\lambda + \mu^{A} } \right){\cdot}p_{i,0} \left( t \right) + \lambda {\cdot}p_{i - 1,0} \left( t \right) + \mu^{B} {\cdot}p_{i,1} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1 + 1} \hfill \\ {p^{{\prime }}_{i,j} \left( t \right) = - \left( {\lambda + \mu^{B} + \mu^{A} } \right){\cdot}p_{i,j} \left( t \right) + \lambda {\cdot}p_{i - 1,j} \left( t \right) + \mu^{A} {\cdot}p_{i + 1,j - 1} \left( t \right) + \mu^{B} {\cdot}p_{i,j + 1} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1 + 1,j = 1, \ldots ,m2 + 1} \hfill \\ \end{array} $$
    (1)
(b)

    for states with Main Server blocking:

    $$ \begin{aligned} p^{{\prime }}_{0,m2 + 2} \left( t \right) & = - \left( {\lambda + \mu^{B} } \right){\cdot}p_{0,m2 + 2} \left( t \right) + \mu^{A} {\cdot}p_{1,m2 + 1} \left( t \right) \\ p^{{\prime }}_{i,m2 + 2} \left( t \right) & = - \left( {\lambda + \mu^{B} } \right){\cdot}p_{i,m2 + 2} \left( t \right) + \lambda {\cdot}p_{i - 1,m2 + 2} \left( t \right) + \mu^{A} {\cdot}p_{i + 1,m2 + 1} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1 \\ \end{aligned} $$
    (2)
(c)

    for states with both Internet source and Main Server simultaneous blocking:

    $$ p^{{\prime }}_{m1 + 1,m2 + 2} \left( t \right) = - \mu^{B} {\cdot}p_{m1 + 1,m2 + 2} \left( t \right) + \lambda {\cdot}p_{m1,m2 + 2} \left( t \right) + \mu^{A} {\cdot}p_{m1 + 2,m2 + 1} \left( t \right) $$
    (3)
(d)

    for states with Internet source blocking:

    $$ \begin{aligned} p^{{\prime }}_{m1 + 2,0} \left( t \right) & = - \mu^{A} {\cdot}p_{m1 + 2,0} \left( t \right) + \lambda {\cdot}p_{m1 + 1,0} \left( t \right) + \mu^{B} {\cdot}p_{m1 + 2,1} \left( t \right) \\ p^{{\prime }}_{m1 + 2,j} \left( t \right) & = - \left( {\mu^{A} + \mu^{B} } \right){\cdot}p_{m1 + 2,j} \left( t \right) + \lambda {\cdot}p_{m1 + 1,j} \left( t \right) + \mu^{B} {\cdot}p_{m1 + 2,j + 1} \left( t \right)\quad {\text{for}}\,\,j = 1, \ldots ,m2 \\ p^{{\prime }}_{m1 + 2,m2 + 1} \left( t \right) & = - \left( {\mu^{A} + \mu^{B} } \right){\cdot}p_{m1 + 2,m2 + 1} \left( t \right) + \lambda {\cdot}p_{m1 + 1,m2 + 1} \left( t \right) \\ \end{aligned} $$
    (4)
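For the series tandem, the transitions behind Eqs. (1)–(4) reduce to three guarded rules: an arrival (rate λ) increases i as long as the Internet source is not blocked, a main server completion (rate μA) moves one task from i to j provided the main server is busy and not blocked, and a second server completion (rate μB) decreases j. The C fragment below is our own sketch of step 3 (generating the transition rates) derived from these equations; it is not the author's package, the buffer sizes are deliberately tiny, and λ is only a placeholder value.

#include <stdio.h>

#define M1 3                 /* small capacities keep the example readable */
#define M2 2

static const double lambda = 3.0, muA = 8.0, muB = 5.0;  /* muA, muB as in Sect. 5 */

static int feasible(int i, int j) { return !(i == M1 + 2 && j == M2 + 2); }

/* Rate of the transition (i, j) -> (ni, nj); 0.0 if there is none.
 * The three guarded rules reproduce the in/out terms of Eqs. (1)-(4). */
static double rate(int i, int j, int ni, int nj) {
    /* arrival: allowed while the Internet source is not blocked */
    if (ni == i + 1 && nj == j && feasible(ni, nj) &&
        ((j <= M2 + 1 && i <= M1 + 1) || (j == M2 + 2 && i <= M1)))
        return lambda;
    /* main server completion: needs a task at the main server (i >= 1)
     * and a non-blocked main server (j <= M2 + 1) */
    if (ni == i - 1 && nj == j + 1 && i >= 1 && j <= M2 + 1)
        return muA;
    /* second server completion (departure from the tandem) */
    if (ni == i && nj == j - 1 && j >= 1)
        return muB;
    return 0.0;
}

int main(void) {
    /* sanity check: the total outgoing rate of every state matches the
     * diagonal terms of Eqs. (1)-(4), e.g. lambda + muB for (i, M2+2), i <= M1 */
    for (int i = 0; i <= M1 + 2; i++)
        for (int j = 0; j <= M2 + 2; j++) {
            if (!feasible(i, j)) continue;
            double out = 0.0;
            for (int ni = 0; ni <= M1 + 2; ni++)
                for (int nj = 0; nj <= M2 + 2; nj++)
                    if (feasible(ni, nj) && !(ni == i && nj == j))
                        out += rate(i, j, ni, nj);
            printf("state (%d,%d): total outgoing rate %.1f\n", i, j, out);
        }
    return 0;
}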

The solution for the equilibrium states (or the stationary states), if it exists, must satisfy:

$$ \mathop { \lim }\limits_{t \to \infty } p_{i,j}^{'} (t) = 0 $$
(5)

and if we let

$$ p_{i,j} = \mathop { \lim }\limits_{t \to \infty } p_{i,j} (t) $$
(6)

this leads to the set of equilibrium equations.

Here, a queuing network with blocking linked in series is, under appropriate assumptions, formulated as a Markov process, and the stationary probability vector can be obtained using numerical methods for linear systems of equations. Since the network model has a finite number of states, its steady-state probabilities can be found directly from Eqs. (1)–(6) by using an iterative method together with the normalizing condition on the sum of the state probabilities.
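One simple iterative scheme of this kind is the power method applied to the uniformized chain: for any Λ no smaller than the largest total outgoing rate, the matrix P = I + Q/Λ is stochastic and the stationary vector satisfies p = pP. The routine below is a minimal, generic C sketch of this idea (our illustration, not the author's package), demonstrated on a tiny three-state generator so that it is self-contained.

#include <stdio.h>

#define N 3          /* number of states of the example chain */

/* Solve p Q = 0, sum(p) = 1 by power iteration on P = I + Q / Lambda. */
static void stationary(const double Q[N][N], double p[N], int iters) {
    double Lambda = 0.0;
    for (int i = 0; i < N; i++)
        if (-Q[i][i] > Lambda) Lambda = -Q[i][i];    /* uniformization rate */
    Lambda *= 1.1;                                   /* strictly larger is safer */

    for (int i = 0; i < N; i++) p[i] = 1.0 / N;      /* uniform starting vector */

    for (int it = 0; it < iters; it++) {
        double next[N] = {0.0};
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                next[j] += p[i] * ((i == j) + Q[i][j] / Lambda);  /* p <- p P */
        double sum = 0.0;
        for (int j = 0; j < N; j++) sum += next[j];
        for (int j = 0; j < N; j++) p[j] = next[j] / sum;         /* normalize */
    }
}

int main(void) {
    /* tiny birth-death example: arrivals at rate 1, services at rate 2 */
    const double Q[N][N] = {
        {-1.0,  1.0,  0.0},
        { 2.0, -3.0,  1.0},
        { 0.0,  2.0, -2.0}
    };
    double p[N];
    stationary(Q, p, 1000);
    printf("p = (%.4f, %.4f, %.4f)\n", p[0], p[1], p[2]);  /* ~ (4/7, 2/7, 1/7) */
    return 0;
}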

Specialized software for the solution of nonsymmetrical linear systems of equations by iterative methods was created by the author. The package is written entirely in the C programming language, and its data structures are managed dynamically. This package allows one to efficiently calculate the steady-state probability vectors of Markovian models and automatically generates more than twenty different performance measures for the analyzed linked-in-series queuing network.
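As a reminder of how a few such measures follow from the stationary vector (these are the standard formulas, not ones quoted from the package, and we assume here that a task blocking the main server is counted in j), for the series tandem the second server utilization, the effective throughput, the mean number of tasks assigned to the second server and, by Little's law, the mean time spent there are:

$$ \rho^{B} = \sum\limits_{i} {\sum\limits_{j \ge 1} {p_{i,j} } } ,\quad \quad \lambda_{eff} = \mu^{B} {\cdot}\rho^{B} ,\quad \quad \bar{v}^{B} = \sum\limits_{i} {\sum\limits_{j} {j{\cdot}p_{i,j} } } ,\quad \quad \bar{t}^{B} = \bar{v}^{B} /\lambda_{eff} $$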

4 Tandem as a Rerouting Network

As in Sect. 3, let us consider the Internet two-server tandem shown in Fig. 2 as a three-station queuing network with blocking. The state diagram of this rerouting network is presented in Fig. 4. It shows all possible non-blocked states (marked by ovals) as well as the blocking states (marked by rectangles). Based on an analysis of this state space diagram, the process of constructing the corresponding differential-difference equations (differential in t and difference in the state (i,j)) can be divided into several independent steps, which describe similar, repeatable schemas.

Fig. 4. State transition diagram for the rerouting network

These equations are:

(a)

    for states without blocking factor:

    $$ \begin{aligned} p^{{\prime }}_{0,0} \left( t \right) & = - \lambda {\cdot}p_{0,0} \left( t \right) + \mu^{A} \sigma {\cdot}p_{1,0} \left( t \right) \\ p^{{\prime }}_{0,j} \left( t \right) & = - \left( {\lambda \, + \mu^{B} } \right){\cdot}p_{0,j} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{1,j - 1} \left( t \right) + \mu^{A} \sigma {\cdot}p_{1,j} \left( t \right)\quad {\text{for}}\,\,j = 1, \ldots ,m2 + 1 \\ p^{{\prime }}_{i,0} \left( t \right) & = - \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)} \right){\cdot}p_{i,0} \left( t \right) + \lambda {\cdot} \, p_{i - 1,0} \left( t \right) + \mu^{B} {\cdot}p_{i - 1,1} \left( t \right) + \mu^{A} \sigma {\cdot}p_{i + 1,0} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1 \\ p^{{\prime }}_{i,j} \left( t \right) & = - \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right){\cdot}p_{i,j} \left( t \right) + \lambda {\cdot}p_{i - 1,j} \left( t \right) + \mu^{B} {\cdot}p_{i - 1,j + 1} \left( t \right) \\ & \quad + \mu^{A} \sigma {\cdot} \, p_{i + 1,j} \left( t \right) \, + \mu^{A} \left( {1 - \sigma } \right){\cdot} \, p_{i + 1,j - 1} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1,j = 1, \ldots , m2 + 1 \\ p^{{\prime }}_{m1 + 1,0} \left( t \right) & = - \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)} \right){\cdot}p_{m1 + 1,0} \left( t \right) + \lambda {\cdot}p_{m1,0} \left( t \right) + \mu^{B} {\cdot} \, p_{m1,1} \left( t \right) \\ & \quad + \mu^{A} \sigma {\cdot}p_{m1 + 2,0} \left( t \right) + \mu^{A} \sigma {\cdot}p_{m1 + 3,0} \left( t \right) \\ p^{{\prime }}_{m1 + 1,j} \left( t \right) & = - \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right){\cdot}p_{m1 + 1,j} \left( t \right) + \lambda {\cdot}p_{m1,j} \left( t \right) + \mu^{B} {\cdot}p_{m1,j + 1} \left( t \right) \\ & \quad + \mu^{A} \sigma {\cdot}p_{m1 + 2,j} \left( t \right) + \mu^{A} \sigma {\cdot}p_{m1 + 3,j} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{m1 + 3,j - 1} \left( t \right)\quad {\text{for}}\,\,j = 1, \ldots ,m2 \\ p^{{\prime }}_{m1 + 1,m2 + 1} \left( t \right) & = - \left( {\lambda + \mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right) + \mu^{B} } \right){\cdot}p_{m1 + 1,m2 + 1} \left( t \right) + \lambda {\cdot}p_{m1,m2 + 1} \left( t \right) \\ & \quad + \mu^{B} {\cdot}p_{m1,m2 + 2} \left( t \right) + \mu^{A} \sigma {\cdot}p_{m1 + 2,m2 + 1} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{m1 + 3,m2} \left( t \right) \\ \end{aligned} $$
    (7)
(b)

    for states with Main Server blocking:

    $$ \begin{aligned} p^{{\prime }}_{0,m2 + 2} \left( t \right) & = - \left( {\lambda + \mu^{B} } \right){\cdot}p_{0,m2 + 2} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{1,m2 + 1} \left( t \right) \\ p^{{\prime }}_{i,m2 + 2} \left( t \right) & = - \left( {\lambda + \mu^{B} } \right){\cdot}p_{i,m2 + 2} \left( t \right) + \lambda {\cdot}p_{i - 1,m2 + 2} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{i + 1,m2 + 1} \left( t \right)\quad {\text{for}}\,\,i = 1, \ldots ,m1 \\ \end{aligned} $$
    (8)
(c)

    for states with both Internet source and Main Server simultaneous blocking:

    $$ p^{{\prime }}_{m1 + 1,m2 + 2} \left( t \right) = - \mu^{B} {\cdot}p_{m1 + 1,m2 + 2} \left( t \right) + \lambda {\cdot}p_{m1,m2 + 2} \left( t \right) + \mu^{A} \left( {1 - \sigma } \right){\cdot}p_{m1 + 2,m2 + 1} \left( t \right) $$
    (9)
(d)

    for states with Source blocking:

    $$ \begin{aligned} p^{{\prime }}_{m1 + 2,j} \left( t \right) & = - (\mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)){\cdot}p_{m1 + 2,j} \left( t \right) + \lambda {\cdot}p_{m1 + 1,j} \left( t \right)\quad {\text{for}}\,\,j = 0, \ldots ,m2 \\ p^{{\prime }}_{m1 + 2,m2 + 1} \left( t \right) & = - (\mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)){\cdot}p_{m1 + 2,m2 + 1} \left( t \right) + \lambda {\cdot}p_{m1 + 1,m2 + 1} \left( t \right) \\ & \quad + \mu^{B} {\cdot}p_{m1 + 1,m2 + 2} \left( t \right) \\ \end{aligned} $$
    (10)
(e)

    for states with Second Server blocking:

    $$ p^{{\prime }}_{m1 + 3,j} \left( t \right) = - (\mu^{A} \sigma + \mu^{A} \left( {1 - \sigma } \right)){\cdot}p_{m1 + 3,j} \left( t \right) + \mu^{B} {\cdot}p_{m1 + 1,j + 1} \left( t \right) \quad {\text{for}}\,\,j = 0, \ldots ,m2 $$
    (11)

The solution of these equations for the equilibrium states, if it exists, must satisfy Formulas (5) and (6), which leads to the set of stationary equations. In this case the rerouting queuing network with blocking is again formulated as a Markov process, and the stationary probability vector can be obtained from Eqs. (7)–(11) together with (5) and (6) using numerical methods for linear systems of equations. Dedicated software for the solution of such linear systems was created by the author; it allows one to efficiently calculate the steady-state probability vectors and automatically generates more than twenty different performance measures for the analyzed queuing network.
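As for the series tandem, the rates behind Eqs. (7)–(11) can be generated mechanically. The C fragment below is a minimal sketch, ours rather than the author's software, of the rules for the interior (non-boundary, non-blocking) states only: an arrival adds a task to the main server, a main server completion departs with probability σ or joins the second server with probability 1 − σ, and a second server completion feeds the task back to the main server. The boundary and blocking states of Eqs. (8)–(11), including the extra index value i = m1 + 3 used for second server blocking, require the additional special cases spelled out in those equations and are omitted here.

#include <stdio.h>

/* Interior transition rates of the rerouting tandem (Fig. 2), valid for
 * 1 <= i <= m1 and 1 <= j <= m2 + 1; they reproduce the in/out terms of
 * Eq. (7).  Boundary and blocking states (Eqs. (8)-(11)) are not handled. */
double rate_rerouting_interior(int i, int j, int ni, int nj,
                               double lambda, double muA, double muB,
                               double sigma) {
    if (ni == i + 1 && nj == j)     return lambda;               /* arrival                     */
    if (ni == i - 1 && nj == j)     return muA * sigma;          /* main service, task departs  */
    if (ni == i - 1 && nj == j + 1) return muA * (1.0 - sigma);  /* main service, to 2nd server */
    if (ni == i + 1 && nj == j - 1) return muB;                  /* 2nd service, feedback       */
    return 0.0;
}

int main(void) {
    /* sigma, muA, muB as in Sect. 5; lambda is one value from the studied range */
    const double lambda = 3.0, muA = 16.0, muB = 5.0, sigma = 0.5;
    printf("(2,2) -> (1,3): %.1f\n",
           rate_rerouting_interior(2, 2, 1, 3, lambda, muA, muB, sigma));
    printf("(2,2) -> (3,1): %.1f\n",
           rate_rerouting_interior(2, 2, 3, 1, lambda, muA, muB, sigma));
    return 0;
}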

5 Numerical Results

In this section, to demonstrate our analysis of the two service strategies in the different Internet two-server tandem configurations presented in Sects. 3 and 4, we have performed numerous calculations. We concentrate on several important performance descriptors, such as the probabilities that the individual servers or the Internet source are blocked, various time measures, and the buffer occupancy parameters, as the inter-arrival rate λ from the Internet source (users) to the tandems is varied within a range from 1.0 to 10.0 per time unit. To demonstrate this, the following tandem parameters were chosen: the service rates at the main server and the second server are equal to μ A = 8.0 and μ B = 5.0, respectively. The buffer capacities are chosen as m1 = 40 and m2 = 30. The departure probability σ for the rerouting tandem configuration is chosen as 0.5, and in this configuration the main server service rate is chosen as μ A = 16.0 per time unit. This is necessary to guarantee the same main server utilization as in the series tandem model: the mean service rate for the rerouting model must equal the service rate of the series model divided by the departure probability σ.
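The reason is that, because every task leaving the second server feeds back to the main server, a task visits the main server a geometrically distributed number of times with mean 1/σ. Neglecting blocking, equating the nominal main server utilizations of the two configurations gives the following simple check of the chosen values:

$$ \rho^{A} = \frac{\lambda /\sigma }{{\mu^{A}_{rerouting} }} = \frac{\lambda }{{\mu^{A}_{series} }}\;\; \Rightarrow \;\;\mu^{A}_{rerouting} = \frac{{\mu^{A}_{series} }}{\sigma } = \frac{8.0}{0.5} = 16.0 $$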

Based on the chosen parameters, the following results were obtained, and the majority of them are presented in Tables 1, 2 and 3. In all of these tables, the columns marked in italic contain the results for the first tandem configuration, i.e. the open linked series servers (first service strategy). The columns marked in bold contain the results for the second tandem configuration, i.e. the rerouting tandem (second service strategy).

Table 1. Measures of effectiveness – the probabilities
Table 2. Measures of effectiveness – the responses (the time parameters)
Table 3. Measures of effectiveness – the occupation parameters

In the first table, λ is the inter-arrival rate from the Internet source to both tandems, Idle-s and Idle-r are the idle tandem probabilities for the series and rerouting tandems, Bl-M-s and Bl-M-r are the main server blocking probabilities for the series and rerouting tandems, and Bl-S-s and Bl-S-r are the source blocking probabilities for the series and rerouting tandems.

In the second table, λ-decl is the declared inter-arrival input stream intensity from the Internet source to the tandems, λ-s-eff and λ-r-eff are the effective input stream intensities for the series and rerouting tandems, w-s and w-r are the mean waiting times for the series and rerouting tandems, and ro-s and ro-r are the second server utilization factors for the series and rerouting tandems.

In the next table, λ is the inter-arrival rate from the Internet source to the tandems, v-M-s and v-M-r are the mean numbers of tasks in the main server buffer for the series and rerouting tandems, v-S-s and v-S-r are the mean numbers of tasks in the second server buffer for the series and rerouting tandems, and ro-s and ro-r are the main server utilization factors for the series and rerouting tandems.

The results of the experiments clearly show that the effect of a properly chosen service strategy in an Internet tandem network must be taken into account when analyzing the performance of such computer networks. The calculations also evidently show that the blocking phenomenon must be taken into account, because variation of the inter-arrival rate drastically changes the main performance parameters. Which tandem configuration and service strategy is better? It depends on which measures or time parameters are preferred in a given analysis and application of the obtained results.

6 Conclusions

An approach to comparing the effectiveness of two service strategies in Internet servers linked in tandem with blocking has been presented. Task blocking probabilities and some other fundamental performance characteristics of such networks are derived, followed by numerical examples. The results confirm the importance of a special treatment for models with blocking, which justifies this research. Moreover, our proposal is useful in designing buffer sizes or channel capacities for a given blocking probability constraint. The results can be used for capacity planning and performance evaluation of real-time computer networks where blocking is present.