
1 Introduction

Queueing systems with different types of restrictions on access to the service station (server) are intensively studied nowadays, in view of their use in modeling many phenomena occurring in technical sciences and economics. Particularly important here are models with a limited maximal number of customers (packets, calls, jobs, etc.), which naturally describe systems with losses due to buffer overflows (buffers of input/output interfaces in TCP/IP routers, accumulating buffers in production systems). Many practical systems that can be described by queueing models implement a mechanism of switching the server off when the system becomes empty; the server is reactivated when the first customer arrives after the period of inactivity. Such a mechanism is often used to save the energy that the server consumes while remaining on standby despite the lack of customers in the system (wireless networks, automated production lines, etc.). Quite often the waking up of the service station (its restart) does not coincide with the start of processing in the “normal” mode: the server may need some, usually random, time to achieve full readiness for work. Assuming randomness of the setup time, such a mechanism can be called a probabilistic waking up of the server. For example, a node of a wireless network operating under the Wi-Fi standard (IEEE 802.11) wakes up regularly just before the beacon frame is sent from the access point [7, 8]. In [6] an M/G/1-type queueing system with server vacations and setup times is used to model the sleeping mode in a cellular network. A similar phenomenon can also be observed, e.g., in production lines: after a restart, a machine needs a certain, often random, time to achieve its full readiness for work. Furthermore, formulae for the stationary waiting time in GI/G/1-type queues with setup times can be found in [2, 3].

2 Mathematical Model

In this section we give the mathematical description of the considered queueing model and introduce the necessary notation and definitions. We deal with the finite \(M/G/1/K\)-type model in which packets (calls, jobs, customers, etc.) arrive according to a Poisson process with rate \(\lambda \) and are processed individually, according to the FIFO service discipline, with service times distributed according to a CDF (=cumulative distribution function) \(F(\cdot ).\) The system capacity is bounded by a non-random value K, i.e. we have a finite buffer with \(K-1\) places and one place reserved for service. Every time the system becomes empty the server is switched off (an idle period begins). Simultaneously with the arrival epoch of the packet entering the empty system, a server setup time begins, which is a generally distributed random variable with a CDF \(G(\cdot ).\) The setup time is needed for the server to reach full readiness for job processing; hence, during setup times the service process is suspended. Let \(f(\cdot )\) and \(g(\cdot )\) be the LSTs (=Laplace-Stieltjes transforms) of the CDFs \(F(\cdot )\) and \(G(\cdot ),\) respectively, i.e. for \(\mathrm{Re}(s)>0\)

$$\begin{aligned}&f(s)\mathop {=}\limits ^{def}\int _{0}^{\infty }e^{-st}dF(t),\quad g(s)\mathop {=}\limits ^{def}\int _{0}^{\infty }e^{-st}dG(t). \end{aligned}$$
(1)
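For computations it is convenient to evaluate the transforms in (1) numerically. A minimal Python sketch is given below; the distributional choices (2-Erlang service with stage rate \(\mu\), exponential setup with rate \(\theta\)) are taken from the numerical example of Sect. 5 and are not imposed by the general model, and the helper name lst_from_density is ours.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative evaluation of the LSTs in (1) by numerical quadrature.
# Assumptions (from the example in Sect. 5, not from the general model):
# 2-Erlang service with stage rate mu, exponential setup with rate theta.
mu, theta = 1000.0, 100.0

def lst_from_density(pdf, s):
    """LST of an absolutely continuous CDF, evaluated through its density."""
    val, _ = quad(lambda t: np.exp(-s * t) * pdf(t), 0.0, np.inf)
    return val

f = lambda s: lst_from_density(lambda t: mu**2 * t * np.exp(-mu * t), s)  # service
g = lambda s: lst_from_density(lambda t: theta * np.exp(-theta * t), s)   # setup

s = 50.0
print(f(s), (mu / (mu + s))**2)     # Erlang-2 closed form as a cross-check
print(g(s), theta / (theta + s))    # exponential closed form as a cross-check
```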

Let us denote by X(t) the number of packets present in the system at time t (including the one being processed, if any) and by v(t) the queueing delay (virtual waiting time) at time t, i.e. the time needed for the server to process all packets present at time t or, in other words, the waiting time of a hypothetical (virtual) packet arriving exactly at time t. Introduce the following notation:

$$\begin{aligned}&V_{n}(t,x)\mathop {=}\limits ^{def}{\mathbf {P}}\{v(t)>x\,|\,X(0)=n\},\quad t,x>0,\,0\le n\le K, \end{aligned}$$
(2)

for the transient queueing delay (tail) distribution, conditioned on the initial buffer occupancy. We are interested in an explicit formula for the LT (=Laplace transform) of \(V_{n}(t,x)\) in terms of the “input” characteristics of the system, namely the arrival rate \(\lambda ,\) the system capacity K, and the transforms \(f(\cdot )\) and \(g(\cdot )\) of the service and setup time distributions. We end this section with some additional notation which will be used throughout the paper. Let

$$\begin{aligned}&F^{0*}(t)=1,F^{k*}(t)=\int _{0}^{t}F^{(k-1)*}(t-y)dF(y),\quad k\ge 1,\,t>0, \end{aligned}$$
(3)

and introduce the notation \(\overline{H}(t)\mathop {=}\limits ^{def}1-H(t),\) where \(H(\cdot )\) is an arbitrary CDF. Moreover, let \(I\{A\}\) denote the indicator of the random event \(A\).
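The convolution powers in (3) and the corresponding tails \(\overline{F}^{k*}\) appear repeatedly below (see (7), (10), (17)). A minimal numerical sketch, assuming for illustration the 2-Erlang service time of Sect. 5, computes them on a grid by repeated convolution of the density; the helper name conv_power_tail and the grid parameters are ours.

```python
import numpy as np
from scipy.stats import gamma

# Grid approximation of the tails 1 - F^{k*}(t) defined via (3).
# Assumption (illustration only): 2-Erlang service density with stage rate mu,
# as in the numerical example of Sect. 5.
mu, h, T = 1000.0, 1e-5, 0.05            # stage rate, grid step [s], horizon [s]
t = np.arange(0.0, T, h)
dens = mu**2 * t * np.exp(-mu * t)       # Erlang-2 density

def conv_power_tail(dens, k, h):
    """Return the grid values of 1 - F^{k*}(t) for k >= 1 by repeated convolution."""
    fk = dens.copy()
    for _ in range(k - 1):
        fk = np.convolve(fk, dens)[: dens.size] * h   # density of the k-fold sum
    return 1.0 - np.minimum(np.cumsum(fk) * h, 1.0)

tail3 = conv_power_tail(dens, 3, h)
# Cross-check: the 3-fold convolution of Erlang-2(mu) is Erlang-6(mu).
print(tail3[600], gamma.sf(t[600], a=6, scale=1.0 / mu))   # close up to grid error
```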

3 Integral Equations for Transient Queueing Delay Distribution

In this section, by using the embedded Markov chain paradigm and the formula of total probability, we build a system of equations for the conditional time-dependent virtual delay distribution defined in (2). Next, we derive the corresponding system for the Laplace transforms.

Assume, firstly, that the system is empty before the opening, so its evolution begins with an idle period and the setup time begins simultaneously with the arrival epoch of the first packet. We can, in fact, distinguish three mutually exclusive random events (a small Monte Carlo sketch illustrating this decomposition follows the list):

  1. the first arrival occurs before t and the setup time also completes before t (we denote this event by \(E_{1}(t)\));

  2. the first packet (call, job, customer, etc.) arrives before t but the setup time completes after t (\(E_{2}(t)\));

  3. the first arrival occurs after time t (\(E_{3}(t)\)).
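As a quick sanity check of this decomposition, the following Monte Carlo sketch estimates the probabilities of the three events for an initially empty system; the exponential setup distribution and the parameter values are illustrative assumptions only.

```python
import numpy as np

# Monte Carlo estimate of P(E_1(t)), P(E_2(t)), P(E_3(t)) for X(0) = 0.
# Assumptions (illustrative): lam as in Sect. 5, exponential setup with rate theta.
rng = np.random.default_rng(0)
lam, theta, t, n = 375.0, 100.0, 0.01, 200_000
arrival = rng.exponential(1.0 / lam, n)                # epoch of the first arrival
setup = rng.exponential(1.0 / theta, n)                # setup duration started at that epoch
p1 = np.mean((arrival < t) & (arrival + setup < t))    # E_1: setup completed before t
p2 = np.mean((arrival < t) & (arrival + setup >= t))   # E_2: setup still running at t
p3 = np.mean(arrival >= t)                             # E_3: no arrival before t
print(p1, p2, p3, p1 + p2 + p3)                        # the three probabilities sum to 1
```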

Let us define

$$\begin{aligned}&V_{0}^{(i)}(t,x)\mathop {=}\limits ^{def}{\mathbf {P}}\{\bigl (v(t)>x\bigr )\cap E_{i}(t)\,|\,X(0)=0\}, \end{aligned}$$
(4)

where \(t,x>0\) and \(i=1, 2, 3.\) So, for example, \(V_{0}^{(3)}(t,x)\) denotes the probability that the queueing delay at time t exceeds x and the first arrival occurs after t, on condition that the system is empty at the opening (at time \(t=0\)). Obviously, we have

$$\begin{aligned} {}&V_{0}(t,x)={\mathbf {P}}\{v(t)>x\,|\,X(0)=0\}=\sum _{i=1}^{3}V_{0}^{(i)}(t,x). \end{aligned}$$
(5)

Let us note that the following representation is true:

$$\begin{aligned} V_{0}^{(1)}(t,x)=&\int _{y=0}^{t}\lambda e^{-\lambda y}dy\int _{u=0}^{t-y}\Biggl [\sum _{i=0}^{K-2}\frac{(\lambda u)^{i}}{i!}e^{-\lambda u}V_{i+1}(t-y-u,x)\nonumber \\&+V_{K}(t-y-u,x)\sum _{i=K-1}^{\infty }\frac{(\lambda u)^{i}}{i!}e^{-\lambda u}\Biggr ]dG(u). \end{aligned}$$
(6)

Let us comment on (6) briefly. The first summand on the right side describes the situation in which the buffer does not become saturated during the setup time, while the second one relates to the case in which a buffer overflow occurs during the setup time. Similarly, taking into consideration the random event \(E_{2}(t),\) we find

$$\begin{aligned}&V_{0}^{(2)}(t,x)=\int _{y=0}^{t}\lambda e^{-\lambda y}\int _{u=t-y}^{\infty } \sum _{i=0}^{K-2}\frac{\bigl [\lambda (t-y)\bigr ]^{i}}{i!}e^{-\lambda (t-y)}\overline{F}^{(i+1)*}(x-y-u+t)dG(u)dy. \end{aligned}$$
(7)

Finally we have, obviously,

$$\begin{aligned}&V_{0}^{(3)}(t,x)=0. \end{aligned}$$
(8)

Referring to (5), we obtain from (6)–(8)

$$\begin{aligned} \begin{aligned} V_{0}(t,x)=&\int _{y=0}^{t}\lambda e^{-\lambda y}dy\int _{u=0}^{t-y}\Biggl [\sum _{i=0}^{K-2}\frac{(\lambda u)^{i}}{i!}e^{-\lambda u}V_{i+1}(t-y-u,x)\\&+V_{K}(t-y-u,x)\sum _{i=K-1}^{\infty }\frac{(\lambda u)^{i}}{i!}e^{-\lambda u}\Biggr ]dG(u)\\&+\int _{y=0}^{t}\lambda e^{-\lambda y}\int _{u=t-y}^{\infty } \sum _{i=0}^{K-2}\frac{\bigl [\lambda (t-y)\bigr ]^{i}}{i!}e^{-\lambda (t-y)}\overline{F}^{(i+1)*}(x-y-u+t)dG(u)dy. \end{aligned} \end{aligned}$$
(9)

Now, let us take into consideration the situation in which the system is not empty initially (at time \(t=0\)), i.e. \(1\le n\le K\). Since successive departure epochs are Markov times in the evolution of the M/G/1-type system (see e.g. [1]), applying the continuous version of the law of total probability with respect to the first departure epoch after \(t=0,\) we get the following system of integral equations:

$$\begin{aligned} V_{n}(t,x) =&\int _{0}^{t}\Biggl [\sum _{i=0}^{K-n-1}\frac{(\lambda y)^{i}}{i!}e^{-\lambda y} V_{n+i-1}(t-y,x)+V_{K-1}(t-y,x)\sum _{i=K-n}^{\infty }\frac{(\lambda y)^{i}}{i!}e^{-\lambda y}\Biggr ]dF(y)\nonumber \\&+I\{1\le n\le K-1\}\sum _{i=0}^{K-n-1}\frac{(\lambda t)^{i}}{i!}e^{-\lambda t}\int _{t}^{\infty }\overline{F}^{(n+i-1)*}(x-y+t)dF(y), \end{aligned}$$
(10)

where \(1\le n\le K.\) The interpretation of the first two summands on the right side of (10) is similar to that of (6)–(7). The last summand on the right side relates to the situation in which the first service completion epoch occurs after time t; it vanishes for \(n=K\), since in that case the queueing delay at time t equals 0: the “virtual” packet arriving at this time is lost because of the buffer overflow. Let us introduce the following notation:

$$\begin{aligned}&\widehat{v}_{n}(s,x)\mathop {=}\limits ^{def}\int _{0}^{\infty }e^{-st}V_{n}(t,x)dt, \end{aligned}$$
(11)

where \(\mathrm{Re}(s)>0\) and \(0\le n\le K.\) Since for \(\mathrm{Re}(s)>0\) we have

$$\begin{aligned}&\int _{t=0}^{\infty }e^{-st}dt\int _{y=0}^{t}\lambda e^{-\lambda y}dy\int _{u=0}^{t-y}\frac{(\lambda u)^{i}}{i!}e^{-\lambda u} V_{j}(t-y-u,x)dG(u)\nonumber \\&\quad =\int _{y=0}^{\infty }\lambda e^{-(\lambda +s)y}dy\int _{u=0}^{\infty }\frac{(\lambda u)^{i}}{i!}e^{-(\lambda +s)u} dG(u)\int _{t=y+u}^{\infty }e^{-s(t-y-u)}\nonumber \\&\qquad \times V_{j}(t-y-u,x)dt=a_{i}(s)\widehat{v}_{j}(s,x), \end{aligned}$$
(12)

where

$$\begin{aligned}&a_{i}(s)\mathop {=}\limits ^{def}\frac{\lambda }{\lambda +s}\int _{0}^{\infty }\frac{(\lambda y)^{i}}{i!}e^{-(\lambda +s)y}dG(y), \end{aligned}$$
(13)

we obtain from (9)

$$\begin{aligned}&\widehat{v}_{0}(s,x)=\sum _{i=0}^{K-2}a_{i}(s)\widehat{v}_{i+1}(s,x)+\widehat{v}_{K}(s,x)\sum _{i=K-1}^{\infty }a_{i}(s) +\eta (s,x), \end{aligned}$$
(14)

where we denote

$$\begin{aligned}&\eta (s,x)\mathop {=}\limits ^{def}\int _{0}^{\infty }e^{-st}V_{0}^{(2)}(t,x)dt\nonumber \\&\quad =\int _{t=0}^{\infty }e^{-(s+\lambda )t}dt\int _{y=0}^{t} \sum _{i=0}^{K-2}\frac{\lambda ^{i+1}(t-y)^{i}}{i!}dy\int _{u=t-y}^{\infty }\overline{F}^{(i+1)*}(x-y-u+t)dG(u). \end{aligned}$$
(15)
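The coefficients \(a_{i}(s)\) of (13) are easy to evaluate numerically once G is fixed. A minimal sketch, assuming (purely for illustration) an exponential setup distribution with rate \(\theta\), computes them by quadrature and compares them with the closed form available in that special case.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Coefficients a_i(s) of (13), assuming an exponential setup CDF G with rate theta
# (an illustrative choice; the paper allows a general G).
lam, theta = 375.0, 100.0

def a(i, s):
    integrand = lambda y: (lam * y)**i / factorial(i) \
        * np.exp(-(lam + s) * y) * theta * np.exp(-theta * y)
    val, _ = quad(integrand, 0.0, np.inf)
    return lam / (lam + s) * val

# For exponential G the integral has the closed form
# lam/(lam+s) * theta * lam^i / (lam+s+theta)^(i+1); use it as a cross-check.
i, s = 3, 20.0
print(a(i, s), lam / (lam + s) * theta * lam**i / (lam + s + theta)**(i + 1))
```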

Similarly, denoting

$$\begin{aligned}&\alpha _{i}(s)\mathop {=}\limits ^{def}\int _{0}^{\infty }e^{-(\lambda +s)t}\frac{(\lambda t)^{i}}{i!}dF(t) \end{aligned}$$
(16)

and

$$\begin{aligned}&\kappa _{n}(s,x)\mathop {=}\limits ^{def}I\{1\le n\le K-1\}\int _{t=0}^{\infty }\sum _{i=0}^{K-n-1} e^{-(s+\lambda )t}\frac{(\lambda t)^{i}}{i!}\int _{t}^{\infty }\overline{F}^{(n+i-1)*}(x-y+t)dF(y)dt, \end{aligned}$$
(17)

where \(\mathrm{Re}(s)>0,\) we transform the equations (10) as follows:

$$\begin{aligned}&\widehat{v}_{n}(s,x)=\sum _{i=0}^{K-n-1}\alpha _{i}(s) \widehat{v}_{n+i-1}(s,x)+\widehat{v}_{K-1}(s,x)\sum _{i=K-n}^{\infty } \alpha _{i}(s)+\kappa _{n}(s,x), \end{aligned}$$
(18)

where \(1\le n\le K.\) Let us define

$$\begin{aligned}&z_{n}(s,x)\mathop {=}\limits ^{def} \widehat{v}_{K-n}(s,x),\quad 0\le n\le K. \end{aligned}$$
(19)

After introducing (19), we obtain from (18) the following equations:

$$\begin{aligned}&\sum _{i=-1}^{n}\alpha _{i+1}(s)z_{n-i}(s,x)-z_{n}(s,x)=\psi _{n}(s,x), \end{aligned}$$
(20)

where \(0\le n\le K-1,\) and the sequence \(\psi _{n}(s,x)\) is defined as follows:

$$\begin{aligned}&\psi _{n}(s,x)\mathop {=}\limits ^{def}\alpha _{n+1}(s)z_{0}(s,x)-z_{1}(s,x) \sum _{i=n+1}^{\infty }\alpha _{i}(s)-\kappa _{K-n}(s,x). \end{aligned}$$
(21)

Similarly, utilizing (19) in (14), we get

$$\begin{aligned}&z_{K}(s,x)=\sum _{i=0}^{K-2}a_{i}(s)z_{K-i-1}(s,x)+z_{0}(s,x) \sum _{i=K-1}^{\infty }a_{i}(s) +\eta (s,x). \end{aligned}$$
(22)

In the next section we obtain a compact-form solution of the system (20) and (22) written in terms of “input” system characteristics and a certain functional sequence defined recursively by coefficients \(\alpha _{i}(s)\), \(i\ge 0.\)

4 Compact Solution for Queueing Delay Transforms

In [4] (see also [5]) the following linear system of equations is investigated:

$$\begin{aligned}&\sum _{i=-1}^{n}\alpha _{i+1}z_{n-i}-z_{n}=\psi _{n},\quad n\ge 0, \end{aligned}$$
(23)

where \(z_{n}\), \(n\ge 0,\) is a sequence of unknowns and \(\alpha _{n}\) and \(\psi _{n}\), \(n\ge 0,\) are known coefficients with \(\alpha _{0}\ne 0.\) It was proved in [4] that each solution of (23) can be written in the following way:

$$\begin{aligned}&z_{n}=CR_{n+1}+\sum _{i=0}^{n}R_{n-i}\psi _{i},\quad n\ge 0, \end{aligned}$$
(24)

where C is a constant and terms of the sequence \((R_{n})\), \(n\ge 0,\) can be computed in terms of \(\alpha _{n}\), \(n\ge 0,\) recursively in the following way:

$$\begin{aligned}&R_{0}=0,R_{1}=\alpha _{0}^{-1},R_{n+1}=R_{1}\bigl (R_{n}-\sum _{i=0}^{n}\alpha _{i+1}R_{n-i}\bigr ),\quad n\ge 1. \end{aligned}$$
(25)
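The recursion (25) is straightforward to implement; the short Python sketch below computes \(R_{0},\ldots ,R_{N}\) from a given list of coefficients \(\alpha _{0},\alpha _{1},\ldots\) (the coefficient values in the usage line are arbitrary and serve only as an illustration).

```python
def R_sequence(alpha, N):
    """Return [R_0, ..., R_N] defined by (25); requires alpha[0] != 0 and len(alpha) > N."""
    R = [0.0, 1.0 / alpha[0]]                          # R_0 = 0, R_1 = 1/alpha_0
    for n in range(1, N):
        s = sum(alpha[i + 1] * R[n - i] for i in range(n + 1))
        R.append(R[1] * (R[n] - s))                    # R_{n+1} = R_1 (R_n - sum)
    return R[: N + 1]

# Arbitrary illustrative coefficients:
print(R_sequence([0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625], 4))
```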

Observe that the system (20) has the same form as (23), but with coefficients \(\alpha _{i}\) and \(\psi _{i}\), \(i\ge 0,\) depending on s and on (s, x), respectively. Thus, the solution of (20) can be derived by using (24). The fact that the number of equations in (20) (in contrast to (23)) is finite allows us to find \(C=C(s,x)\) in explicit form, treating equation (22) as a boundary condition. Hence, we obtain the following formula (see (23)–(25)):

$$\begin{aligned}&z_{n}(s,x)=C(s,x)R_{n+1}(s)+\sum _{i=0}^{n}R_{n-i}(s)\psi _{i}(s,x),\quad n\ge 0, \end{aligned}$$
(26)

where the functional sequence \(\bigl (R_{n}(s)\bigr )\), \(n\ge 0,\) is defined by

$$\begin{aligned}&R_{0}(s)=0,R_{1}(s)=\alpha _{0}^{-1}(s),R_{n+1}(s)=R_{1}(s)\bigl (R_{n}(s)-\sum _{i=0}^{n}\alpha _{i+1}(s)R_{n-i}(s)\bigr ), \end{aligned}$$
(27)

where \(n\ge 1\) and \(\alpha _{i}(s)\) is stated in (16). Taking \(n=0\) in (26), we obtain the following representation:

$$\begin{aligned}&z_{0}(s,x)=C(s,x)R_{1}(s) \end{aligned}$$
(28)

and substituting \(n=1,\) we get

$$\begin{aligned} z_{1}(s,x)&=C(s,x)R_{2}(s)+R_{1}(s)\psi _{0}(s,x)\nonumber \\&=C(s,x)R_{2}(s)+R_{1}(s)\Bigl (\alpha _{1}(s)R_{1}(s)C(s,x)-z_{1}(s,x)\sum _{i=1}^{\infty } \alpha _{i}(s)\Bigr ), \end{aligned}$$
(29)

since \(\kappa _{K}(s,x)=0.\) From (29) we obtain

$$\begin{aligned}&z_{1}(s,x)=\theta (s)C(s,x)\bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr ), \end{aligned}$$
(30)

where

$$\begin{aligned}&\theta (s)\mathop {=}\limits ^{def}\Bigl [1+R_{1}(s)\sum _{i=1}^{\infty }\alpha _{i}(s)\Bigr ]^{-1} =\frac{f(\lambda +s)}{f(s)}. \end{aligned}$$
(31)
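The identity (31) can be verified numerically. The sketch below assumes the 2-Erlang service time of Sect. 5, for which (16) gives the closed form \(\alpha _{i}(s)=(i+1)\lambda ^{i}\mu ^{2}/(\lambda +s+\mu )^{i+2}\); both this distributional choice and the truncation level of the infinite sum are illustrative assumptions.

```python
# Numerical check of the identity theta(s) = f(lambda+s)/f(s) in (31).
# Assumption (illustrative, matching Sect. 5): 2-Erlang service with stage rate mu,
# for which (16) yields alpha_i(s) = (i+1) lam^i mu^2 / (lam+s+mu)^(i+2).
lam, mu, s = 375.0, 1000.0, 20.0

def alpha(i, s):
    r = lam / (lam + s + mu)
    return (i + 1) * r**i * mu**2 / (lam + s + mu)**2

f = lambda z: (mu / (mu + z))**2                    # Erlang-2 LST
R1 = 1.0 / alpha(0, s)                              # R_1(s) = 1/alpha_0(s)
tail = sum(alpha(i, s) for i in range(1, 400))      # truncated sum_{i>=1} alpha_i(s)
print(1.0 / (1.0 + R1 * tail), f(lam + s) / f(s))   # the two sides of (31) agree
```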

Now the formulae (28) and (30)–(31) allow us to write the terms of the functional sequence \(\bigl (\psi _{n}(s,x)\bigr )\), \(n\ge 0\) (see (21)), as functions of \(C(s,x)\). In order to find the representation for \(C(s,x)\), we rewrite formula (22), utilizing the identities (21), (26), (28) and (30). We obtain

$$\begin{aligned} z_{K}(s,x)=&\sum _{i=1}^{K-1}a_{K-i-1}(s)\Bigl [C(s,x)R_{i+1}(s)+\sum _{j=0}^{i}R_{i-j}(s)\psi _{j}(s,x)\Bigr ]\nonumber \\&+C(s,x)R_{1}(s)\sum _{i=K-1}^{\infty }a_{i}(s)+\eta (s,x) =\sum _{i=1}^{K-1}a_{K-i-1}(s)\Bigl [C(s,x)R_{i+1}(s)\nonumber \\&+\sum _{j=0}^{i}R_{i-j}(s)\Bigl (\alpha _{j+1}(s)z_{0}(s,x)-z_{1}(s,x)\sum _{r=j+1}^{\infty } \alpha _{r}(s)-\kappa _{K-j}(s,x)\Bigr )\Bigr ]\nonumber \\&+C(s,x)R_{1}(s)\sum _{i=K-1}^{\infty }a_{i}(s)+\eta (s,x) =C(s,x)\Biggl \{\sum _{i=1}^{K-1}a_{K-i-1}(s)\Bigl [R_{i+1}(s)+\sum _{j=0}^{i}R_{i-j}(s)\nonumber \\&\times \Bigl (R_{1}(s)\alpha _{j+1}(s) -\theta (s)\bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr )\sum _{r=j+1}^{\infty }\alpha _{r}(s)\Bigr )\Bigr ] +R_{1}(s)\sum _{i=K-1}^{\infty }a_{i}(s)\Biggr \}\nonumber \\&-\sum _{i=1}^{K-1}a_{K-i-1}(s)\sum _{j=1}^{i}R_{i-j}(s)\kappa _{K-j}(s,x) +\eta (s,x)=\varPsi _{1}(s)C(s,x)+\chi _{1}(s,x), \end{aligned}$$
(32)

where we denote

$$\begin{aligned} \varPsi _{1}(s)\mathop {=}\limits ^{def}&\sum _{i=1}^{K-1}a_{K-i-1}(s) \Bigl [R_{i+1}(s)+\sum _{j=0}^{i}R_{i-j}(s)\Bigl (R_{1}(s)\alpha _{j+1}(s)\nonumber \\&-\theta (s)\bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr ) \sum _{r=j+1}^{\infty }\alpha _{r}(s)\Bigr )\Bigr ] +R_{1}(s)\sum _{i=K-1}^{\infty }a_{i}(s) \end{aligned}$$
(33)

and

$$\begin{aligned}&\chi _{1}(s,x)\mathop {=}\limits ^{def}-\sum _{i=1}^{K-1}a_{K-i-1}(s) \sum _{j=1}^{i}R_{i-j}(s)\kappa _{K-j}(s,x)+\eta (s,x). \end{aligned}$$
(34)

Finally, let us substitute \(n=K\) in (26) and apply the formulae (21), (28) and (30). We get

$$\begin{aligned} z_{K}(s,x)&=C(s,x)R_{K+1}(s)+\sum _{i=0}^{K}R_{K-i}(s)\Biggl \{\alpha _{i+1}(s) R_{1}(s)C(s,x)\nonumber \\&-\theta (s)C(s,x)\bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr ) \sum _{j=i+1}^{\infty }\alpha _{j}(s)-\kappa _{K-i}(s,x)\Biggr \}\nonumber \\&=C(s,x)\Biggl \{R_{K+1}(s)+\sum _{i=0}^{K}R_{K-i}(s)\Bigl [\alpha _{i+1}(s)R_{1}(s) -\theta (s)\bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr )\nonumber \\&\times \sum _{j=i+1}^{\infty }\alpha _{j}(s)\Bigr ]\Biggr \}-\sum _{i=1}^{K}R_{K-i}(s) \kappa _{K-i}(s,x) =\varPsi _{2}(s)C(s,x)+\chi _{2}(s,x), \end{aligned}$$
(35)

where

$$\begin{aligned}&\varPsi _{2}(s)\mathop {=}\limits ^{def}R_{K+1}(s)+ \sum _{i=0}^{K}R_{K-i}(s)\Bigl [\alpha _{i+1}(s)R_{1}(s)-\theta (s) \bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\bigr )\sum _{j=i+1}^{\infty }\alpha _{j}(s)\Bigr ] \end{aligned}$$
(36)

and

$$\begin{aligned}&\chi _{2}(s,x)\mathop {=}\limits ^{def}-\sum _{i=1}^{K}R_{K-i}(s)\kappa _{K-i}(s,x). \end{aligned}$$
(37)
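To illustrate how the pieces fit together computationally, the sketch below evaluates \(\varPsi _{2}(s)\) of (36) for the 2-Erlang service time of Sect. 5, using the recursion (27) and a truncated \(\alpha\)-tail sum; the value of K and the truncation level are illustrative assumptions, and \(\chi _{2}(s,x)\) would additionally require the functions \(\kappa _{i}(s,x)\) of (17).

```python
# Illustrative evaluation of Psi_2(s) from (36), assuming 2-Erlang service
# (stage rate mu, as in Sect. 5), an illustrative capacity K and a truncation
# level N_TRUNC for the infinite alpha-tail sums.
lam, mu, s, K, N_TRUNC = 375.0, 1000.0, 20.0, 7, 400

def alpha(i):
    r = lam / (lam + s + mu)
    return (i + 1) * r**i * mu**2 / (lam + s + mu)**2   # closed form of (16)

R = [0.0, 1.0 / alpha(0)]                               # recursion (27)
for n in range(1, K + 1):
    R.append(R[1] * (R[n] - sum(alpha(i + 1) * R[n - i] for i in range(n + 1))))

theta_s = 1.0 / (1.0 + R[1] * sum(alpha(i) for i in range(1, N_TRUNC)))   # (31)
tail = lambda j: sum(alpha(r) for r in range(j, N_TRUNC))                 # sum_{r>=j} alpha_r(s)

Psi2 = R[K + 1] + sum(
    R[K - i] * (alpha(i + 1) * R[1]
                - theta_s * (R[2] + alpha(1) * R[1]**2) * tail(i + 1))
    for i in range(K + 1))
print(Psi2)
```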

Comparing the right sides of (32) and (35), we determine \(C(s,x)\) as follows:

$$\begin{aligned}&C(s,x)=\bigl [\varPsi _{1}(s)-\varPsi _{2}(s)\bigr ]^{-1}\bigl [\chi _{2}(s,x)-\chi _{1}(s,x)\bigr ]. \end{aligned}$$
(38)

Now, from the formulae (19), (26) and (38), we obtain the following main result:

Theorem 1

The representation for the LT of the conditional transient queueing delay distribution in the M/G/1/K-type model with generally distributed setup times is the following:

$$\begin{aligned} \begin{aligned} \widehat{v}_{n}(s,x)=&\int _{0}^{\infty }e^{-st}{\mathbf {P}}\{v(t)>x\,|\,X(0)=n\}dt =\frac{\chi _{2}(s,x)-\chi _{1}(s,x)}{\varPsi _{1}(s)-\varPsi _{2}(s)} \Biggl \{R_{K-n+1}(s)\\&+\sum _{i=0}^{K-n}R_{K-n-i}(s)\Bigl [\alpha _{i+1}(s)R_{1}(s) -\theta (s)\Bigl (R_{2}(s)+\alpha _{1}(s)R_{1}^{2}(s)\Bigr )\sum _{j=i+1}^{\infty } \alpha _{j}(s)\Bigr ]\Biggr \}\\&-\sum _{i=0}^{K-n}R_{K-n-i}(s)\kappa _{K-i}(s,x), \end{aligned} \end{aligned}$$
(39)

where the formulae for \(\alpha _{i}(s)\), \(\kappa _{i}(s,x)\), \(R_{i}(s)\), \(\theta (s)\), \(\varPsi _{1}(s)\), \(\chi _{1}(s,x)\), \(\varPsi _{2}(s)\) and \(\chi _{2}(s,x)\) are given in (16), (17), (27), (31), (33), (34), (36) and (37), respectively.

5 Numerical Example

Let us consider a node of a wireless sensor network with a buffer of size 6 packets, to which a stream of packets of average size 100 B arrives according to a Poisson process with intensity 300 Kb/s. Hence \(\lambda =375\) packets per second arrive at the node and the mean interarrival time between successive packets equals about 2.7 ms. Subsequently, assume that packets are transmitted at 400 Kb/s and that the service time has a 2-Erlang distribution with parameter \(\mu = 1000\), which gives a mean processing time of 2 ms. Moreover, assume that the radio transmitter of the node is switched off during an idle period and needs an exponentially distributed setup time to become ready for processing. Consider the cases in which the mean setup time equals 1, 10, and 100 ms, respectively. The probabilities \({\mathbf {P}}\{v(t)>x|X(0)=0\}\) for \(x=0.001\) s and \(x=0.01\) s are presented in Fig. 1. The figures show that the analytical results agree with process-based discrete-event simulation (DES) results.
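The parameter values above can be cross-checked with a few lines of arithmetic (the variable names are ours):

```python
# Consistency of the traffic parameters used in this example.
packet_bits = 100 * 8                    # 100 B packets
offered_load_bps = 300e3                 # 300 Kb/s Poisson input
link_speed_bps = 400e3                   # 400 Kb/s transmission speed
lam = offered_load_bps / packet_bits     # arrival rate: 375 packets/s
print(lam, 1000.0 / lam)                 # 375.0 and the mean interarrival time ~2.67 ms
mu = 1000.0                              # stage rate of the 2-Erlang service time
print(2000.0 / mu, 1000.0 * packet_bits / link_speed_bps)   # mean service time: 2 ms both ways
```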

Fig. 1. Probabilities \({\mathbf {P}}\{v(t)>x|X(0)=0\}\) for \(x=0.001\) (a) and \(x=0.01\) (b), where the mean setup time equals 1 ms (solid line), 10 ms (dashed line) and 100 ms (dot-dashed line). Bold black lines and thin green lines correspond to analytical and DES results, respectively (Color figure online)