On packet scheduling with adversarial jamming and speedup
Abstract
In Packet Scheduling with Adversarial Jamming, packets of arbitrary sizes arrive over time to be transmitted over a channel in which instantaneous jamming errors occur at times chosen by the adversary and not known to the algorithm. The transmission taking place at the time of jamming is corrupt, and the algorithm learns this fact immediately. An online algorithm maximizes the total size of packets it successfully transmits, and the goal is to develop an algorithm with the lowest possible asymptotic competitive ratio, where the additive constant may depend on packet sizes. Our main contribution is a universal algorithm that works for any speedup and any packet sizes and, unlike previous algorithms for the problem, does not need to know these parameters in advance. We show that this algorithm guarantees 1-competitiveness with speedup 4, making it the first known algorithm to maintain 1-competitiveness with a moderate speedup in the general setting of arbitrary packet sizes. We also prove a lower bound of \(\phi +1\approx 2.618\) on the speedup of any 1-competitive deterministic algorithm, showing that our algorithm is close to the optimum. Additionally, we formulate a general framework for analyzing our algorithm locally and use it to show upper bounds on its competitive ratio for speedups in [1, 4) and for several special cases, recovering some previously known results, each of which had a dedicated proof. In particular, our algorithm is 3-competitive without speedup, matching both the (worst-case) performance of the algorithm by Jurdzinski et al. (Proceedings of the 12th workshop on approximation and online algorithms (WAOA), LNCS 8952, pp 193–206, 2015. http://doi.org/10.1007/978-3-319-18263-6_17) and the lower bound by Anta et al. (J Sched 19(2):135–152, 2016. http://doi.org/10.1007/s10951-015-0451-z).
Keywords
Packet scheduling · Adversarial jamming · Online algorithms · Throughput maximization · Resource augmentation

1 Introduction
We study an online packet scheduling model recently introduced by Anta et al. (2016) and extended by Jurdzinski et al. (2015). In our model, packets of arbitrary sizes arrive over time and they are to be transmitted over a single communication channel. The algorithm can schedule any packet of its choice at any time, but cannot interrupt its subsequent transmission. In the scheduling jargon, there is a single machine and no preemptions. There are, however, instantaneous jamming errors or faults at times chosen by the adversary, which are not known to the algorithm. A transmission taking place at the time of jamming is corrupt, and the algorithm learns this fact immediately. The packet whose transmission failed can be retransmitted immediately or at any later time, but the new transmission needs to send the whole packet, i.e., the algorithm cannot resume a transmission that failed.
The objective is to maximize the total size of packets successfully transmitted. In particular, the goal is to develop an online algorithm with the lowest possible competitive ratio, which is the asymptotic worst-case ratio between the total size of packets in an optimal offline schedule and the total size of packets completed by the algorithm on a large instance. (See the next subsection for a detailed explanation of competitive analysis.)
We focus on algorithms with resource augmentation, namely on online algorithms that transmit packets \(s\ge 1\) times faster than the offline optimal solution they are compared against. Such an algorithm is often said to be speed-s, to run at speed s, or to have a speedup of s. As our problem admits a constant competitive ratio already at speed 1, we consider the competitive ratio as a function of the speed. This deviates from previous work, which focused either on the case with no speedup or on the speedup sufficient for ratio 1, ignoring intermediate cases.
1.1 Competitive analysis and its extensions
Competitive analysis focuses on determining the competitive ratio of an online algorithm ALG, which is the supremum over all valid instances I of \(\textsf {OPT}(I)/\textsf {ALG}(I)\), where \(\textsf {OPT}(I)\) is the optimal profit and \(\textsf {ALG}(I)\) is the profit of ALG on instance I.
Note that the optimal solution is to the whole instance. Thus, it can be thought of as being determined by an algorithm that knows the whole instance in advance and has unlimited computational power. For this reason, the optimal solution is sometimes called the “offline optimum”. The name “competitive analysis” was coined by Karlin et al. (1988), but this kind of analysis was applied even before (Graham 1966; Sleator and Tarjan 1985). Since then, competitive analysis has been employed in the study of many online optimization problems, as evidenced by the (now somewhat dated) textbook by Borodin and El-Yaniv (1998). A nice overview of competitive analysis and its many extensions in the scheduling context can be found in a survey by Pruhs (2007).
1.1.1 Asymptotic ratio and additive constant
In some discrete optimization problems, such as bin packing or various coloring problems, the standard notion of competitive analysis is too restrictive. The issue is that in order to attain a competitive ratio relatively close to 1 (or even any constant ratio), an online algorithm must behave in a predictable way when the current optimal value is still small, which makes the algorithm more or less trivial and the ratio somewhat large. To remedy this, the “asymptotic competitive ratio” is often considered, which essentially means that only instances with a sufficiently large optimal value matter. This is often captured by stating that an algorithm is R-competitive if (in our convention) there exists a constant c such that \(R \cdot \textsf {ALG}(I) + c \ge \textsf {OPT}(I)\) holds for every instance I. The constant c is typically required not to depend on the class of instances considered, which makes sense for the aforementioned problems where the optimal value corresponds to the number of bins or colors used, but is still sometimes too restrictive.
This is the case in our problem. Specifically, using an example we show that a deterministic algorithm running at speed 1 can be (constant) competitive only if the additive term in the definition of the competitive ratio depends on the values of the packet sizes, even if there are only two packet sizes. Suppose that a packet of size \(\ell \) arrives at time 0. If the algorithm starts transmitting it immediately at time 0, then at time \(\varepsilon > 0\) a packet of size \(\ell -2\varepsilon \) arrives, the next fault is at time \(\ell -\varepsilon \), and then the schedule ends (i.e., it is not possible to transmit anything later). Thus the algorithm does not complete the packet of size \(\ell \), while the adversary completes a slightly smaller packet of size \(\ell -2\varepsilon \). Otherwise, the algorithm is idle till some time \(\varepsilon > 0\), no other packet arrives, and the next fault is at time \(\ell \), which is also the end of the schedule. In this case, the packet of size \(\ell \) is completed in the optimal schedule, while the algorithm again completes no packet.
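To see the quantitative effect, the construction can be replayed numerically. The following Python sketch uses hypothetical concrete values \(\ell = 1\) and \(\varepsilon = 0.01\) (and speed 1); the function and its branches are our own illustration of the two cases above, not part of the paper.

```python
def adversary_profits(alg_starts_at_zero, l=1.0, eps=0.01):
    """Return (ALG profit, OPT profit) for the two branches of the
    construction above; l and eps are hypothetical concrete values."""
    if alg_starts_at_zero:
        # ALG transmits the size-l packet from time 0; a packet of size
        # l - 2*eps arrives at time eps, the only fault is at l - eps, and
        # then the schedule ends.  ALG's transmission is jammed, while OPT
        # fits the smaller packet exactly into [eps, l - eps].
        alg = l if l <= l - eps else 0.0   # needs l time, only l - eps available
        opt = l - 2 * eps
    else:
        # ALG idles until some time eps > 0; no further packet arrives and
        # the schedule ends at time l, so ALG cannot finish the size-l
        # packet, while OPT transmits it in [0, l].
        alg = l if eps + l <= l else 0.0   # starts too late to finish
        opt = l
    return alg, opt
```

In either branch, the ratio OPT/ALG is unbounded, and scaling \(\ell \) shows that no additive constant independent of the packet sizes can absorb the gap.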
1.1.2 Resource augmentation
Moreover, some problems do not admit competitive algorithms at all or yield counterintuitive results. Again, our problem is an example of the former kind if no additive constant depending on packet sizes is allowed (cf. the aforementioned example). The latter can be observed in the paging problem, where the optimal ratio equals the cache size, seemingly suggesting that the larger the cache size, the worse the performance, regardless of the caching policy. Perhaps for this reason, already Sleator and Tarjan (1985) considered resource augmentation for the paging problem, comparing an online algorithm with cache capacity k to the optimum with cache capacity \(h \le k\). The “resource(s)” depend on the problem at hand. In particular, in the case of scheduling problems, the machine speed is a natural choice; it was introduced in the seminal paper of Kalyanasundaram and Pruhs (2000). The name “resource augmentation” itself was coined in Phillips et al. (2002).
The article of Kalyanasundaram and Pruhs (2000) gives online algorithms that are constant-competitive when their machine runs at a constant speed \(s>1\) for two fundamental single-machine scheduling problems that do not admit constant-competitive algorithms in the standard setting with the machine running at speed 1. One of the two problems is a preemptive variant of real-time scheduling where each job has a release time, deadline, processing time, and weight, and the objective is to maximize the weight of jobs completed by their deadlines. This was followed by numerous studies of similar problems, where one particularly interesting line of research (for the multiple-machine setting) aims at determining the minimum speedup which suffices for competitive ratio 1 (Phillips et al. 2002; Lam et al. 1999, 2004; Chrobak et al. 2003). An up-to-date overview of these still open problems can be found in the thesis of Schewior (2016).
1.2 Previous and related results
Packet Scheduling with Adversarial Jamming was introduced by Anta et al. (2016), who resolve it for two packet sizes: If \(\gamma >1\) denotes the ratio of the two sizes, then the optimal competitive ratio for deterministic algorithms is \((\gamma +\lfloor \gamma \rfloor ){/}\lfloor \gamma \rfloor \), which is always in the range [2, 3). Jurdzinski et al. (2015) extend this by proving that the optimal ratio for the case of multiple (though fixed) packet sizes is given by the same formula for the two packet sizes which maximize it.
Moreover, Jurdzinski et al. (2015) give further results for divisible packet sizes, i.e., instances in which every packet size divides every larger packet size. In particular, they prove that on such instances, speed 2 is sufficient for 1-competitiveness in the resource augmentation setting. (Note that the above formula for the optimal competitive ratio without speedup gives 2 for divisible instances.)
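For concreteness, the optimal two-size ratio \((\gamma +\lfloor \gamma \rfloor ){/}\lfloor \gamma \rfloor \) is easy to evaluate. The snippet below (our own illustration, not from the paper) checks that it always lies in [2, 3), equals 2 exactly when \(\gamma \) is an integer (consistent with the value for divisible instances), and approaches 3 as \(\gamma \) tends to 2 from below.

```python
import math

def two_size_ratio(gamma):
    """Optimal deterministic competitive ratio for two packet sizes whose
    size ratio is gamma > 1 (Anta et al. 2016):
    (gamma + floor(gamma)) / floor(gamma)."""
    f = math.floor(gamma)
    return (gamma + f) / f
```

For example, `two_size_ratio(2.0)` and `two_size_ratio(3.0)` both return 2.0, while `two_size_ratio(1.999)` is just below 3.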
In another work, Anta et al. (2018) consider popular scheduling algorithms and analyze their performance under speed augmentation with respect to three efficiency measures, which they call completed load, pending load, and latency. The first is precisely the objective that we aim to maximize, the second is the total size of the packets available but not yet completed (which we minimize in turn), and the last is the maximum time elapsed from a packet’s arrival till the end of its successful transmission. We note that a 1-competitive algorithm (possibly with an additive constant) for either of the first two objectives is also 1-competitive for the other, but there is no similar relation for larger ratios.
We note that Anta et al. (2016) demonstrate the necessity of instantaneous error feedback by proving that discovering errors upon completed transmission rules out a constant competitive ratio. They also provide improved results for a stochastic online setting.
1.2.1 Multiple channels or machines
The problem we study has been generalized to multiple communication channels, machines, or processors, depending on particular application. The standard assumption, in communication jargon, is that the jamming errors on each channel are independent, and that any packet can be transmitted on at most one channel at any time.
For divisible instances, Jurdzinski et al. (2015) extend their (optimal) 2-competitive algorithm to an arbitrary number of channels. The same setting is studied by Anta et al. (2015), who consider both the completed-load and the pending-load objectives and investigate what speedup is necessary and sufficient for 1-competitiveness with respect to either objective.
Recall that 1-competitiveness for minimizing the total size of pending packets is equivalent to 1-competitiveness for our objective of maximizing the total size of completed packets. In particular, for either objective, Anta et al. (2015) obtain a tight bound of 2 on the speedup for 1-competitiveness for two packet sizes. Moreover, they claim a 1-competitive algorithm with speedup 7/2 for a constant number of sizes and pending (or completed) load, but the proof is incorrect; see Sect. 3.3 for a (single-channel) counterexample.
Georgiou and Kowalski (2015) consider the same problem in a distributed setting, distinguishing between different information models. As communication and synchronization pose new challenges, they restrict their attention to jobs of unit size only and no speedup. On top of efficiency measured by the number of pending jobs, they also consider the standard (in distributed systems) notions of correctness and fairness.
Finally, Garncarek et al. (2017) consider “synchronized” parallel channels that all suffer errors at the same time. Their work distinguishes between “regular” jamming errors and “crashes”, which also cause the algorithm’s state to reset, losing any information stored about past events. They prove that for two packet sizes, as the number of channels tends to infinity, the optimal ratio tends to 4/3 in the former setting and to \(\phi = (\sqrt{5}+1)/2 \approx 1.618\) in the latter.
1.2.2 Randomization
All the aforementioned results, as well as our work, concern deterministic algorithms. In general, randomization often allows an improved competitive ratio. The idea is simply to replace the algorithm’s cost or profit with its expectation in the competitive ratio, but a proper definition is subtle. One may consider the adversary’s “strategies” for creating and solving an instance separately, possibly limiting their powers. Formal considerations lead to more than one adversary model, which may be confusing. As a case in point, Anta et al. (2016) note that their lower bound strategy for two sizes (in our model) applies to randomized algorithms as well, which would imply that randomization provides no advantage. However, their argument requires that the adversary act based on the previous behavior of the algorithm, which depends on the algorithm’s random bits. This is permitted in the adaptive adversary model but not in the far more common oblivious adversary model, where the adversary needs to fix the instance in advance and cannot change it according to the decisions of the algorithm. To the best of our knowledge, randomized algorithms for our problem have never been considered in the oblivious adversary model. For more details and formal definitions of these adversary models, we refer to the article that first distinguished them (Ben-David et al. 1994) or to the textbook on online algorithms (Borodin and El-Yaniv 1998).
1.3 Our results
The major contribution of this paper is a uniform algorithm that we call PrudentGreedy (PG) and describe in Sect. 2.1. Our main result concerns the analysis of the general case with speedup, where we show that speed 4 is sufficient for our algorithm PG to be 1-competitive. The proof is by a complex (non-local) charging argument described in Sect. 4.
However, we start by formulating a simpler (local) analysis framework and applying it to several settings in Sect. 3. In particular, we prove that on general instances, PG achieves the optimal competitive ratio of 3 without speedup, and we also obtain a trade-off between the competitive ratio and the speedup for speeds in [1, 4).
To recover 1-competitiveness at speed 2, and also 2-competitiveness at speed 1, for divisible instances, we have to modify our algorithm slightly, as otherwise we can guarantee 1-competitiveness for divisible instances only at speed 2.5 (see Sect. 3.2.3). This is to be expected, as divisible instances are a very special case. The definition of the modified algorithm for divisible instances and its analysis by our local analysis framework are in Sect. 3.4.
On the other hand, we prove that our algorithm PG is 1-competitive on the far broader class of “well-separated” instances at sufficient speed: if the ratio between any two successive packet sizes (in their sorted list) is at least \(\alpha \ge 1\), our algorithm is 1-competitive whenever its speed is at least \(S_\alpha \), a non-increasing function of \(\alpha \) such that \(\lim _{\alpha \rightarrow \infty } S_\alpha = 2\) (see Sect. 3.2.2 for the precise definition of \(S_\alpha \)).
In Sect. 3.3, we demonstrate that our analyses of the algorithm are mostly tight, i.e., that (a) on general instances, the algorithm is no better than \((1+2/s)\)-competitive for \(s < 2\) and no better than 4/s-competitive for \(s\in [2,4)\), (b) on divisible instances, it is no better than 4/3-competitive for \(s<2.5\), and (c) it is at least 2-competitive for \(s < 2\), even for two divisible packet sizes [example (c) is in Sect. 3.4.1]. See Fig. 1 for a graph of our bounds.
In Sect. 5, we complement these results with two lower bounds on the speed sufficient to achieve 1-competitiveness by a deterministic algorithm. The first proves that even for two divisible packet sizes, speed 2 is required to attain 1-competitiveness, establishing the optimality of our modified algorithm and of that of Jurdzinski et al. (2015) for the divisible case. The second lower bound strengthens the previous construction by showing that for non-divisible instances with more packet sizes, speed \(\phi +1 \approx 2.618\) is needed for 1-competitiveness. Both results hold even if all packets are released simultaneously.
1.3.1 Comparison to previous work
Summarizing, our algorithm PG works well in many settings, which we prove using a versatile local analysis framework (except for our main result in Sect. 4, which requires a more intricate analysis). This contrasts with the results of Jurdzinski et al. (2015), where each upper bound is attained by a dedicated algorithm with an independently crafted analysis. In a sense, this means that their algorithms require knowledge of the speed they are running at. Moreover, the algorithms in Jurdzinski et al. (2015) do require knowledge of all admissible packet sizes. Our algorithm has the advantage that it is completely oblivious, i.e., requires no such knowledge. Furthermore, our algorithm is more appealing as it is significantly simpler and “work-conserving” or “busy”, i.e., it transmits some packet whenever one is pending, which is desirable in practice. In contrast, the algorithms in Jurdzinski et al. (2015) can be unnecessarily idle if there is a small number of pending packets.
2 Algorithms, preliminaries, notations
We begin with some notation. We assume there are k distinct nonzero packet sizes denoted by \(\ell _i\) and ordered such that \(\ell _1<\cdots <\ell _k\). For convenience, we define \(\ell _0=0\). We say that the packet sizes are divisible if \(\ell _i\) divides \(\ell _{i+1}\) for all \(i=1,\ldots ,k-1\). For a packet p, let \(\ell (p)\) denote the size of p. For a set of packets P, let \(\ell (P)\) denote the total size of all the packets in P.
During the execution of an algorithm, at time t, a packet is pending if it is released before or at t and not completed before or at t. At time t, if no packet is running, the algorithm may start any pending packet. As a convention of our model, if a fault (jamming error) happens at time t and this is the completion time of a previously scheduled packet, this packet is considered completed. Also, at the fault time, the algorithm may start any packet, including the one whose transmission has been jammed.
Let \(L_{\textsf {ALG}}(i,Y)\) denote the total size of packets of size \(\ell _i\) completed by an algorithm ALG during a time interval Y. Similarly, \(L_{\textsf {ALG}}({\ge \,}i,Y)\) (resp. \(L_{\textsf {ALG}}({<\,}i,Y)\)) denotes the total size of packets of size at least \(\ell _i\) (resp. less than \(\ell _i\)) completed by an algorithm ALG during a time interval Y. Formally, we define \(L_{\textsf {ALG}}({\ge \,}i,Y)=\sum _{j=i}^k L_{\textsf {ALG}}(j,Y)\) and \(L_{\textsf {ALG}}({<\,}i,Y)=\sum _{j=1}^{i-1} L_{\textsf {ALG}}(j,Y)\). We use notation \(L_{\textsf {ALG}}(Y)\) with a single parameter to denote the size \(L_{\textsf {ALG}}({\ge \,}1,Y)\) of packets of all sizes completed by ALG during Y and notation \(L_\textsf {ALG}\) without parameters to denote the size of all packets of all sizes completed by ALG at any time.
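To make the notation concrete, here is a small, purely illustrative Python helper (the representation of schedules as (completion time, size index) pairs is our own assumption, not the paper's): it evaluates \(L_{\textsf {ALG}}(i,Y)\), \(L_{\textsf {ALG}}({\ge \,}i,Y)\), and \(L_{\textsf {ALG}}({<\,}i,Y)\) for a half-open interval \(Y=(a,b]\).

```python
def completed_load(completions, sizes, i, interval, mode="eq"):
    """L_ALG(i, Y), L_ALG(>= i, Y), or L_ALG(< i, Y) for Y = (a, b].

    completions: (finish_time, size_index) pairs with size_index in 1..k,
    sizes:       [l_1, ..., l_k], sorted increasingly,
    mode:        "eq", "ge", or "lt", selecting which size indices count.
    """
    a, b = interval
    keep = {"eq": lambda j: j == i,
            "ge": lambda j: j >= i,
            "lt": lambda j: j < i}[mode]
    # A packet contributes iff its completion time lies in (a, b].
    return sum(sizes[j - 1] for t, j in completions if a < t <= b and keep(j))
```

Note that \(L_{\textsf {ALG}}(Y)=L_{\textsf {ALG}}({\ge \,}1,Y)\) corresponds to `mode="ge"` with `i=1`.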
By convention, the schedule starts at time 0 and ends at time T, which is a part of the instance unknown to an online algorithm until it is reached. (This is similar to the times of jamming errors, as one can alternatively say that after T the errors are so frequent that no packet is completed.) Algorithm ALG is called R-competitive if there exists a constant A, possibly dependent on k and \(\ell _1\), ..., \(\ell _k\), such that for any instance and its optimal schedule \(\textsf {OPT}\), we have \(L_{\textsf {OPT}}\le R \cdot L_\textsf {ALG}+A\). We remark that in our analyses we show only a crude bound on A.
We denote the algorithm ALG with speedup \(s\ge 1\) by \({\textsf {ALG}(s)}\). The meaning is that in ALG(s), packets of size L need time L / s to process. In the resourceaugmentation variant, we are mainly interested in finding the smallest s such that ALG(s) is 1competitive, compared to \(\textsf {OPT}=\textsf {OPT}(1)\) that runs at speed 1.
2.1 Algorithm PrudentGreedy (PG)
The general idea of the algorithm is that after each error, we start by transmitting packets of small sizes, only increasing the size of packets after a sufficiently long period of uninterrupted transmissions. It turns out that the right tradeoff is to transmit a packet only if it would have been transmitted successfully if started just after the last error. It is also crucial that the initial packet after each error has the right size, namely to ignore small packet sizes if the total size of remaining packets of those sizes is small compared to a larger packet that can be transmitted. In other words, the size of the first transmitted packet is larger than the total size of all pending smaller packets and we choose the largest such size. This guarantees that if no error occurs, all currently pending packets with size equal to or larger than the size of the initial packet are eventually transmitted before the algorithm starts a smaller packet.
We now give the description of our algorithm PrudentGreedy (PG) for general packet sizes, noting that the other algorithm, for divisible sizes, differs only slightly. We divide the execution of the algorithm into phases. Each phase starts with an invocation of the initial step, in which we need to carefully select a packet to transmit, as discussed above. Throughout, for a time t in a phase that started at time \(t_B\), we write \({\text{ rel }}(t)=s\cdot (t-t_B)\) for the total size that the algorithm running at speed s can transmit from the start of the phase until t. The phase ends with a fault, or when there is no pending packet, or when there are pending packets only of sizes larger than the total size of packets completed in the current phase. The periods of idle time, when no packet is pending, do not belong to any phase.
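The numbered steps referenced below (Steps (2)–(4)) can be sketched in Python as follows. This is our reconstruction from the description above and the subsequent proofs, assuming that during a phase the eligibility threshold is \({\text{rel}}(t)=s\cdot (t-t_B)\); it omits arrivals during a phase and Step (1) (idling while nothing is pending), and it is not the authors' reference implementation.

```python
def pick_first(pending):
    """Step (2), start of a phase: take the largest pending size that exceeds
    the total size of all strictly smaller pending packets.  The smallest
    pending size always qualifies, so the choice is well defined (cf. Lemma 1)."""
    best = None
    for p in sorted(set(pending)):
        if p > sum(q for q in pending if q < p):
            best = p
    return best

def pick_next(pending, rel):
    """Step (3), during a phase: take the longest pending packet that would
    already have completed had it been started right after the last fault,
    i.e. one of size at most rel.  None means Step (4): the phase ends."""
    eligible = [p for p in pending if p <= rel]
    return max(eligible) if eligible else None

def run_phase(pending, fault_gap, s=1.0):
    """Run one phase lasting until the next fault, fault_gap time units away.
    `pending` is a mutable list of packet sizes; completed sizes are returned
    in the order they were transmitted."""
    done, t = [], 0.0                  # t = time elapsed since the phase start
    nxt = pick_first(pending)
    while nxt is not None and t + nxt / s <= fault_gap:
        t += nxt / s                   # the transmission finishes before the fault
        pending.remove(nxt)
        done.append(nxt)
        nxt = pick_next(pending, s * t)
    return done
```

For instance, with pending sizes [1, 3] the phase opens with the size-3 packet (the pending smaller total is 1 < 3), matching the rule of ignoring small sizes whose total size is small.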
We first note that the algorithm is well defined, i.e., that it is always able to choose a packet p in Step (2) if any packets are pending. Moreover, if it succeeds in sending p, the length of the phase thus started can be related to the total size of the packets completed in it.
Lemma 1
In Step (2), PG always chooses some packet if it has any pending. Moreover, if PG completes the first packet in the phase, then \(L_{\textsf {PG}(s)}((t_B,t_E])>s\cdot (t_E-t_B)/2\), where \(t_B\) denotes the start of the phase and \(t_E\) its end (by a fault or Step (4)).
Proof
For the first property, note that a pending packet of the smallest size is eligible. For the second property, note that there is no idle time in the phase, and that only the last packet chosen by PG in the phase may not complete due to a jam. By the condition in Step (3), the size of this jammed packet is no larger than the total size of all the packets PG previously completed in this phase (including the first packet chosen in Step (2)), which yields the bound. \(\square \)
The following lemma shows a crucial property of the algorithm. Namely, if packets of size \(\ell _i\) are pending, the algorithm schedules packets of size at least \(\ell _i\) most of the time. Its proof also explains the reasons behind our choice of the first packet in a phase in Step (2) of the algorithm.
Lemma 2
Let a phase of \({\textsf {PG}(s)}\) start at time u and let \(t=u+\ell _i/s\), so that \({\text{ rel }}(t)=\ell _i\).
 (i)
If a packet of size \(\ell _i\) is pending at time u and no fault occurs in (u, t), then the phase does not end before t.
 (ii)
Suppose that \(v>u\) is such that at any time in [u, v) a packet of size \(\ell _i\) is pending and no fault occurs. Then the phase does not end in (u, v) and \(L_{\textsf {PG}(s)}(<i,(u,v])< \ell _i+\ell _{i-1}\). (Recall that \(\ell _0=0\).)
Proof
(i) Suppose for a contradiction that the phase started at u ends at time \(t'<t\). We have \({\text{ rel }}(t')<{\text{ rel }}(t)=\ell _i\). Let \(\ell _j\) be the smallest packet size among the packets pending at \(t'\). As there is no fault, the reason for a new phase has to be that \({\text{ rel }}(t')<\ell _j\), and thus Step (3) does not choose a packet to be scheduled. Also note that any packet started before \(t'\) is completed. This implies, first, that there is a pending packet of size \(\ell _i\), as there was one at time u and there was insufficient time to complete it; thus j is well defined and \(j\le i\). Second, all packets of sizes smaller than \(\ell _j\) pending at u are completed before or at \(t'\), implying that their total size is at most \({\text{ rel }}(t')<\ell _j\). Third, the phase was started at time u by a packet of size smaller than \(\ell _j\). However, this is a contradiction, as a pending packet of the smallest size equal to or larger than \(\ell _j\) satisfied the condition in Step (2) at time u and a packet of size \(\ell _i\ge \ell _j\) was pending at u. (Note that it is possible that no packet of size \(\ell _j\) was pending at u.)
(ii) By (i), the phase that started at u does not end before time t if no fault happens. A packet of size \(\ell _i\) is always pending by the assumption of the lemma, and it is always a valid choice of a packet in Step (3) from time t on. Thus, the phase that started at u does not end in (u, v), and moreover, only packets of sizes at least \(\ell _i\) are started in [t, v). It follows that packets of sizes smaller than \(\ell _i\) are started only before time t, and their total size is thus less than \({\text{ rel }}(t)+\ell _{i-1}=\ell _i+\ell _{i-1}\). \(\square \)
3 Local analysis and results
In this section, we formulate a general method for analyzing our algorithm by comparing, locally within each phase, the size of “large” packets completed by the algorithm and by the adversary. This method simplifies a complicated induction used in Jurdzinski et al. (2015), letting us obtain the same upper bounds of 2 and 3 on the competitive ratio for divisible and unrestricted packet sizes, respectively, at no speedup. Furthermore, we get several new results for non-divisible cases.
For the analysis, let \(s\ge 1\) be the speedup. We fix an instance and its schedules for \({\textsf {PG}(s)}\) and OPT.
3.1 Critical times and master theorem
The common scheme is the following. We introduce a sequence of critical times \(C_k\le C_{k-1}\le \cdots \le C_1\le C_0\), where \(C_0=T\) is the end of the schedule, that satisfy the following two informally stated properties: (1) by time \(C_i\) the algorithm has completed almost all packets of size \(\ell _i\) released before \(C_i\), and (2) in \((C_i,C_{i-1}]\), a packet of size \(\ell _i\) is always pending. Properties (1) and (2) allow us to relate \(L_\textsf {OPT}(i,(0,C_i])\) and \(L_\textsf {OPT}(\ge i,(C_i,C_{i-1}])\), respectively, to their “PG counterparts”. Note that each packet of size \(\ell _i\) completed by OPT is counted in exactly one of these terms: such a packet belongs to exactly one of \(L_\textsf {OPT}(i,(0,C_i])\), \(L_\textsf {OPT}(\ge i,(C_i,C_{i-1}])\), \(L_\textsf {OPT}(\ge i-1,(C_{i-1},C_{i-2}])\), ..., \(L_\textsf {OPT}(\ge 1,(C_{1},C_{0}])\). See Fig. 2 for an illustration. Hence, summing the aforementioned bounds yields R-competitiveness of the algorithm for appropriate R and speed s.
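The charging of OPT's packets to these terms can be made explicit. The sketch below is our own illustration (the list representation C = [C_0, ..., C_k] of the critical times is an assumption): for a packet of size \(\ell _i\) completed by OPT at a given time, it returns the unique term of the sum the packet is charged to.

```python
def charge_term(i, finish, C):
    """For a packet of size l_i completed by OPT at time `finish`, return the
    unique term it contributes to: ("eq", i) for L_OPT(i, (0, C_i]), or
    ("ge", j) for L_OPT(>= j, (C_j, C_{j-1}]) with some j <= i.
    C = [C_0, C_1, ..., C_k] with T = C_0 >= C_1 >= ... >= C_k."""
    if finish <= C[i]:
        return ("eq", i)
    # Otherwise finish lies in exactly one interval (C_j, C_{j-1}] with
    # j <= i, since C[i] < finish <= T = C[0].
    for j in range(i, 0, -1):
        if C[j] < finish <= C[j - 1]:
            return ("ge", j)
    raise ValueError("completion time outside (0, T]")
```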
We first define the notion of i-good times so that they satisfy property (1), and then choose the critical times among their suprema so that those satisfy property (2) as well.
Definition 1
Time t is called i-good if one of the following conditions holds:
 (i)
At time t, no packet of size \(\ell _i\) is pending for \({\textsf {PG}(s)}\),
 (ii)
at time t, algorithm \({\textsf {PG}(s)}\) starts a new phase by scheduling a packet of size larger than \(\ell _i\), or
 (iii)
\(t=0\).
For \(i=0,\ldots ,k\), we define the critical times \(C_i\) as follows:

\(C_0=T\), i.e., it is the end of the schedule.

For \(i=1,\ldots ,k\), \(C_i\) is the supremum of i-good times t such that \(t\le C_{i-1}\).

Note that \(C_i\) itself need not be i-good; however, one of the following holds:

\(C_i\) is i-good and \(C_i=C_{i-1}\),

\(C_i\) is i-good according to condition (ii) or (iii) in Definition 1, which implies that a phase starts at \(C_i\), or

there exists a packet of size \(\ell _i\) pending at \(C_i\); however, any such packet was released exactly at \(C_i\).
First, we bound the total size of packets of size \(\ell _i\) completed before \(C_i\). The proof actually uses only the fact that each \(C_i\) is the supremum of i-good times, which justifies the definition above.
Lemma 3
Let \(s\ge 1\) be the speedup. Then, for any i, it holds that \(L_{\textsf {OPT}}(i,(0,C_i])\le L_{{\textsf {PG}(s)}}(i,(0,C_i])+\ell _k\).
Proof
If \(C_i\) is i-good and satisfies condition (ii) in Definition 1, then by the description of Step (2) of the algorithm, the total size of pending packets of size \(\ell _i\) is less than the size of the scheduled packet, which is at most \(\ell _k\), and the lemma follows.
In all the remaining cases, it holds that PG(s) has completed all the packets of size \(\ell _i\) released before \(C_i\), thus the inequality holds trivially even without the additive term. \(\square \)
Our remaining goal is to bound \(L_\textsf {OPT}(\ge i,(C_i,C_{i-1}])\). We divide \((C_i,C_{i-1}]\) into i-segments by the faults and prove the bounds separately for each i-segment. For the first i-segment, a loose bound suffices, as we can use the additive constant. It is the bound for i-segments started by a fault that is critical, as it determines the competitive ratio; hence, the latter bound depends on the particular setting. We summarize the general method in the following definition and master theorem.
Definition 2
The interval (u, v] is called the initial i-segment if \(u=C_i\) and v is either \(C_{i-1}\) or the first time of a fault after u, whichever comes first.
The interval (u, v] is called a proper i-segment if \(u\in (C_i,C_{i-1})\) is a time of a fault and v is either \(C_{i-1}\) or the first time of a fault after u, whichever comes first.
Theorem 1
(Master Theorem) Suppose that, for some \(R\ge 1\) and speedup \(s\ge 1\), the following two claims hold:
 1. For each \(i=1,\ldots ,k\) and each proper i-segment (u, v] with \(v-u\ge \ell _i\), it holds that $$\begin{aligned} (R-1)L_{{\textsf {PG}(s)}}((u,v])+L_{{\textsf {PG}(s)}}(\ge i,(u,v]) \;\ge \; L_\textsf {OPT}(\ge i,(u,v]). \end{aligned}$$ (3.1)
 2. For the initial i-segment (u, v], it holds that $$\begin{aligned} L_{\textsf {PG}(s)}(\ge i,(u,v]) \;>\; s(v-u)-4\ell _k. \end{aligned}$$ (3.2)
Then \({\textsf {PG}(s)}\) is R-competitive.
Proof
First, note that for a proper i-segment (u, v], u is a fault time. Thus, if \(v-u<\ell _i\), then \(L_\textsf {OPT}(\ge i,(u,v])=0\) and (3.1) is trivial; it follows that (3.1) holds even without the assumption \(v-u\ge \ell _i\).
Consider the initial i-segment (u, v]. We have \(L_\textsf {OPT}(\ge i,(u,v])\le \ell _k+ v-u\), as at most a single packet started before u can be completed after u. Combining this with (3.2) and using \(s\ge 1\), we get \(L_{\textsf {PG}(s)}(\ge i,(u,v])\;>\;s(v-u)-4 \ell _k\;\ge \;v-u-4\ell _k\;\ge \;L_\textsf {OPT}(\ge i,(u,v])-5\ell _k\).
Summing (3.1) over all proper i-segments, adding the bound derived from (3.2) for the initial i-segment, and combining with Lemma 3 for all \(i=1,\ldots ,k\) yields \(L_\textsf {OPT}\le R\cdot L_{\textsf {PG}(s)}+A\) for an additive constant \(A=O(k\ell _k)\), i.e., R-competitiveness. \(\square \)
3.2 Local analysis of PrudentGreedy (PG)
We prove a general lemma which is useful in establishing the preconditions of the Master Theorem. Namely, the first part of the lemma directly implies the precondition (3.2) for the initial isegments. The second part of the lemma forms the foundation for proving the precondition (3.1) for proper isegments, for appropriate R and s, which may depend on the instance class. To obtain tight bounds, this part has to be applied carefully, and sometimes be strengthened by leveraging additional structure of the instance class under consideration.
Lemma 4
 (i)
If (u, v] is the initial i-segment, then \(L_{\textsf {PG}(s)}(\ge i,(u,v])> s(v-u)-4\ell _k\).
 (ii)
If (u, v] is a proper i-segment and \(v-u\ge \ell _i\), then \(L_{\textsf {PG}(s)}((u,v])>s(v-u)/2\) and \(L_{\textsf {PG}(s)}(\ge i,(u,v])>s(v-u)/2-\ell _i-\ell _{i-1}\). (Recall that \(\ell _0=0\).)
Proof
(i) If the phase that starts at u or contains u ends before v, let \(u'\) be its end. Otherwise, let \(u'=u\). We have \(u'\le u+\ell _i/s\), as otherwise any packet of size \(\ell _i\), pending throughout the i-segment by definition, would be an eligible choice in Step (3) of the algorithm, and the phase would not end before v. Using Lemma 2(ii), we have \(L_{\textsf {PG}(s)}(<i,(u',v])<\ell _i+\ell _{i-1}<2\ell _k\). Since at most one packet at the end of the segment is unfinished, we have \(L_{\textsf {PG}(s)}(\ge i,(u,v]) \ge L_{\textsf {PG}(s)}(\ge i,(u',v]) > s(v-u')-3\ell _k \ge s(v-u)-4\ell _k\).
(ii) Let (u, v] be a proper i-segment. Thus, u is a start of a phase that contains at least the whole interval (u, v] by Lemma 2(ii). By the definition of \(C_i\), u is not i-good, implying that the phase starts by a packet of size at most \(\ell _i\). If \(v-u\ge \ell _i\), then the first packet finishes (as \(s\ge 1\)) and thus \(L_{\textsf {PG}(s)}((u,v])>s(v-u)/2\) by Lemma 1. The total size of completed packets smaller than \(\ell _i\) is less than \(\ell _i+\ell _{i-1}\) by Lemma 2(ii), and thus \(L_{\textsf {PG}(s)}(\ge i,(u,v])>s(v-u)/2-\ell _i-\ell _{i-1}\).\(\square \)
3.2.1 General packet sizes
The next theorem gives a trade-off between the competitive ratio of \({\textsf {PG}(s)}\) and the speedup s using our local analysis. While Theorem 6 shows that \({\textsf {PG}(s)}\) is 1-competitive for \(s\ge 4\), here we give a weaker result that reflects the limits of the local analysis. However, for \(s=1\), our local analysis is tight, as already the lower bound from Anta et al. (2016) shows that no algorithm is better than 3-competitive (for packet sizes 1 and \(2-\varepsilon \)). See Fig. 1 for an illustration of our upper and lower bounds on the competitive ratio of \({\textsf {PG}(s)}\).
Theorem 2
\({\textsf {PG}(s)}\) is \(R_s\)-competitive, where:
\(R_s=1+2/s\) for \(s\in [1,4)\),
\(R_s=2/3+2/s\) for \(s\in [4,6)\), and
\(R_s=1\) for \(s\ge 6\).
Proof
Lemma 4(i) implies the condition (3.2) for the initial i-segments. We now prove (3.1) for any proper i-segment (u, v] with \(v-u\ge \ell _i\) and appropriate R. The bound then follows by the Master Theorem.
Since there is a fault at time u, we have \(L_\textsf {OPT}(\ge i,(u,v])\le v-u\).
For \(s\in [1,4)\), by Lemma 4(ii) we get \((2/s)\cdot L_{\textsf {PG}(s)}((u,v])>v-u\ge L_\textsf {OPT}(\ge i,(u,v])\),
which implies (3.1) for \(R=1+2/s\). \(\square \)
3.2.2 Wellseparated packet sizes
We can obtain better bounds on the speedup sufficient for 1-competitiveness if the packet sizes are substantially different. Namely, we call the packet sizes \(\ell _1,\ldots ,\ell _k\) \(\alpha \)-separated if \(\ell _i\ge \alpha \ell _{i-1}\) holds for \(i=2,\ldots ,k\).
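As a trivial illustration (hypothetical helper name), checking \(\alpha \)-separation amounts to a single pass over the sorted sizes; note that divisible instances are automatically 2-separated:

```python
def is_alpha_separated(sizes, alpha):
    """Check that the sorted packet sizes satisfy l_i >= alpha * l_{i-1}."""
    s = sorted(sizes)
    return all(s[i] >= alpha * s[i - 1] for i in range(1, len(s)))
```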
Note that \(S_\alpha \) is decreasing in \(\alpha \), with a single discontinuity at \(\alpha _1\). We have \(S_1=6\), matching the upper bound from Theorem 2. Moreover, \(S_2=3\), i.e., \(\textsf {PG}(3)\) is 1-competitive for 2-separated packet sizes, which includes the case of divisible packet sizes. (However, only \(s\ge 2.5\) is needed in the divisible case, as we show later.) The limit of \(S_\alpha \) for \(\alpha \rightarrow +\infty \) is 2. For \(\alpha <(1+\sqrt{3})/2\approx 1.366\), we get \(S_\alpha >4\), while Theorem 6 shows that \({\textsf {PG}(s)}\) is 1-competitive for \(s\ge 4\). The weaker result of Theorem 3 below reflects the limits of the local analysis.
Theorem 3
Let \(\alpha >1\). If the packet sizes are \(\alpha \)-separated, then \({\textsf {PG}(s)}\) is 1-competitive for any \(s\ge S_\alpha \).
Proof
Let \(X=L_\textsf {OPT}(\ge i,(u,v])\). Note that \(X\le v-u\).
Lemma 4(ii) together with \(\ell _{i-1}\le \ell _i/\alpha \) gives \(L_{\textsf {PG}(s)}(\ge i,(u,v])>M\) for \(M=sX/2-(1+1/\alpha )\ell _i\).
We use the fact that both X and \(L_{\textsf {PG}(s)}(\ge i,(u,v])\) are sums of some packet sizes \(\ell _j\), \(j\ge i\), and thus only some of the values are possible. However, the situation is quite complicated, as for example, \(\ell _{i+1}\), \(\ell _{i+2}\), \(2\ell _i\), \(\ell _i+\ell _{i+1}\) are possible values, but their ordering may vary.
We distinguish several cases based on X and \(\alpha \). We note in advance that the first five cases suffice for \(\alpha <\alpha _1\). Only after completing the proof for \(\alpha <\alpha _1\) do we analyze the additional cases needed for \(\alpha \ge \alpha _1\).
Case (i): \(X=0\). Then (3.4) is trivial.
Case (ii): \(X=\ell _i\). Using \(s\ge 2+2/\alpha \), we obtain \(M\ge (1+1/\alpha )\ell _i-(1+1/\alpha )\ell _i = 0\). Thus, \(L_{\textsf {PG}(s)}(\ge i,(u,v])>M\ge 0\), which implies \(L_{\textsf {PG}(s)}(\ge i,(u,v])\ge \ell _i=X\). Hence, (3.4) holds.
 If \(\alpha \le \phi \), we have$$\begin{aligned} s\ge \frac{4\alpha +2}{\alpha ^2}=2\left( \frac{2}{\alpha }+ \frac{1}{\alpha ^2}\right) \ge 2\left( 1+\frac{1}{\alpha ^2}+\frac{1}{\alpha ^3}\right) , \end{aligned}$$where the last inequality uses \(2/\alpha \ge 1+1/\alpha ^3\), or equivalently \(\alpha ^3 + 1 - 2\alpha ^2\le 0\), which holds for \(\alpha \in [1,\phi ]\), as$$\begin{aligned} \alpha ^3 + 1 - 2\alpha ^2= & {} \alpha ^3 - \alpha ^2 + 1 - \alpha ^2 = \alpha ^2(\alpha - 1) - (\alpha + 1)(\alpha - 1)\\= & {} (\alpha -1)(\alpha ^2-\alpha -1)\le 0. \end{aligned}$$

If on the other hand \(\alpha \ge \phi \), then \(s\ge 2(1+1/\alpha )\ge 2(1+1/\alpha ^2+1/\alpha ^3)\), as \(1/\alpha \ge 1/\alpha ^2 + 1/\alpha ^3\) holds for \(\alpha \ge \phi \).
Proof for \(\alpha <\alpha _1\): We now observe that for \(\alpha <\alpha _1\), we have exhausted all the possible values of X. Indeed, if (v) does not apply, then at most a single packet contributes to X, and one of the cases (i)–(iv) applies, as (iv) covers the case when \(X\ge \ell _{i+2}\), and \(X=\ell _{i+1}\) is covered by (iii) or (v). Thus (3.4) holds and the proof is complete.
Proof for \(\alpha \ge \alpha _1\): We now analyze the remaining cases for \(\alpha \ge \alpha _1\).
Case (viii): \(X=\ell _{i+1}\) and \(\ell _{i+1}>2\ell _i\). We distinguish two subcases depending on the size of the unfinished packet of PG(s) in this phase.
We now observe that we have exhausted all the possible values of X for \(\alpha \ge \alpha _1\). Indeed, if at least two packets contribute to X, either (vi) or (vii) applies. Otherwise, at most a single packet contributes to X, and one of the cases (i)–(iv) or (viii) applies, as (iv) covers the case when \(X\ge \ell _{i+2}\). Thus (3.4) holds. \(\square \)
3.2.3 Divisible packet sizes
Now, we turn briefly to the even more restricted divisible instances considered by Jurdzinski et al. (2015), which are a special case of 2-separated instances. Namely, we improve upon Theorem 3 in Theorem 4 presented below in the following sense: While the former guarantees that PG(s) is 1-competitive on (more general) 2-separated instances at speed \(s \ge 3\), the latter shows that speed \(s \ge 2.5\) is sufficient for (more restricted) divisible instances.
Moreover, we note that by an example in Sect. 3.3, the bound of Theorem 4 is tight, i.e., PG(s) is not 1-competitive for \(s<2.5\), even on divisible instances.
Theorem 4
If the packet sizes are divisible, then \({\textsf {PG}(s)}\) is 1-competitive for \(s\ge 2.5\).
Proof
Lemma 4(i) implies (3.2). We now prove (3.1) for any proper i-segment (u, v] with \(v-u\ge \ell _i\) and \(R=1\). The bound then follows by the Master Theorem. Since there is a fault at time u, we have \(L_\textsf {OPT}(\ge i,(u,v])\le v-u\).
By divisibility, we have \(L_\textsf {OPT}(\ge i,(u,v])=n\ell _i\) for some nonnegative integer n. We distinguish two cases based on the size of the last packet started by PG in the i-segment (u, v], which is possibly unfinished due to a fault at v.
Otherwise, by divisibility, the size of the unfinished packet is at least \((n+1)\ell _i\), and the size of the completed packets is larger by the condition in Step (3) of the algorithm. Here, we also use the fact that \({\textsf {PG}(s)}\) completes the packet started at u, as its size is at most \(\ell _i\le v-u\) (otherwise, u would be i-good, thus \(C_i\ge u\) and (u, v] is not a proper i-segment). Thus \(L_{\textsf {PG}(s)}(\ge i,(u,v])>(n+1)\ell _i-3\ell _i/2 \ge (n-1/2)\ell _i\). Divisibility again implies \(L_{\textsf {PG}(s)}(\ge i,(u,v])\ge n\ell _i=L_\textsf {OPT}(\ge i,(u,v])\), which shows (3.1). \(\square \)
3.3 Some examples for PG
3.3.1 General packet sizes
Speeds below 2 We show an instance on which the performance of PG(s) matches the bound of Theorem 2.
Remark 1
PG(s) has competitive ratio at least \(1 + 2/s\) for \(s < 2\).
Proof
Choose a large enough integer N. At time 0, the following packets are released: 2N packets of size 1, one packet of size 2, and N packets of size \(4/s - \varepsilon \) for a small enough \(\varepsilon > 0\) such that \(2 < 4/s - \varepsilon \). These are all the packets in the instance.
First, there are N phases, each of length \(4/s - \varepsilon \) and ending by a fault. OPT completes a packet of size \(4/s - \varepsilon \) in each phase, while PG(s) completes 2 packets of size 1 and then starts a packet of size 2, which is not finished.
Then, there is a fault every 1 unit of time; thus OPT completes all packets of size 1, while the algorithm has no pending packet of size 1. As \(s < 2\), the length of a phase is not sufficient to finish a longer packet.
Overall, OPT completes packets of total size \(2 + 4/s - \varepsilon \) per phase, while the algorithm completes packets of total size only 2 per phase. The ratio thus tends to \(1 + 2/s\) as \(\varepsilon \rightarrow 0\).
\(\square \)
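The per-phase accounting in this proof can be replayed numerically; the sketch below (hypothetical helper name) returns the ratio of the totals of OPT and PG(s) on the instance above, and it approaches \(1+2/s\) as \(\varepsilon \rightarrow 0\):

```python
def remark1_ratio(s, eps):
    """Per-phase ratio of OPT to PG(s) on the Remark 1 instance.

    Each long phase has length 4/s - eps: OPT completes one packet of
    size 4/s - eps, PG(s) completes two packets of size 1.  The final
    short phases let OPT finish the two remaining unit packets per
    long phase, while PG(s) completes nothing there.  (The number of
    phases N cancels out of the ratio.)
    """
    assert 1 <= s < 2 and 0 < eps < 4 / s - 2
    opt = (4 / s - eps) + 2   # one long packet + two unit packets
    alg = 2                   # two unit packets
    return opt / alg
```

For instance, `remark1_ratio(1, 1e-9)` is within \(10^{-6}\) of 3.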
Speeds between 2 and 4 We show an instance which proves that PG(s) is not 1-competitive for \(s<4\). In particular, this implies that the speed sufficient for 1-competitiveness in Theorem 6, which we prove later, cannot be improved.
Remark 2
PG(s) has competitive ratio at least \(4/s>1\) for \(s \in [2,4)\).
Proof
Choose a large enough integer y. There are four packet sizes: 1, x, y, and z such that \(1< x< y < z\), \(z = x+y-1\), and \(x = y\cdot (s-2) / 2 + 2\). Note that \(s \in [2,4)\) implies both \(x \ge 2\) and \(x \le y-1\), the latter for large enough y.
We have N phases again. At time 0, the adversary releases all \(N(y-1)\) packets of size 1, all N packets of size y, and a single packet of size z (never completed by either OPT or PG(s)). The packets of size x are released one per phase.
In each phase, PG(s) completes, in this order, \(y-1\) packets of size 1 and then a packet of size x, which arrives just after the \(y-1\) packets of size 1 are completed. Next, it starts a packet of size z and fails due to a jam. We show that OPT completes a packet of size y. To this end, it is required that \(y < 2(x+y-1) / s\), or equivalently \(x > y\cdot (s-2) / 2 + 1\), which holds by the choice of x.
This example also disproves the claim of Anta et al. (2015) that their \((m,\beta )\)-LAF algorithm is 1-competitive at speed 3.5, even for one channel (i.e., \(m=1\)), where it behaves almost exactly as PG(s). The sole difference is that LAF starts a phase by choosing a “random” packet. As this algorithm is deterministic, we understand this to mean “arbitrary”, in particular, the same as chosen by PG(s).
3.3.2 Divisible case
We give an example that shows that PG is not very good for divisible instances; namely, it is not 1-competitive for any speed \(s<2.5\), and thus the bound in Theorem 4 is tight.
Remark 3
PG(s) has competitive ratio at least 4/3 on divisible instances if \(s < 2.5\).
Proof
We use packets of sizes 1, \(\ell \), and \(2\ell \), and we take \(\ell \) sufficiently large compared to the given speed or competitive ratio. There are many packets of size 1 and \(2\ell \) available at the beginning, whereas the packets of size \(\ell \) arrive at specific times at which PG schedules them immediately.
The faults occur at times divisible by \(2\ell \); thus the optimum schedules one packet of size \(2\ell \) in each phase between two faults. We have N such phases, \(N(2\ell -1)\) packets of size 1, and N packets of size \(2\ell \) available at the beginning. In each phase, \({\textsf {PG}(s)}\) schedules \(2\ell -1\) packets of size 1, then a packet of size \(\ell \) arrives and is scheduled, and finally, a packet of size \(2\ell \) is started. The algorithm needs speed \(2.5-1/(2\ell )\) to complete it. Hence, for \(\ell \) large, the algorithm completes only packets of total size \(3\ell -1\) per phase. After these N phases, we have faults every 1 unit of time; thus the optimum schedules all packets of size 1, but the algorithm has no packet of size 1 pending and is unable to finish a longer packet. Therefore, the optimum finishes all packets of size \(2\ell \) plus all small packets, a total of \(4\ell -1\) per phase. The ratio tends to 4/3 as \(\ell \rightarrow \infty \). \(\square \)
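The arithmetic of this proof can be checked with a short sketch (hypothetical helper name): per phase, the optimum eventually completes \(4\ell -1\) in total, while \({\textsf {PG}(s)}\) completes \(3\ell -1\):

```python
def remark3_ratio(l):
    """Per-phase ratio on the divisible instance of Remark 3.

    Phase length 2l: OPT completes one packet of size 2l and, in the
    short phases at the end, the 2l - 1 unit packets of the phase;
    PG completes 2l - 1 unit packets plus one packet of size l.
    """
    opt = 2 * l + (2 * l - 1)  # = 4l - 1
    alg = (2 * l - 1) + l      # = 3l - 1
    return opt / alg           # tends to 4/3 as l grows
```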
3.4 Algorithm PGDIV and its analysis
Lemma 5
 (i)
If PGDIV starts or completes a packet of size \(\ell _i\) at time t, then \(\ell _i\) divides \({\text{ rel }}(t)\).
 (ii)
Let t be a time with \({\text{ rel }}(t)\) divisible by \(\ell _i\) and \({\text{ rel }}(t)>0\). If a packet of size \(\ell _i\) is pending at time t, then PGDIV starts or continues running a packet of size at least \(\ell _i\) at time t.
 (iii)
If at the beginning of the phase at time u, a packet of size \(\ell _i\) is pending and no fault occurs before time \(t=u+\ell _i/s\), then the phase does not end before t.
Proof
(i) follows trivially from the description of the algorithm.
(ii) Suppose that PGDIV continues running a packet of size \(\ell _j\) at t. By (i), the packet was started at a time \(t'<t\) with \({\text{ rel }}(t')\) divisible by \(\ell _j\). Observe that \(\ell _j>\ell _i\). Indeed, supposing otherwise, \(\ell _j\) divides \({\text{ rel }}(t)\) by the assumption, which implies \(t' \le t - \ell _j\). However, this is a contradiction, since the packet of size \(\ell _j\) would be completed by time t.
Next, suppose that PGDIV starts a new packet. Then the packet of size \(\ell _i\), which is pending by the assumption, satisfies all the conditions from Step 3 of the algorithm, as \({\text{ rel }}(t)\) is divisible by \(\ell _i\) and \({\text{ rel }}(t)\ge \ell _i\) (from \({\text{ rel }}(t)>0\) and divisibility). Thus, the algorithm starts a packet of size at least \(\ell _i\).
(iii) We proceed by induction on i. Assume that no fault happens before t. If the phase starts by a packet of size at least \(\ell _i\), the claim holds trivially, as the packet is not completed before t. This also proves the base of the induction for \(i=1\).
It remains to handle the case when the phase starts by a packet smaller than \(\ell _i\). Let \(P^{<i}\) be the set of all packets of size smaller than \(\ell _i\) pending at time u. By Step (2) of the algorithm, \(\ell (P^{<i})\ge \ell _i\). We show that all packets of \(P^{<i}\) are completed if no fault happens, which implies that the phase does not end before t.
Let j be such that \(\ell _j\) is the maximum size of a packet in \(P^{<i}\). Note that j exists as the phase starts by a packet smaller than \(\ell _i\). By the induction assumption, the phase does not end before time \(t'=u+\ell _j/s\). From time \(t'\) on, the conditions in Step (3) guarantee that the remaining packets from \(P^{<i}\) are processed from the largest ones, possibly interleaved with some of the newly arriving packets of larger sizes. The reason is that if a packet is completed at time \(\tau \ge t'\), then \({\text{ rel }}(\tau )\) is divisible by the size of the largest pending packet from \(P^{<i}\). This shows that the phase cannot end before all packets from \(P^{<i}\) are completed if no fault happens and (iii) follows from \(\ell (P^{<i})\ge \ell _i\). \(\square \)
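Lemma 5 relies only on the divisibility structure of PGDIV's choices. A heavily simplified sketch of the size selection consistent with properties (i)–(ii) (the function name and the exact eligibility rule of Step (3) are assumptions here, not the algorithm's full definition) might look as follows:

```python
def pgdiv_next_size(pending_sizes, rel_t):
    """Pick the largest pending size that divides rel(t) and fits in it.

    If some pending size l_i divides rel_t > 0, then an eligible size
    exists and the chosen one is at least l_i, mirroring Lemma 5(ii).
    Returns None when no pending size is eligible.
    """
    eligible = [l for l in pending_sizes if rel_t % l == 0 and l <= rel_t]
    return max(eligible) if eligible else None
```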
Now we prove a stronger analogue of Lemma 4.
Lemma 6
 (i)If (u, v] is the initial i-segment, then$$\begin{aligned} L_{\textsf {PGDIV}(s)}(\ge i,(u,v])>s(v-u)-3\ell _k. \end{aligned}$$
 (ii)If (u, v] is a proper i-segment and \(v-u\ge \ell _i\), then$$\begin{aligned} L_{\textsf {PGDIV}(s)}(\ge i,(u,v])>s(v-u)/2-\ell _i. \end{aligned}$$Moreover, \(L_{\textsf {PGDIV}(s)}((u,v])>s(v-u)/2\) and \(L_{\textsf {PGDIV}(s)}((u,v])\) is divisible by \(\ell _i\).
Proof
We begin with an observation that we use to prove both (i) and (ii): Suppose that time \(t\in [u,v)\) satisfies that \({\text{ rel }}(t)\) is divisible by \(\ell _i\) and \({\text{ rel }}(t)>0\). Then, Lemma 5(ii), together with the fact that a packet of size \(\ell _i\) is always pending in [u, v) (which follows from the definition of critical times and i-segments), implies that from time t on, only packets of size at least \(\ell _i\) are scheduled. In particular, the current phase does not end before v.
For a proper i-segment (u, v], we use the previous observation for \(t=u+\ell _i/s\) to prove (ii). Observe that \(t\le v\) by the assumption of (ii). Now \(L_{\textsf {PGDIV}(s)}(<i,(u,v])\) is either equal to 0 (if the phase starts by a packet of size at least \(\ell _i\) at time u), or equal to \(\ell _i\) (if the phase starts by a smaller packet). In both cases, \(\ell _i\) divides \(L_{\textsf {PGDIV}(s)}(<i,(u,v])\) and thus also \(L_{\textsf {PGDIV}(s)}((u,v])\). As in the analysis of PG, the total size of completed packets is more than \(s(v-u)/2\), and (ii) follows.
 1.
The phase of u ends at some time \(u'\le u+\ell _i/s\). Then, by Lemma 5(iii) and the initial observation, the phase that immediately follows the one of u does not end in \((u',v)\) and from time \(u'+\ell _i/s\) on, only packets of size at least \(\ell _i\) are scheduled. Thus \(L_{\textsf {PGDIV}(s)}(<i,(u,v])\le 2\ell _i\).
 2.
The phase of u does not end by time \(u+\ell _i/s\). In this case, there exists \(t\in (u,u+\ell _i/s]\) such that \(\ell _i\) divides \({\text{ rel }}(t)\) and also \({\text{ rel }}(t)>0\) as \(t>u\). Using the initial observation for this t, we obtain that the phase does not end in (u, v) and from time t on, only packets of size at least \(\ell _i\) are scheduled. Thus \(L_{\textsf {PGDIV}(s)}(<i,(u,v])\le \ell _i\).
Theorem 5
Let the packet sizes be divisible. Then \(\textsf {PGDIV}(1)\) is 2-competitive. Also, for any speed \(s\ge 2\), \({\textsf {PGDIV}(s)}\) is 1-competitive.
Proof
Lemma 6(i) implies (3.2). We now prove (3.1) for any proper i-segment (u, v] with \(v-u\ge \ell _i\) and appropriate R. The theorem then follows by the Master Theorem. Since u is a time of a fault, we have \(L_\textsf {OPT}(\ge i,(u,v])\le v-u\).
3.4.1 Example with two divisible packet sizes
We show that neither of our algorithms is better than 2-competitive at speed less than 2, even if there are only two divisible packet sizes in the instance. This matches the upper bound given in Theorem 2 for \(\textsf {PG}(2)\) and our upper bounds for \(\textsf {PGDIV}(s)\) on divisible instances, i.e., ratio 2 for \(s<2\) and ratio 1 for \(s \ge 2\). We remark that by Theorem 7, no deterministic algorithm can be 1-competitive with speed \(s<2\) on divisible instances, but this example shows a stronger lower bound for our algorithms, namely, that their ratios are at least 2.
Remark 4
PG and PGDIV have ratio no smaller than 2 when \(s<2\), even if packet sizes are only 1 and \(\ell \ge \max \{s+\varepsilon ,\ \varepsilon / (2-s) \}\) for an arbitrarily small \(\varepsilon >0\).
Proof
We denote either algorithm by ALG. There are N phases that all look the same. In each phase, issue one packet of size \(\ell \) and \(\ell \) packets of size 1, and have the phase end by a fault at time \((2\ell -\varepsilon )/s \ge \ell \), where the inequality holds by the bounds on \(\ell \). Then ALG completes all \(\ell \) packets of size 1, but no packet of size \(\ell \). By the previous inequality, OPT completes the packet of size \(\ell \) within the phase. Once all N phases are over, the jams occur every 1 unit of time, which allows OPT to complete all \(N\ell \) remaining packets of size 1. However, ALG is unable to complete any of the packets of size \(\ell \). Thus the ratio is 2. \(\square \)
4 PrudentGreedy with speed 4
In this section, we prove that speed 4 is sufficient for PG to be 1-competitive. An example in Sect. 3.3 shows that speed 4 is also necessary for our algorithm.
Theorem 6
\({\textsf {PG}(s)}\) is 1-competitive for \(s\ge 4\).
Intuition For \(s\ge 4\), we have that if, at the start of a phase, \({\textsf {PG}(s)}\) has a packet of size \(\ell _i\) pending and the phase has length at least \(\ell _i\), then \({\textsf {PG}(s)}\) completes a packet of size at least \(\ell _i\). To show this, assume that the phase starts at time t. Then the first packet p of size at least \(\ell _i\) is started before time \(t+2\ell _i/s\) by Lemma 2(ii), and it has size smaller than \(2\ell _i\) by the condition in Step (3). Thus, it completes before time \(t+4\ell _i/s\le t+\ell _i\), which is before the end of the phase. This property does not hold for \(s<4\), as exemplified by the instance in Remark 2. The property is important to our proof, as it shows that if the optimal schedule completes a packet of some size and such a packet is pending for \({\textsf {PG}(s)}\), then \({\textsf {PG}(s)}\) completes a packet of the same size or larger. However, this is not sufficient to complete the proof by a local (phase-by-phase) analysis similar to the previous section, as the next example shows.

First, there are N phases of length 1. In each phase, the optimum completes a packet of size 1, while among packets of size at least 1, \({\textsf {PG}(s)}\) completes a packet of size \(1.5 - 2\varepsilon \), as it starts packets of sizes \(1-\varepsilon \), \(1-\varepsilon \), \(1.5 - 2\varepsilon \), \(3-2\varepsilon \), in this order, and the last packet is jammed.

Then there are N phases of length \(1.5 - 2\varepsilon \), where the optimum completes a packet of size \(1.5 - 2\varepsilon \), while among packets of size at least 1, the algorithm completes only a single packet of size 1, as it starts packets of sizes \(1-\varepsilon \), \(1-\varepsilon \), 1, \(3-2\varepsilon \), in this order. The last packet is jammed, since for \(s=4\), the phase would need length at least \(1.5 - \varepsilon \) to complete it.
Outline of the proof We define critical times \(C'_i\) similarly as before, but without the condition that they should be ordered (thus either \(C'_i \le C'_{i-1}\) or \(C'_i > C'_{i-1}\) may hold). Then, since the algorithm has nearly no pending packets of size \(\ell _i\) just before \(C'_i\), we can charge almost all adversary’s packets of size \(\ell _i\) started before \(C'_i\) to algorithm’s packets of size \(\ell _i\) completed before \(C'_i\) in a 1-to-1 fashion. We thus call these charges 1-to-1 charges. We account for the first few packets of each size completed at the beginning of the adversary schedule in the additive constant of the competitive ratio. Note that this shifts the targets of the 1-to-1 charges backward in time.
After the critical time \(C'_i\), packets of size \(\ell _i\) are always pending for the algorithm, and thus (as we noted in the very beginning) the algorithm schedules a packet of size at least \(\ell _i\) when the adversary completes a packet of size \(\ell _i\). It is more convenient not to work with phases, but rather partition the schedule into blocks between successive faults. A block can contain several phases of the algorithm separated by an execution of Step (4). However, in the most important and tight part of the analysis, the blocks coincide with phases.
In the crucial lemma of the proof, based on these observations and their refinements, we show that we can assign the remaining packets completed by the adversary to algorithm’s packets in the same block, such that for each algorithm’s packet q, the total size of packets assigned to it is at most \(\ell (q)\). However, we cannot use this assignment directly to charge the remaining packets, as some of the algorithm’s big packets may receive 1-to-1 charges. This very issue can be seen in our introductory example. Instead, our analysis resolves the interactions of different blocks by carefully modifying the adversary schedule.
4.1 Blocks, critical times, 1-to-1 charges, and the additive constant
We now formally define the notions of blocks and (modified) critical times.
Definition 3
Let \(f_1, f_2,\ldots , f_N\) be the times of faults. Let \(f_0=0\) and let \(f_{N+1}=T\) be the end of the schedule. Then the time interval \((f_i, f_{i+1}]\), for \(i=0,\ldots ,N\), is called a block.
Definition 4
For \(i=1,\ldots ,k\), the critical time \(C'_i\) is the supremum of i-good times \(t\in [0,T]\), where T is the end of the schedule and i-good times are as defined in Definition 1.
All \(C'_i\)’s are defined, as \(t=0\) is i-good for all i. Similarly to Sect. 3.1, each \(C'_i\) is of one of the following types: (i) \(C'_i\) starts a phase and a packet larger than \(\ell _i\) is scheduled, (ii) \(C'_i=0\), (iii) \(C'_i=T\), or (iv) just before time \(C'_i\), no packet of size \(\ell _i\) is pending, but at time \(C'_i\), one or more packets of size \(\ell _i\) are pending. In the last case, \(C'_i\) is not i-good but only the supremum of i-good times. We observe that in each case, at time \(C'_i\), the total size of packets of size \(\ell _i\) pending for \({\textsf {PG}(s)}\) and released before \(C'_i\) is less than \(\ell _k\).
Next, we define the set of packets that contribute to the additive constant.
Definition 5
The set A contains, for each \(i=1,\ldots ,k\):
 (i)
the first \(\lceil 3\ell _k / \ell _i \rceil \) packets of size \(\ell _i\) completed by the adversary, and
 (ii)
the first \(\lceil 2\ell _k / \ell _i \rceil \) packets of size \(\ell _i\) completed by the adversary after \(C'_i\).
For each i, we put into A packets of size \(\ell _i\) of total size at most \(7\ell _k\). Thus, we have \(\ell (A) = \mathcal {O}(k \ell _k)\) which implies that packets in A can be counted in the additive constant.
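The per-size bound of \(7\ell _k\) can be verified directly: \(\lceil 3\ell _k/\ell _i\rceil \ell _i < 3\ell _k+\ell _i\) and \(\lceil 2\ell _k/\ell _i\rceil \ell _i < 2\ell _k+\ell _i\), so the total is below \(5\ell _k+2\ell _i\le 7\ell _k\). A quick numeric check (hypothetical helper name):

```python
import math

def additive_constant_bound(sizes):
    """Total size of packets of each size put into A, per Definition 5."""
    l_k = max(sizes)
    return {l: (math.ceil(3 * l_k / l) + math.ceil(2 * l_k / l)) * l
            for l in sizes}
```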
We call a 1-to-1 charge starting and ending in the same block an up charge, a 1-to-1 charge from a block starting at u to a block ending at \(v' \le u\) a back charge, and a 1-to-1 charge from a block ending at v to a block starting at \(u' \ge v\) a forward charge. See Fig. 4 for an illustration. A charged packet is a packet charged by a 1-to-1 charge. The definition of A implies the following two important properties.
Lemma 7
Let p be a packet of size \(\ell _i\), started by the adversary at time t, charged by a forward charge to a packet q started by \({\textsf {PG}(s)}\) at time \(t'\). Then at any time \(\tau \in [t-2\ell _k,t')\), more than \(\ell _k/\ell _i\) packets of size \(\ell _i\) are pending for \({\textsf {PG}(s)}\).
Proof
Let m be the number of packets of size \(\ell _i\) that \({\textsf {PG}(s)}\) completes before q. Then, by the definition of A, the adversary completes \(m+\lceil 3\ell _k / \ell _i \rceil \) packets of size \(\ell _i\) before p (which it starts at time t). The adversary starts less than \(2\ell _k/\ell _i\) of these packets in \((t-2\ell _k,t]\). Thus, more than \(m+\ell _k / \ell _i\) of the packets started by the adversary are released before or at time \(t-2\ell _k\). As \({\textsf {PG}(s)}\) completed only m packets of size \(\ell _i\) by \(t'\), it has more than \(\ell _k / \ell _i\) such packets pending at any time \(\tau \in [t-2\ell _k,t')\). \(\square \)
Lemma 8
Let \(p\not \in A\) be an uncharged packet of size \(\ell _i\) started by the adversary at time t. Then at any time \(\tau \ge t-2\ell _k\), a packet of size \(\ell _i\) is pending for \({\textsf {PG}(s)}\).
Proof
Any packet of size \(\ell _i\) started before \(C'_i+2\ell _k\) is either charged or put in A, thus \(t-2\ell _k\ge C'_i\). After \(C'_i\), a packet of size \(\ell _i\) is pending by the definition of \(C'_i\). \(\square \)
4.2 Processing blocks
Initially, let ADV be an optimal (adversary) schedule. First, we remove all packets in A from ADV. Then we process blocks one by one in the order of time. When we process a block, we modify ADV as follows: We (i) remove from ADV some packets of total size at most the total size of packets completed by \({\textsf {PG}(s)}\) in this block, including all packets in ADV charged to a packet in this block, and (ii) reschedule any remaining packets in ADV in this block to later blocks, while ensuring that ADV is still a feasible schedule. Summing over all blocks, (i) and (ii) guarantee that \({\textsf {PG}(s)}\) is 1-competitive with an additive constant \(\ell (A)\). Moreover, they ensure that there are no charges to or from a processed block.
When we reschedule a packet in ADV, we keep the packet’s 1-to-1 charge (if it has one); however, its type may change due to rescheduling. Since we are moving packets to later times only, the release times are automatically respected. It also follows that Lemmas 7 and 8 apply to ADV even after rescheduling.
From now on, let (u, v] be the current block that we are processing. All previous blocks ending at \(v' \le u\) are already processed. As there are no charges to the previous blocks, any packet scheduled in ADV in (u, v] is charged by an up charge or a forward charge, or else it is not charged at all. We distinguish two main cases of the proof, depending on whether \({\textsf {PG}(s)}\) finishes any packet in the current block or not.
4.2.1 Main case 1: empty block
The algorithm does not finish any packet in (u, v]. We claim that ADV does not finish any packet. The processing of the block is then trivial.
For a contradiction, assume that ADV starts a packet p of size \(\ell _i\) at time t and completes it. The packet p cannot be charged by an up charge, as \({\textsf {PG}(s)}\) completes no packet in this block. It is also not charged by a back charge to a previous block, since there are no charges to already processed blocks. Hence, p is either charged by a forward charge or not charged. Lemma 7 or 8 implies that at time t some packet of size \(\ell _i\) is pending for \({\textsf {PG}(s)}\).
Since PG does not idle unnecessarily, this means that some packet q of size \(\ell _j\) for some j is started by \({\textsf {PG}(s)}\) at time \(\tau \le t\) and running at t. As \({\textsf {PG}(s)}\) does not complete any packet in (u, v], the packet q is jammed by the fault at time v. This implies that \(j>i\), as \(\ell _j>s(v-\tau ) \ge v-t\ge \ell _i\). We also have \(t-\tau <\ell _j\). Moreover, q is the only packet started by \({\textsf {PG}(s)}\) in this block, thus it starts a phase.
As this phase is started by the packet q of size \(\ell _j>\ell _i\), time \(\tau \) is i-good and \(C'_i\ge \tau \). All packets ADV started before time \(C'_i+2\ell _k\) are charged, as the packets in A are removed from ADV and packets in ADV are rescheduled only to later times. Packet p is started before \(v<\tau +\ell _j/s<C'_i+\ell _k/s <C'_i+2\ell _k\), thus it is charged. It follows that p is charged by a forward charge. We now apply Lemma 7 again and observe that it implies that at \(\tau >t-\ell _j\), there are more than \(\ell _k/\ell _i\) packets of size \(\ell _i\) pending for \({\textsf {PG}(s)}\). This contradicts the fact that \({\textsf {PG}(s)}\) started a phase by q of size \(\ell _j>\ell _i\) at \(\tau \).
4.2.2 Main case 2: nonempty block
Otherwise, \({\textsf {PG}(s)}\) completes a packet in the current block (u, v].
Let Q be the set of packets completed by \({\textsf {PG}(s)}\) in (u, v] that do not receive an up charge. Note that no packet in Q receives a forward charge, as the modified ADV contains no packets before u, thus packets in Q either get a back charge or no charge at all. Let P be the set of packets completed in ADV in (u, v] that are not charged by an up charge. Note that P includes packets charged by a forward charge and uncharged packets, as no packets are charged to a previous block.
We first assign packets in P to packets in Q such that for each packet \(q\in Q\), the total size of packets assigned to q is at most \(\ell (q)\). Formally, we iteratively define a provisional assignment \(f: P \rightarrow Q\) such that \(\ell (f^{-1}(q)) \le \ell (q)\) for each \(q\in Q\).
Provisional assignment We maintain a set \(O\subseteq Q\) of occupied packets that we do not use for a future assignment. Whenever we assign a packet p to \(q\in Q\) and \(\ell (q) - \ell (f^{-1}(q)) < \ell (p)\), we add q to O. This rule guarantees that each packet \(q\in O\) has \(\ell (f^{-1}(q))>\ell (q)/2\).
We process packets in P in order of decreasing sizes as follows. We take the largest unassigned packet \(p\in P\) of size \(\ell (p)\) (if there are several unassigned packets of size \(\ell (p)\), we take an arbitrary one) and choose an arbitrary packet \(q\in Q\setminus O\) such that \(\ell (q)\ge \ell (p)\). We prove in Lemma 9 below that such a q exists. We assign p to q, that is, we set \(f(p) = q\). Furthermore, as described above, if \(\ell (q) - \ell (f^{-1}(q))< \ell (p)\), we add q to O. We continue until all packets are assigned.
If a packet p is assigned to q and q is not put in O, it follows that \(\ell (q)-\ell (f^{-1}(q))\ge \ell (p)\). This implies that after the next packet \(p'\) is assigned to q, we still have \(\ell (q)\ge \ell (f^{-1}(q))\), as the packets are processed from the largest one and thus \(\ell (p')\le \ell (p)\). It follows that at the end, we obtain a valid provisional assignment.
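The greedy procedure above can be sketched in code. The following is an illustrative sketch, not from the paper: the function name, the list-based representation of P and Q, and the choice of the first candidate are my own; the paper allows an arbitrary candidate.

```python
from collections import defaultdict

def provisional_assignment(P, Q):
    """Sketch of the provisional assignment f: P -> Q.

    P and Q are lists of packet sizes. Packets of P are processed in
    decreasing order of size; each is assigned to an unoccupied q in Q
    with ell(q) >= ell(p), and q becomes occupied once its remaining
    capacity drops below the size just assigned. Returns f^{-1} as a
    dict mapping an index into Q to the list of assigned sizes.
    """
    inv = defaultdict(list)   # f^{-1}(q), keyed by index into Q
    occupied = set()          # the set O of occupied packets
    for p in sorted(P, reverse=True):
        candidates = [i for i, q in enumerate(Q)
                      if i not in occupied and q >= p]
        if not candidates:
            # Lemma 9 shows this cannot happen under the assumptions
            raise ValueError("no unoccupied q with ell(q) >= ell(p)")
        i = candidates[0]
        inv[i].append(p)
        if Q[i] - sum(inv[i]) < p:   # occupation rule
            occupied.add(i)
    return inv

# Validity: ell(f^{-1}(q)) <= ell(q) for every q in Q.
Q = [4, 4, 2]
f_inv = provisional_assignment([3, 2, 2, 1, 1], Q)
assert all(sum(sizes) <= Q[i] for i, sizes in f_inv.items())
```

Note that the occupation rule is what yields \(\ell (f^{-1}(q))>\ell (q)/2\) for occupied q: a packet is only blocked from q once more than half of q's size is filled.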
Lemma 9
The assignment process above assigns all packets in P.
Proof
We prove this independently for each packet size.
First, we fix the size \(\ell _j\) and define a few quantities.
Let n denote the number of packets of size \(\ell _j\) in P. Let o denote the total occupied size, defined as \(o=\ell (O) + \sum _{q\in Q\setminus O} \ell (f^{-1}(q))\) at the time just before we start assigning the packets of size \(\ell _j\). Note that the rule for adding packets to O implies that \(\ell (f^{-1}(Q))\ge o/2\). Let a denote the current total available size, defined as \(a = \sum _{q\in Q\setminus O: \ell (q)\ge \ell _j} (\ell (q)-\ell (f^{-1}(q)))\). We remark that in the definition of a, we restrict attention only to packets of size \(\ell _j\) or larger, but in the definition of o, we consider all packets in Q. However, only packets of size at least \(\ell _j\) contribute to o, since packets in P are processed in decreasing order of sizes.
First, we claim that it is sufficient to show that \(a>(2n-2)\ell _j\) before we start assigning the packets of size \(\ell _j\). As long as \(a>0\), there is a packet \(q\in Q\setminus O\) of size at least \(\ell _j\) and thus we may assign the next packet (and, as noted before, actually \(a\ge \ell _j\), as otherwise \(q\in O\)). Furthermore, assigning a packet p of size \(\ell _j\) to q decreases a by \(\ell _j\) if q is not added to O and by less than \(2\ell _j\) if q is added to O. Altogether, after assigning the first \(n-1\) packets, a decreases by less than \((2n-2)\ell _j\), thus we still have \(a>0\), and we can assign the last packet. The claim follows.
We now split the analysis into two cases, depending on whether there is a packet of size \(\ell _j\) pending for \({\textsf {PG}(s)}\) at all times in [u, v) or not. In either case, we prove that the available space a is sufficiently large before assigning the packets of size \(\ell _j\).
In the first case, we suppose that a packet of size \(\ell _j\) is pending for \({\textsf {PG}(s)}\) at all times in [u, v). Let z be the total size of packets of size at least \(\ell _j\) charged by up charges in this block. Recall that the size of packets in P already assigned is \(\ell (f^{-1}(Q))\ge o/2\), and that there are n yet unassigned packets of size \(\ell _j\) in P. As ADV has to schedule all these packets and the packets with up charges in this block, the length of the block satisfies \(v-u\ge \ell (P)+z\ge n\ell _j+o/2+z\). Consider the schedule of \({\textsf {PG}(s)}\) in this block. By Lemma 2, there is no end of phase in (u, v) and packets smaller than \(\ell _j\) scheduled by \({\textsf {PG}(s)}\) have total size less than \(2\ell _j\). All the other completed packets contribute to one of a, o, or z. Using Lemma 1, the previous bound on \(v-u\), and \(s\ge 4\), the total size of completed packets is at least \(s(v-u)/2 \ge 2n\ell _j+o+2z\). Hence, \(a>(2n\ell _j+o+2z)-2\ell _j-o-z\ge (2n-2)\ell _j\), which completes the proof of the lemma in this case.
Otherwise, in the second case, there is a time in [u, v) when no packet of size \(\ell _j\) is pending for \({\textsf {PG}(s)}\). Let \(\tau \) be the supremum of times \(\tau '\in [u,v]\) such that \({\textsf {PG}(s)}\) has no pending packet of size at least \(\ell _j\) at time \(\tau '\). If no such \(\tau '\) exists, we set \(\tau =u\). Let t be the time when the adversary starts the first packet p of size \(\ell _j\) from P.
Towards bounding a, we show that (i) \({\textsf {PG}(s)}\) completes only a limited total size of small packets after \(\tau \), and thus \(a+o\) is large, and that (ii) \(f^{-1}(Q)\) contains only packets run by ADV from \(\tau \) on, and thus o is small.
We claim that the total size of packets smaller than \(\ell _j\) completed by \({\textsf {PG}(s)}\) in \((\tau ,v]\) is less than \(3\ell _k\). (This claim and its proof are similar to Lemma 2.) Let \(\tau _1<\tau _2<\ldots <\tau _\alpha \) be all the ends of phases in \((\tau ,v)\) (possibly there is none, in which case \(\alpha =0\)). Also, let \(\tau _0=\tau \). For \(i=1,\ldots ,\alpha \), let \(r_i\) denote the packet started by \({\textsf {PG}(s)}\) at \(\tau _i\). Note that \(r_i\) exists, since there is a pending packet at any time in \([\tau ,v]\) by the definition of \(\tau \). See Fig. 5 for an illustration. First, note that any packet started at or after time \(\tau _\alpha +\ell _k/s\) has size at least \(\ell _j\), as such a packet is pending and satisfies the condition in Step (3) of the algorithm. Thus, the total size of the small packets completed in \((\tau _\alpha ,v]\) is less than \(\ell _k+\ell _{k-1}<2\ell _k\). The claim now follows for \(\alpha =0\). Otherwise, as there is no fault in (u, v), at each \(\tau _i\), \(i=1,\ldots ,\alpha \), Step (4) of the algorithm is reached and thus no packet of size at most \(s(\tau _i-\tau _{i-1})\) is pending. In particular, this implies that \(\ell (r_i)>s(\tau _i-\tau _{i-1})\) for \(i=1,\ldots ,\alpha \). This also implies that the total size of the small packets completed in \((\tau _0,\tau _1]\) is less than \(\ell _k\), and the claim for \(\alpha =1\) follows. For \(\alpha \ge 2\), first note that by Lemma 2(i), \(s(\tau _i-\tau _{i-1})\ge \ell _j\) for all \(i=2,\ldots ,\alpha \) and thus \(r_i\) is not a small packet. Thus, for \(i=3,\ldots ,\alpha \), the total size of small packets completed in \((\tau _{i-1},\tau _i]\) is at most \(s(\tau _i-\tau _{i-1})-\ell (r_{i-1})<\ell (r_i)-\ell (r_{i-1})\). The size of small packets completed in \((\tau _1,\tau _2]\) is at most \(s(\tau _2-\tau _1)<\ell (r_2)\) and the total size of small packets completed in \((\tau _\alpha ,v]\) is at most \(2\ell _k-\ell (r_\alpha )\).
Thus, the total size of small packets completed in \((\tau _1,v]\) is at most \(2\ell _k\), and the claim follows.
Observe that no packet contributing to z is started by ADV before \(\tau \) as otherwise, it would be pending for \({\textsf {PG}(s)}\) just before \(\tau \), contradicting the definition of \(\tau \).
Also, observe that in \((u,\tau ]\), ADV runs no packet \(p\in P\) with \(\ell (p)\ge \ell _j\).
Indeed, for a contradiction, assume that such a p exists. As \(\tau \le C_{j'}\) for any \(j'\ge j\), such a p is charged. As \(p\in P\), it is charged by a forward charge. Hence, Lemma 7 implies that at all times between the start of p in ADV and v, a packet of size \(\ell (p)\) is pending for \({\textsf {PG}(s)}\). In particular, such a packet is pending in the interval before \(\tau \), contradicting the definition of \(\tau \).
These two observations imply that in \([\tau ,v]\), ADV starts and completes all the assigned packets from P, the n packets of size \(\ell _j\) from P, and all packets contributing to z. This gives \(v-\tau \ge \ell (f^{-1}(Q))+n\ell _j+z \ge o/2 +n\ell _j+z\).
For the second bound, we note that the n packets of size \(\ell _j\) from P are scheduled in [t, v]. Combined with \(t\ge \tau +2\ell _k\), this yields \(v-\tau =v-t+t-\tau \ge n\ell _j+2\ell _k\).
Summing the two bounds on \(v-\tau \) and multiplying by two, we get \(4(v-\tau )\ge 4n\ell _j+4\ell _k+o+2z\). Summing with (4.1), we obtain \(a>4n\ell _j+z\ge 4n\ell _j\). This completes the proof of the second case. \(\square \)
We remark that the first case, which deals with blocks after \(C_j\), is the typical and tight one. The second case, which deals mainly with the block containing \(C_j\) and with some blocks before \(C_j\), has a lot of slack, but it is more technically involved. This is similar to the situation in the local analysis using the Master Theorem.
Modifying the adversary schedule Now all the packets from P are provisionally assigned by f and for each \(q\in Q\), we have that \(\ell (f^{-1}(q)) \le \ell (q)\).
We process each packet q completed by \({\textsf {PG}(s)}\) in (u, v] according to one of the following three cases. In each case, we remove from ADV one or more packets with total size at most \(\ell (q)\).
If \(q\not \in Q\), then the definitions of P and Q imply that q is charged by an up charge from some packet \(p\not \in P\) of the same size. We remove p from ADV.
If \(q\in Q\) receives a charge, it is a back charge from some packet p of the same size. We remove p from ADV and in the interval where p was scheduled, we schedule packets from \(f^{-1}(q)\) in an arbitrary order. As \(\ell (f^{-1}(q)) \le \ell (q)\), this is feasible. If any packet \(p\in f^{-1}(q)\) is charged, we keep its charge to the same packet in \({\textsf {PG}(s)}\). The charge was necessarily a forward charge, thus it leads to some later block. See Fig. 6 for an illustration.
After we have processed all the packets q, we have modified ADV by removing an allowed total size of packets and rescheduling the remaining packets in (u, v], while guaranteeing that any remaining charges go to later blocks. This completes processing of the block (u, v] and thus also the proof of 1-competitiveness.
5 Lower bounds
In this section, we study what speed is necessary to achieve 1-competitiveness. We start by revisiting a result of Anta et al. (2015) which applies to a very restricted setting. Namely, it gives a lower bound of 2 for instances with only two divisible packet sizes, proving that our algorithm PG-DIV and the algorithm of Jurdzinski et al. (2015) are optimal. We then extend the construction to a setting with multiple non-divisible packet sizes, for which we show a lower bound of \(\phi + 1 \approx 2.618\).
In each proof, we describe a strategy that an adversary uses to create an instance for any algorithm ALG on which ALG is not 1-competitive. This requires comparing the profit of ALG to the optimal profit. As is common, we do not consider the optimal profit directly, but rather use a lower bound on it that follows from a particular offline scheduling algorithm. We think of this scheduling algorithm as a counterpart of the adversary’s strategy, and therefore denote it by ADV.
5.1 Lower bound with two packet sizes
Note that the following lower bound follows from results of Anta et al. (2015) by a similar construction, although the packets in their construction are not released together.
Theorem 7
No deterministic online algorithm running at speed \(s < 2\) is 1-competitive, even if packets have sizes only 1 and \(\ell \) for \(\ell > 2s / (2-s)\) and all of them are released at time 0.
Proof
For a contradiction, consider an algorithm ALG running with speed \(s < 2\) that is claimed to be 1-competitive with an additive constant A, where A may depend on \(\ell \). At time 0, the adversary releases \(N_1 = \lceil A / \ell \rceil + 1\) packets of size \(\ell \) and \(\displaystyle {N_0 = \left\lceil \frac{2\ell }{s} \cdot \big ( N_1\cdot (s - 1)\cdot \ell + A + 1\big )\right\rceil }\) packets of size 1. These are all packets in the instance.
The adversary’s strategy works by blocks, where a block is a time interval between two faults, and the first block begins at time 0. The adversary ensures that in each such block, ALG completes no packet of size \(\ell \) and moreover, ADV either completes an \(\ell \)-sized packet or completes more 1’s (packets of size 1) than ALG.
 (D1) If ADV has fewer than \(2\ell / s\) pending packets of size 1, then the schedule ends at t.
 (D2) If ADV has all packets of size \(\ell \) completed, then the adversary stops the current process and issues faults at times \(t+1, t+2, \dots \) until ADV, which completes a packet of size 1 between each pair of successive faults, has no packet of size 1. Then the schedule ends. Clearly, ALG may complete only packets of size 1 after t, as \(\ell> 2s / (2-s) > s\) for \(s<2\).
 (D3) If \(\tau \ge t+\ell / s - 2\), then the next fault is at time \(t+\ell \). In the current block, ADV completes an \(\ell \)-sized packet. ALG completes at most \(s\cdot \ell \) packets of size 1 and then it possibly starts \(\ell \) at \(\tau \) if \(\tau < t+\ell \). This packet would be completed at$$\begin{aligned} \tau + \frac{\ell }{s} \ge t + \frac{2\ell }{s} - 2 = t + \ell + \left( \frac{2}{s} - 1\right) \ell - 2 > t + \ell , \end{aligned}$$where the last inequality follows from \(\left( \frac{2}{s} - 1\right) \ell > 2\), which is equivalent to \(\ell > 2s / (2-s)\). Thus, the fault occurs before the \(\ell \)-sized packet completes. See Fig. 7 for an illustration.
 (D4) Otherwise, if \(\tau < t+\ell / s - 2\), then the next fault is at time \(\tau + \ell / s - \varepsilon \) for a small enough \(\varepsilon > 0\). In the current block, ADV completes as many packets of size 1 as it can, that is, \(\lfloor \tau + \ell / s - \varepsilon - t\rfloor \) packets of size 1. Note that by Case (D1), ADV has enough 1’s pending. Again, the algorithm does not complete the packet of size \(\ell \) started at \(\tau \), because it would be finished only at \(\tau + \ell / s\). See Fig. 8 for an illustration.
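The arithmetic behind the case split in (D3) and (D4) — that a packet of size \(\ell \) started at or after \(t+\ell /s-2\) cannot finish before the fault at \(t+\ell \) once \(\ell > 2s/(2-s)\) — can be sanity-checked numerically. The sketch below is illustrative only; the sample speeds are arbitrary choices in [1, 2).

```python
# Numeric sanity check of the inequality used in Case (D3):
# if tau >= t + ell/s - 2 and ell > 2s/(2-s), then tau + ell/s > t + ell,
# so the fault at time t + ell jams the ell-sized packet started at tau.
def d3_holds(s, ell, t=0.0):
    tau = t + ell / s - 2          # the earliest tau allowed in Case (D3)
    return tau + ell / s > t + ell

for s in (1.0, 1.5, 1.9):          # arbitrary sample speeds in [1, 2)
    ell = 2 * s / (2 - s) + 0.01   # just above the theorem's threshold
    assert d3_holds(s, ell)
```

As s approaches 2, the threshold \(2s/(2-s)\) grows without bound, which is why the theorem needs \(\ell \) large relative to s.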
First, notice that the process above ends, since in each block, the adversary completes a packet. We now show \(L_{\textsf {ADV}} > L_{\textsf {ALG}} + A\), which contradicts the claimed 1-competitiveness of ALG.
If the adversary’s strategy ends in Case (D2), then ADV has all \(\ell \)’s completed and then it schedules all 1’s, thus \(L_{\textsf {ADV}} = N_1\cdot \ell + N_0 > A + N_0\). However, as ALG does not complete any \(\ell \)-sized packet, \(L_{\textsf {ALG}} \le N_0\), which concludes this case.
Let \(\alpha \) be the number of blocks created in Case (D3). Note that \(\alpha \le N_1\), since in each such block, ADV finishes one \(\ell \)-sized packet. ALG completes at most \(s\ell \) packets of size 1 in such a block, thus \(L_{\textsf {ADV}}((u, v]) - L_{\textsf {ALG}}((u, v]) \ge (1 - s)\cdot \ell \) for a block (u, v] created in Case (D3).
5.2 Lower bound for general packet sizes
Our main lower bound of \(\phi + 1 = \phi ^2 \approx 2.618\) (where \(\phi = (\sqrt{5}+1)/2\) is the golden ratio) generalizes the construction of Theorem 7 for more packet sizes, which are no longer divisible. Still, we make no use of release times.
Theorem 8
No deterministic online algorithm running at speed \(s < \phi + 1\) is 1-competitive, even if all packets are released at time 0.
Outline of the proof We start by describing the adversary’s strategy, which works against any algorithm running at speed \(s < \phi + 1\), i.e., it shows that no such algorithm is 1-competitive. It can be seen as a generalization of the strategy with two packet sizes above, but at the end, the adversary sometimes needs a new strategy in order to complete all short packets (of size less than \(\ell _i\) for some i), while preventing the algorithm from completing a long packet (of size at least \(\ell _i\)).
Then we show a few lemmas about the behavior of the algorithm. Finally, we prove that the gain of the adversary, i.e., the total size of its completed packets, is substantially larger than the gain of the algorithm.
Adversary’s strategy The adversary chooses \(\varepsilon > 0\) small enough and \(k\in \mathbb {N}\) large enough, such that \(s < \phi + 1 - 1 / \phi ^{k-1}\). For convenience, the smallest size in the instance is \(\varepsilon \) instead of 1. There are \(k+1\) packet sizes in the instance, namely, \(\ell _0 = \varepsilon \), and \(\ell _i = \phi ^{i-1}\) for \(i = 1, \dots , k\).
Suppose for a contradiction that there is an algorithm ALG running at speed \(s < \phi + 1\) that is 1-competitive with an additive constant A, where A may depend on the \(\ell _i\)’s, in particular, on \(\varepsilon \) and k. The adversary issues \(N_i\) packets of size \(\ell _i\) at time 0, for \(i = 0, \dots , k\). The \(N_i\)’s are chosen such that \(N_0\gg N_1\gg \dots \gg N_k\). These are all the packets in the instance.
Let \(P_{\textsf {ADV}}(i)\) be the total size of \(\ell _i\)’s (packets of size \(\ell _i\)) pending for the adversary at time t.
 (B1) If there are fewer than \(\phi \ell _k / \varepsilon \) packets of size \(\varepsilon \) pending for ADV, then the schedule ends at time t. Lemma 10 below shows that in blocks in which ADV schedules \(\varepsilon \)’s, it completes more than ALG in terms of total size. It follows that the schedule of ADV has a much larger total completed size for \(N_0\) large enough, since the adversary has scheduled nearly all packets of size \(\varepsilon \) (see Lemma 15).
 (B2) If there is \(i \ge 1\) such that \(P_{\textsf {ADV}}(i) = 0\), then the adversary stops the current process and continues with Strategy Finish described below.
 (B3) If \(\tau _1 < t + \ell _1 / (\phi \cdot s)\), then the next fault occurs at time \(\tau _1 + \ell _1 / s - \varepsilon \), thus ALG does not finish the first \(\ell _1\)-sized packet. ADV schedules as many \(\varepsilon \)’s as it can. In this case, ALG schedules \(\ell _1\) too early, and in Lemma 10, we show that the total size of packets completed by ADV is larger than that completed by ALG.
 (B4) If \(\tau _{\ge 2} < t + \ell _2 / (\phi \cdot s)\), then the next fault is at time \(\tau _{\ge 2} + \ell _2 / s - \varepsilon \), thus ALG does not finish its first packet of size at least \(\ell _2\). ADV again schedules as many \(\varepsilon \)’s as it can. Similarly to the previous case, ALG starts \(\ell _2\) or a larger packet too early, and we show that ADV completes more in terms of size than ALG, again using Lemma 10.
 (B5) If there is \(1\le i < k\) such that \(\tau _{\ge i+1} < \tau _i\), then we choose the smallest such i and the next fault is at time \(t + \ell _i\). ADV schedules a packet of size \(\ell _i\). See Fig. 9 for an illustration. Intuitively, this case means that ALG skips \(\ell _i\) and schedules \(\ell _{i+1}\) (or a larger packet) earlier. Lemma 12 shows that the algorithm cannot finish its first packet of size at least \(\ell _{i+1}\) (nor one of size \(\ell _i\), which it skipped).
 (B6) Otherwise, the next fault occurs at \(t+\ell _k\) and ADV schedules a packet of size \(\ell _k\) in this block. Lemma 13 shows that ALG cannot complete an \(\ell _k\)-sized packet in this block. See Fig. 10 for an illustration.
We remark that the process above eventually ends either in Case (B1), or in Case (B2), since in each block ADV schedules a packet. Also note that the length of each block is at most \(\phi \ell _k\).
We describe Strategy Finish, started in Case (B2). Let i be the smallest index \(i'\ge 1\) such that \(P_{\textsf {ADV}}(i') = 0\). For brevity, we call a packet of size at least \(\ell _i\) long, and a packet of size \(\ell _j\) with \(1\le j < i\) short. Note that \(\varepsilon \)’s are not short packets. In a nutshell, ADV tries to schedule all short packets, while preventing the algorithm from completing any long packet. Similarly to Cases (B3) and (B4), if ALG starts a long packet too early, ADV schedules \(\varepsilon \)’s and gains in terms of total size.
 (F1) If \(P_{\textsf {ADV}}(0) < \phi \ell _k\), then the schedule ends at time t.
 (F2) If ADV has no pending short packet, then Strategy Finish ends and the adversary issues faults at times \(t+\varepsilon , t+2\varepsilon , \dots \). Between every two consecutive faults after t, it completes one packet of size \(\varepsilon \), and it continues issuing faults until it has no pending \(\varepsilon \). Then the schedule ends. Clearly, ALG may complete only \(\varepsilon \)’s after t if \(\varepsilon \) is small enough. Note that for \(i=1\), this case is immediately triggered, as \(\ell _0\)-sized packets are not short, and hence, there are no short packets whatsoever.
 (F3) If \(\tau < t + \ell _i / (\phi \cdot s)\), then the next fault is at time \(\tau + \ell _i / s - \varepsilon \), thus ALG does not finish its first long packet. ADV schedules as many \(\varepsilon \)’s as it can. Note that the length of this block is less than \(\ell _i / (\phi \cdot s) + \ell _i / s \le \phi \ell _k\). Again, we show that ADV completes more in terms of size using Lemma 10.
 (F4) Otherwise, \(\tau \ge t + \ell _i / (\phi \cdot s)\). The adversary issues the next fault at time \(t+\ell _{i-1}\). Let j be the largest \(j' < i\) such that \(P_{\textsf {ADV}}(j') > 0\). ADV schedules a packet of size \(\ell _j\), which is completed as \(j\le i-1\). Lemma 14 shows that ALG does not complete the long packet started at \(\tau \).
Again, ADV completes a packet in each block, thus Strategy Finish eventually ends. Note that the length of each block is less than \(\phi \ell _k\).
Properties of the adversary’s strategy We now prove the lemma mentioned above. In the following, t is the beginning of the considered block and \(t'\) is the end of the block, i.e., the time of the next fault after t. Recall that \(L_{\textsf {ALG}}((t, t'])\) is the total size of packets completed by ALG in \((t, t']\). We start with a general lemma that covers all cases in which ADV schedules many \(\varepsilon \)’s.
Lemma 10
In Cases (B3), (B4), and (F3), \(L_{\textsf {ADV}}((t, t']) \ge L_{\textsf {ALG}}((t, t']) + \varepsilon \) holds.
Proof
Let i and \(\tau \) be as in Case (F3). We set \(i=1\) and \(\tau = \tau _1\) in Case (B3), and \(i=2\) and \(\tau =\tau _{\ge 2}\) in Case (B4). Note that the first packet of size (at least) \(\ell _i\) is started at \(\tau \) with \(\tau < t + \ell _i / (\phi \cdot s)\) and that the next fault occurs at time \(\tau + \ell _i / s - \varepsilon \). Furthermore, \(P_{\textsf {ADV}}(0, t) \ge \phi \ell _k\) by Cases (B1) and (F1). As \(t' - t\le \phi \ell _k\), it follows that \(\varepsilon \)-sized packets fill nearly the whole block in ADV. In particular, \(L_{\textsf {ADV}}((t, t']) > t' - t - \varepsilon \).
Let \(a = L_{\textsf {ALG}}((t, t'])\). Since ALG does not complete the \(\ell _i\)-sized packet, we have \(\tau \ge t + a / s\) and thus also \(a < \ell _i / \phi \) as \(\tau < t + \ell _i / (\phi \cdot s)\).
For brevity, we inductively define \(S_0 = \phi - 1\) and \(S_i = S_{i-1} + \ell _i\) for \(i=1, \dots , k\). Thus \(S_i = \sum _{j=1}^{i} \ell _j + \phi - 1\) and a calculation shows \(S_i = \phi ^{i+1} - 1\). We prove a useful observation.
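The closed form for \(S_i\) follows from \(\ell _i = \phi ^{i-1}\) and the identity \(\phi ^2 = \phi + 1\). A quick numeric check of the recurrence against the closed form (an illustrative sketch, not part of the proof):

```python
# Check S_i = phi**(i+1) - 1 for the recurrence S_0 = phi - 1,
# S_i = S_{i-1} + ell_i, with packet sizes ell_i = phi**(i-1).
phi = (5 ** 0.5 + 1) / 2   # the golden ratio; phi**2 == phi + 1

S = phi - 1                # S_0
for i in range(1, 20):
    S += phi ** (i - 1)    # add ell_i
    assert abs(S - (phi ** (i + 1) - 1)) < 1e-6
```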
Lemma 11
Fix \(j\ge 2\). If Case (B3) and Case (B5) for \(i < j\) are not triggered in the block, then \(\tau _{i'+1}\ge t + S_{i'} / s\) for each \(i' < j\).
Proof
We have \(\tau _1\ge t + \ell _1 / (\phi \cdot s) = t + (\phi - 1) / s\) by Case (B3) and \(\tau _{i+1} \ge \tau _i + \ell _i / s\) for any \(i < j\), since Case (B5) was not triggered for \(i < j\) and the first \(\ell _i\)-sized packet needs to be finished before starting the next packet. Summing the bounds gives the inequalities in the lemma. \(\square \)
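The summation in the proof of Lemma 11 can be traced numerically along the tightest trajectory the two bounds allow; in the sketch below, s and t are arbitrary sample values of my own choosing.

```python
# Trace the tightest trajectory allowed by Lemma 11's two bounds:
# tau_1 = t + (phi - 1)/s and tau_{i+1} = tau_i + ell_i/s, and check
# tau_{i+1} >= t + S_i / s along the way (it holds with equality here).
phi = (5 ** 0.5 + 1) / 2
s, t = 2.0, 0.0                 # arbitrary sample speed and block start

tau = t + (phi - 1) / s         # tau_1, matching the bound from (B3)
S = phi - 1                     # S_0
for i in range(1, 10):
    S += phi ** (i - 1)         # S_i = S_{i-1} + ell_i
    tau += phi ** (i - 1) / s   # tau_{i+1} = tau_i + ell_i / s
    assert tau >= t + S / s - 1e-9
```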
Lemma 12
If Case (B5) is triggered for (minimal) i, then the algorithm does not complete any packet of size \(\ell _i\) or larger.
Proof
Lemma 13
In Case (B6), ALG does not complete a packet of size \(\ell _k\).
Proof
Since Cases (B3) and (B5) are not triggered, Lemma 11 for \(j = k\) yields \(\tau _k\ge t + S_{k-1} / s = t + (\phi ^k - 1) / s\).
Lemma 14
In Case (F4), ALG does not complete any long packet.
Proof
Analysis of the gains We are ready to prove that at the end of the schedule, \(L_{\textsf {ADV}} > L_{\textsf {ALG}} + A\) holds, which contradicts the initial assumption about 1-competitiveness of ALG, proving Theorem 8. We inspect all the cases in which the instance may end, starting with Cases (B1) and (F1).
We remark that we use only crude bounds to keep the analysis simple.
Lemma 15
If the schedule ends in Case (B1) or (F1), we have \(L_{\textsf {ADV}} > L_{\textsf {ALG}} + A\).
Proof
Recall that each block \((t, t']\) has length of at most \(\phi \ell _k\), thus \(L_{\textsf {ALG}}((t, t'])\le s\phi \ell _k\) and \(L_{\textsf {ADV}}((t, t'])\le \phi \ell _k\).
We call a block in which ADV schedules many \(\varepsilon \)’s small; other blocks are big. Recall that ADV schedules no \(\varepsilon \) in a big block. Note that Cases (B3), (B4), and (F3) concern small blocks, whereas Cases (B5), (B6), and (F4) concern big blocks.
The number of big blocks is at most \(\sum _{i=1}^k N_i\), since in each such block, ADV schedules a packet of size at least \(\ell _1\). For each such block, we have \(L_{\textsf {ADV}}((t, t']) - L_{\textsf {ALG}}((t, t'])\ge -s\phi \ell _k\). (This is a crude bound, but sufficient for large enough \(N_0\).)
It remains to prove the same for termination by Case (F2), since there is no other case in which the strategy may end.
Lemma 16
If Strategy Finish ends in Case (F2), then \(L_{\textsf {ADV}} > L_{\textsf {ALG}} + A\).
Proof
Note that ADV schedules all short packets and all \(\varepsilon \)’s, i.e., those of size less than \(\ell _i\). In particular, we have \(L_{\textsf {ADV}}(<i) \ge L_{\textsf {ALG}}(<i)\).
Footnotes
 1. We note that this ratio is always at least 1 for a maximization problem such as ours. However, some authors always consider the reciprocal, i.e., the “alg-to-opt” ratio, which is then at most 1 for maximization problems and at least 1 for minimization problems.
References
 Anta, A. F., Georgiou, C., Kowalski, D. R., Widmer, J., & Zavou, E. (2016). Measuring the impact of adversarial errors on packet scheduling strategies. Journal of Scheduling, 19(2), 135–152. Also appeared in Proceedings of SIROCCO 2013 (pp. 261–273). http://doi.org/10.1007/s10951-015-0451-z.
 Anta, A. F., Georgiou, C., Kowalski, D. R., & Zavou, E. (2015). Online parallel scheduling of non-uniform tasks: Trading failures for energy. Theoretical Computer Science, 590, 129–146. Also appeared in Proceedings of FCT 2013 (pp. 145–158). http://doi.org/10.1016/j.tcs.2015.01.027.
 Anta, A. F., Georgiou, C., Kowalski, D. R., & Zavou, E. (2018). Competitive analysis of fundamental scheduling algorithms on a fault-prone machine and the impact of resource augmentation. Future Generation Computer Systems, 78, 245–256. Also appeared in Proceedings of the 2nd international workshop on adaptive resource management and scheduling for cloud computing (ARMS-CC@PODC 2015), LNCS 9438 (pp. 1–16). http://doi.org/10.1016/j.future.2016.05.042.
 Ben-David, S., Borodin, A., Karp, R. M., Tardos, G., & Wigderson, A. (1994). On the power of randomization in online algorithms. Algorithmica, 11(1), 2–14. https://doi.org/10.1007/BF01294260.
 Böhm, M., Jeż, Ł., Sgall, J., & Veselý, P. (2018). On packet scheduling with adversarial jamming and speedup. In Proceedings of the 15th international workshop on approximation and online algorithms (WAOA) (pp. 190–206). http://doi.org/10.1007/978-3-319-89441-6_15.
 Borodin, A., & El-Yaniv, R. (1998). Online computation and competitive analysis. Cambridge: Cambridge University Press.
 Chrobak, M., Epstein, L., Noga, J., Sgall, J., van Stee, R., Tichý, T., et al. (2003). Preemptive scheduling in overloaded systems. Journal of Computer and System Sciences, 67, 183–197. https://doi.org/10.1016/S0022-0000(03)00070-9.
 Garncarek, P., Jurdziński, T., & Loryś, K. (2017). Fault-tolerant online packet scheduling on parallel channels. In 2017 IEEE international parallel and distributed processing symposium (IPDPS) (pp. 347–356). http://doi.org/10.1109/IPDPS.2017.105.
 Georgiou, C., & Kowalski, D. R. (2015). On the competitiveness of scheduling dynamically injected tasks on processes prone to crashes and restarts. Journal of Parallel and Distributed Computing, 84, 94–107. https://doi.org/10.1016/j.jpdc.2015.07.007.
 Graham, R. L. (1966). Bounds for certain multiprocessing anomalies. Bell Labs Technical Journal, 45(9), 1563–1581.
 Jurdzinski, T., Kowalski, D. R., & Loryś, K. (2015). Online packet scheduling under adversarial jamming. In Proceedings of the 12th workshop on approximation and online algorithms (WAOA), LNCS 8952 (pp. 193–206). See http://arxiv.org/abs/1310.4935 for missing proofs. http://doi.org/10.1007/978-3-319-18263-6_17.
 Kalyanasundaram, B., & Pruhs, K. (2000). Speed is as powerful as clairvoyance. Journal of the ACM, 47(4), 617–643. Also appeared in Proceedings of the 36th IEEE symposium on foundations of computer science (FOCS) (pp. 214–221) (1995). http://doi.org/10.1145/347476.347479.
 Karlin, A. R., Manasse, M. S., Rudolph, L., & Sleator, D. D. (1988). Competitive snoopy caching. Algorithmica, 3(1–4), 79–119.
 Lam, T. W., Ngan, T.-W., & To, K.-K. (2004). Performance guarantee for EDF under overload. Journal of Algorithms, 52, 193–206. https://doi.org/10.1016/j.jalgor.2003.10.004.
 Lam, T. W., & To, K.-K. (1999). Trade-offs between speed and processor in hard-deadline scheduling. In Proceedings of the 10th annual ACM-SIAM symposium on discrete algorithms (SODA) (pp. 623–632). ACM/SIAM. http://dl.acm.org/citation.cfm?id=314500.314884.
 Phillips, C. A., Stein, C., Torng, E., & Wein, J. (2002). Optimal time-critical scheduling via resource augmentation. Algorithmica, 32, 163–200. https://doi.org/10.1007/s00453-001-0068-9.
 Pruhs, K. (2007). Competitive online scheduling for server systems. SIGMETRICS Performance Evaluation Review, 34(4), 52–58. https://doi.org/10.1145/1243401.1243411.
 Schewior, K. (2016). Deadline scheduling and convex-body chasing. PhD dissertation, TU Berlin. http://doi.org/10.14279/depositonce-5427.
 Sleator, D. D., & Tarjan, R. E. (1985). Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2), 202–208. https://doi.org/10.1145/2786.2793.
Copyright information
OpenAccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.