
The Price of Fixed Assignments in Stochastic Extensible Bin Packing

Conference paper in Approximation and Online Algorithms (WAOA 2018).

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11312)

Abstract

We consider the stochastic extensible bin packing problem (SEBP), in which n items of stochastic size are packed into m bins of unit capacity. In contrast to the classical bin packing problem, the number of bins is fixed, and each bin can be extended at extra cost. This problem plays an important role in stochastic environments such as surgery scheduling: patients must be assigned to operating rooms beforehand so that the regular capacity is fully utilized while the amount of overtime is as small as possible.

This paper focuses on essential ratios between different classes of policies. First, we consider the price of non-splittability, for which we compare the optimal non-anticipatory policy against the optimal fractional assignment policy. We show that this ratio has a tight upper bound of 2. Moreover, we analyze a fixed assignment variant of the LEPT rule, yielding a tight approximation ratio of \((1+e^{-1}) \approx 1.368\) under a reasonable assumption on the distributions of job durations.

Furthermore, we prove that the price of fixed assignments, a quantity related to the benefit of adaptivity that describes the loss incurred by restricting to fixed assignment policies, is bounded by the same factor. In this sense, LEPT is the best fixed assignment policy we can hope for.

The research of the first two authors is carried out in the framework of MATHEON supported by Einstein Foundation Berlin.


Notes

  1.

    We do not specify how the processing time distributions should be represented in the input of the problem, as the policies we study only require the expected value of the processing times. In fact, we could even assume a setting in which the input consists only of the mean processing times \(\mu _j=\mathbb {E}[P_j]\) (\(\forall j\in \mathcal {J}\)), and an adversary chooses some distributions of the \(P_j\)’s matching the vector \(\varvec{\mu }\) of first moments.

References

  1. Alon, N., Azar, Y., Woeginger, G., Yadid, T.: Approximation schemes for scheduling on parallel machines. J. Sched. 1(1), 55–66 (1998)


  2. Bansal, N., Nagarajan, V.: On the adaptivity gap of stochastic orienteering. Math. Program. 154(1–2), 145–172 (2015)


  3. Berg, B., Denton, B.: Fast approximation methods for online scheduling of outpatient procedure centers. INFORMS J. Comput. 29(4), 631–644 (2017)


  4. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)


  5. Canetti, R., Irani, S.: Bounding the power of preemption in randomized scheduling. SIAM J. Comput. 27(4), 993–1015 (1998)


  6. Correa, J.R., Skutella, M., Verschae, J.: The power of preemption on unrelated machines and applications to scheduling orders. Math. Oper. Res. 37(2), 379–398 (2012)


  7. Dean, B.C., Goemans, M.X., Vondrák, J.: Adaptivity and approximation for stochastic packing problems. In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 395–404. Society for Industrial and Applied Mathematics (2005)


  8. Dean, B.C., Goemans, M.X., Vondrák, J.: Approximating the stochastic knapsack problem: the benefit of adaptivity. Math. Oper. Res. 33(4), 945–964 (2008)


  9. Dell’Olmo, P., Kellerer, H., Speranza, M., Tuza, Z.: A 13/12 approximation algorithm for bin packing with extendable bins. Inf. Process. Lett. 65(5), 229–233 (1998)


  10. Dell’Olmo, P., Speranza, M.: Approximation algorithms for partitioning small items in unequal bins to minimize the total size. Discret. Appl. Math. 94(1–3), 181–191 (1999)


  11. Denton, B., Miller, A., Balasubramanian, H., Huschka, T.: Optimal allocation of surgery blocks to operating rooms under uncertainty. Oper. Res. 58(4–part–1), 802–816 (2010)


  12. Dexter, F., Traub, R.: How to schedule elective surgical cases into specific operating rooms to maximize the efficiency of use of operating room time. Anesth. Analg. 94(4), 933–942 (2002)


  13. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. WH Freeman and Company, San Francisco (1979)


  14. Gupta, A., Nagarajan, V., Singla, S.: Algorithms and adaptivity gaps for stochastic probing. In: Proceedings of the Twenty-seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1731–1747. SIAM (2016)


  15. Isern, D., Sánchez, D., Moreno, A.: Agents applied in health care: a review. Int. J. Med. Inform. 79(3), 145–166 (2010)


  16. Kolliopoulos, S., Steiner, G.: Approximation algorithms for scheduling problems with a modified total weighted tardiness objective. Oper. Res. Lett. 35(5), 685–692 (2007)


  17. Kovalyov, M.Y., Werner, F.: Approximation schemes for scheduling jobs with common due date on parallel machines to minimize total tardiness. J. Heuristics 8(4), 415–428 (2002)


  18. Leung, J.Y.T.: Handbook of Scheduling: Algorithms, Models, and Performance Analysis. CRC Press, Boca Raton (2004)


  19. Liu, M., Xu, Y., Chu, C., Zheng, F.: Online scheduling to minimize modified total tardiness with an availability constraint. Theor. Comput. Sci. 410(47–49), 5039–5046 (2009)


  20. Marshall, A., Olkin, I., Arnold, B.: Inequalities: Theory of Majorization and its Applications. Elsevier, Amsterdam (1979)


  21. Megow, N., Uetz, M., Vredeveld, T.: Models and algorithms for stochastic online scheduling. Math. Oper. Res. 31(3), 513–525 (2006)


  22. Möhring, R., Radermacher, F., Weiss, G.: Stochastic scheduling problems I-general strategies. Z. für Oper. Res. 28(7), 193–260 (1984)


  23. Sagnol, G., et al.: Robust allocation of operating rooms: a cutting plane approach to handle lognormal case durations. Eur. J. Oper. Res. (2018). https://doi.org/10.1016/j.ejor.2018.05.022, e-pub ahead of print


  24. Schulz, A.S., Skutella, M.: Scheduling unrelated machines by randomized rounding. SIAM J. Discret. Math. 15(4), 450–469 (2002)


  25. Skutella, M., Sviridenko, M., Uetz, M.: Unrelated machine scheduling with stochastic processing times. Math. Oper. Res. 41(3), 851–864 (2016)


  26. Soper, A.J., Strusevich, V.A.: Power of preemption on uniform parallel machines. In: 17th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX 2014), pp. 392–402 (2014)


  27. Speranza, M., Tuza, Z.: On-line approximation algorithms for scheduling tasks on identical machines with extendable working time. Ann. Oper. Res. 86, 491–506 (1999)


  28. Xiao, G., van Jaarsveld, W., Dong, M., van de Klundert, J.: Models, algorithms and performance analysis for adaptive operating room scheduling. Int. J. Prod. Res. 56(4), 1389–1413 (2018)


  29. Zhang, Z., Xie, X., Geng, N.: Dynamic surgery assignment of multiple operating rooms with planned surgeon arrival times. IEEE Trans. Autom. Sci. Eng. 11(3), 680–691 (2014)


  30. Zhu, M., et al.: Managerial decision-making for daily case allocation scheduling and the impact on perioperative quality assurance. Transl. Perioper. Pain Med. 1(4), 20 (2016)



Author information

Correspondence to Guillaume Sagnol.

A Proofs of Intermediate Results


Proof

(of Proposition 2). To prove this result, we examine the change in the objective value of \(\varPi \) when we move one job to the machine with highest load in \(\varPi \), for a realization \({{\varvec{p}}}\) of the processing times. W.l.o.g. let machine 1 be the one with highest workload in \(\varPi ({{\varvec{p}}})\). Consider another machine \(i\in \mathcal {M}\setminus \{1\}\) on which at least one job is scheduled. Let k be the last job on machine i, i.e., \(C_k^\varPi ({{\varvec{p}}})=W_i^\varPi ({{\varvec{p}}})\). For the sake of simplicity, we define \(A:= \{j\in \mathcal {J}|j\xrightarrow {\varPi ({{\varvec{p}}})} i\}\setminus \{k\}\) and \(B:= \{j\in \mathcal {J}|j\xrightarrow {\varPi ({{\varvec{p}}})} 1\}\). We consider another schedule \(\varPi '({{\varvec{p}}})\) which coincides with \(\varPi ({{\varvec{p}}})\) except that job k is scheduled on machine 1 right after all jobs in B. We obtain

$$\begin{aligned} \phi (\varPi ,{{\varvec{p}}})- \phi (\varPi ',{{\varvec{p}}})&= \max \Bigl (\sum _{j\in A} p_j + p_k,\, 1\Bigr ) + \max \Bigl (\sum _{j\in B} p_j,\, 1\Bigr ) - \max \Bigl (\sum _{j\in A} p_j,\, 1\Bigr ) - \max \Bigl (\sum _{j\in B} p_j + p_k,\, 1\Bigr )\\ &= {\left\{ \begin{array}{ll} 1 + \max \Bigl (\displaystyle {\sum _{j\in B}} p_j,\, 1\Bigr ) - 1 - \max \Bigl (\displaystyle {\sum _{j\in B}} p_j + p_k,\, 1\Bigr ) &{}\text {if } \displaystyle {\sum _{j\in A}} p_j + p_k\le 1,\\ \displaystyle {\sum _{j\in A}} p_j + p_k + \displaystyle {\sum _{j\in B}} p_j - \max \Bigl (\displaystyle {\sum _{j\in A}} p_j,\, 1\Bigr ) - \displaystyle {\sum _{j\in B}} p_j - p_k &{}\text {otherwise,} \end{array}\right. }\\ &\le 0. \end{aligned}$$

Hence, iteratively moving some job k to the fullest machine yields \(\phi (\varPi ,{{\varvec{p}}})\le \phi (\varPi _1,{{\varvec{p}}})\). Finally, the result follows by taking the expectation.
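The exchange argument above can be checked numerically. The following Python sketch (the variable names a, b, and pk, standing for \(\sum _{j\in A} p_j\), \(\sum _{j\in B} p_j\) and \(p_k\), are hypothetical) samples random realizations in which machine 1 carries the highest workload and verifies that moving job k to it never decreases the objective:

```python
import random

def phi_terms(a, b, pk):
    # Contribution of machines i and 1 to the objective, before and after
    # moving job k to machine 1; each machine costs max(workload, 1) in the
    # extensible bin packing objective.
    before = max(a + pk, 1) + max(b, 1)
    after = max(a, 1) + max(b + pk, 1)
    return before, after

random.seed(1)
for _ in range(10000):
    a = random.uniform(0, 2)            # load of machine i without job k
    pk = random.uniform(0, 2)           # processing time of job k
    b = a + pk + random.uniform(0, 2)   # machine 1 has the highest workload
    before, after = phi_terms(a, b, pk)
    assert before <= after + 1e-12
```

Note that the assumption that machine 1 has the highest workload is essential: without \(b \ge a + p_k\), the inequality can fail.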

Proof

(of Lemma 1). The proof exploits the analytical form of the Poisson probabilities:

$$\begin{aligned} \frac{1}{\lambda }\mathbb {E}\Big [\max (Y, \lambda )\Big ]&= \frac{1}{\lambda }\sum _{k=0}^{\infty }\max (k, \lambda )\cdot \frac{e^{-\lambda }\lambda ^k}{k!}\\&= \frac{1}{\lambda }\sum _{k=0}^{\infty } k\cdot \frac{e^{-\lambda }\lambda ^k}{k!} + \frac{1}{\lambda }\sum _{k=0}^{\infty } \max (0, \lambda - k)\cdot \frac{e^{-\lambda }\lambda ^k}{k!}\\&= 1 + \sum _{k=0}^{\lambda } \Bigl (1 - \frac{k}{\lambda }\Bigr )\cdot \frac{e^{-\lambda }\lambda ^k}{k!}\\&= 1 + e^{-\lambda }\cdot \Bigl (\sum _{k=0}^{\lambda } \frac{\lambda ^k}{k!} - \sum _{k=1}^{\lambda } \frac{\lambda ^{k-1}}{(k-1)!}\Bigr )\\&= 1 + \frac{e^{-\lambda }\lambda ^\lambda }{\lambda !}, \end{aligned}$$

where the last step follows because the sum telescopes.
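The closed form can be sanity-checked against direct summation of the Poisson series (a minimal Python sketch; the truncation bound nmax is an arbitrary choice, large enough that the tail is negligible):

```python
import math

def lhs(lam, nmax=200):
    # (1/lam) * E[max(Y, lam)] for Y ~ Poisson(lam), by direct summation.
    # The pmf P[Y = k] is updated iteratively to avoid huge factorials.
    total, p = 0.0, math.exp(-lam)   # p = P[Y = 0]
    for k in range(nmax):
        total += max(k, lam) * p
        p *= lam / (k + 1)           # P[Y = k+1] = P[Y = k] * lam / (k+1)
    return total / lam

def rhs(lam):
    # closed form from Lemma 1: 1 + e^{-lam} * lam^lam / lam!
    return 1 + math.exp(-lam) * lam ** lam / math.factorial(lam)

for lam in (1, 2, 3, 5, 10):
    assert abs(lhs(lam) - rhs(lam)) < 1e-9
```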

Proof

(of Lemma 2).

Let X and Y be random variables with \(\mathbb {P}[ 0\le X\le 1]=1.\) Observe that \(0\le \mathbb {E}[X]\le 1\). We are going to show that \(\mathbb {E}[\max (X+Y,1)]\) can be bounded from above by replacing X with the two-point distribution \(X^*\sim {\text {Bernoulli}}(\mathbb {E}[X])\), such that \(\mathbb {P}[X^*=0] = 1-\mathbb {E}[X]\) and \(\mathbb {P}[X^*=1] = \mathbb {E}[X].\) To do so, we define the function \(g:[0,1]\rightarrow \mathbb {R}\), \(x\mapsto \mathbb {E}_Y[\max (x+Y,1)].\) This function is convex, since it is the expectation of a pointwise maximum of two affine functions [4]. Therefore, for all \(x\in [0,1]\) we have \( g(x) \le g(0) + x (g(1) - g(0)). \) Then, by definition of g,

$$\begin{aligned} \mathbb {E}[\max (X+Y,1)] = \mathbb {E}_X[g(X)]&\le g(0) + \mathbb {E}_X[X]\cdot (g(1) - g(0))\\&= \mathbb {E}_{X^*}[g(X^*)]=\mathbb {E}[\max (X^*+Y,1)]. \end{aligned}$$

Using this bound for all \(j\in [k]\), we obtain \(\mathbb {E}\Bigl [\max \Bigl (\sum _{j=1}^k P_j,1\Bigr )\Bigr ] \le \mathbb {E}\Bigl [\max \Bigl (\sum _{j=1}^k P_j^*,1\Bigr )\Bigr ],\) where \(P_j^* \sim {\text {Bernoulli}}(\mathbb {E}[P_j])\). Then, by the law of total expectation, we have:

$$\begin{aligned} \mathbb {E}\Bigl [\max \Bigl (\sum _{j=1}^k P_j^*,1\Bigr )\Bigr ]&= \mathbb {E}\Bigl [\sum _{j=1}^k P_j^* \,\Big |\,\sum _{j=1}^k P_j^*\ge 1\Bigr ]\, \mathbb {P}\Bigl [\sum _{j=1}^k P_j^*\ge 1\Bigr ] + 1\cdot \mathbb {P}\Bigl [\sum _{j=1}^k P_j^*< 1\Bigr ]. \end{aligned}$$

Since the random variable \(\sum _{j=1}^k P_j^*\) is a nonnegative integer, it cannot lie in the interval (0, 1), so the first term in the above sum is equal to \(\mathbb {E}\Bigl [\sum _{j=1}^k P_j^*\Bigr ]=\sum _{j=1}^k \mathbb {E}\Big [P_j\Big ]\), and the second term is equal to \(\mathbb {P}[P_1^*=\ldots =P_k^*=0]=\prod _{j=1}^k (1-\mathbb {E}[P_j])\).
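For small k, the resulting identity \(\mathbb {E}\bigl [\max \bigl (\sum _{j=1}^k P_j^*,1\bigr )\bigr ] = \sum _{j=1}^k \mathbb {E}[P_j] + \prod _{j=1}^k (1-\mathbb {E}[P_j])\) can be verified by exhaustive enumeration (a Python sketch; the means in mus are arbitrary example values and the helper names are hypothetical):

```python
import itertools
import math

def emax_bernoulli(mus):
    # E[max(sum_j X_j, 1)] for independent X_j ~ Bernoulli(mu_j), by
    # enumerating all 2^k outcomes.
    total = 0.0
    for bits in itertools.product((0, 1), repeat=len(mus)):
        prob = math.prod(m if b else 1 - m for b, m in zip(bits, mus))
        total += prob * max(sum(bits), 1)
    return total

def closed_form(mus):
    # sum of the means plus the probability that all X_j are zero
    return sum(mus) + math.prod(1 - m for m in mus)

mus = [0.3, 0.5, 0.2, 0.7]
assert abs(emax_bernoulli(mus) - closed_form(mus)) < 1e-12
```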

Proof

(of Lemma 3). We set \(\ell := \min \{x_i : i\in \mathcal {M}\}\). Then, the first inequality follows immediately. Next, we show that the second inequality is preserved in each step in which \(\text {LEPT}_{\mathcal {F}}\) assigns a job to a machine. Let j denote the job that is put on machine i in the current step. Furthermore, let \(\ell '\) and \(\ell \) denote the minimum expected load among all machines before and after the allocation, respectively. Trivially, \(\ell '\le \ell \) holds. Moreover, let \(x_i'\) and \(x_i\) denote the expected workload of machine i before and after assigning j to it, respectively. Clearly, we have

$$\begin{aligned} x_i=x_i'+\mathbb {E}[P_j]. \end{aligned}$$

Observe that \(\ell ' = x_i'\), because \(\text {LEPT}_{\mathcal {F}}\) assigns j to the machine with the smallest expected load. In addition, let \(n_i\) denote the number of jobs running on machine i after the insertion of j. Since \(\text {LEPT}_{\mathcal {F}}\) sorts jobs in decreasing order of their expected processing times, it holds that

$$\begin{aligned} \mathbb {E}[P_j] \le \frac{x_i'}{n_i-1} = \frac{\ell '}{n_i-1}. \end{aligned}$$

Consider a machine other than i. If the inequality of the statement held after an earlier step, it remains true with the new value of \(\ell \), since \(\ell \ge \ell '\). In the beginning, when no job has been assigned yet, the inequality holds trivially, so we only have to take care of machine i.

Finally, we obtain on machine i

$$\begin{aligned} \frac{x_i}{\ell } = \frac{x_i'+\mathbb {E}[P_j]}{\ell } \le \frac{x_i'+\mathbb {E}[P_j]}{\ell '} \le 1 + \frac{\mathbb {E}[P_j]}{\ell '} \le 1 + \frac{\ell '}{\ell ' (n_i-1)} = \frac{n_i}{n_i-1}. \end{aligned}$$
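The invariant can be illustrated with a small simulation of the \(\text {LEPT}_{\mathcal {F}}\) assignment rule (a Python sketch; the job means are arbitrary example values and lept_fixed is a hypothetical helper name):

```python
def lept_fixed(means, m):
    # Fixed-assignment LEPT sketch: sort jobs by non-increasing expected
    # processing time and greedily assign each job to the machine with the
    # currently smallest expected load.
    loads = [0.0] * m
    counts = [0] * m
    for mu in sorted(means, reverse=True):
        i = min(range(m), key=lambda i: loads[i])
        loads[i] += mu
        counts[i] += 1
    return loads, counts

loads, counts = lept_fixed([0.9, 0.8, 0.7, 0.5, 0.4, 0.3, 0.2], 3)
ell = min(loads)
for x, n in zip(loads, counts):
    if n >= 2:
        # inequality from Lemma 3: x_i <= ell * n_i / (n_i - 1)
        assert x <= ell * n / (n - 1) + 1e-12
```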

Proof

(of Lemma 4). First, we argue that \(h: y \mapsto (1-y)^{1+\frac{\ell }{y}}\) is convex over [0, 1]. To see this, we compute its second derivative:

$$\begin{aligned} h''(y) = \frac{\ell (1-y)^{\frac{\ell }{y}-1}}{y^4} h_2(y), \end{aligned}$$

where \(h_2(y) := y^2 (\ell -y+2)+\ell (y-1)^2 \log ^2(1-y)-2 (\ell +1) (y-1) y \log (1-y).\) Now, we use the fact that \(\log (1-y) = -\sum _{k=1}^\infty \frac{y^k}{k}\) for all \(y\in [0,1)\). Hence, \(\log ^2 (1-y) = \sum _{k=2}^\infty \gamma _k y^k\), where \(\gamma _k:=\sum _{i=1}^{k-1} \frac{1}{i(k-i)}\). After some calculus, the terms of order 2 and 3 vanish and we obtain the following series representation of \(h_2\) over [0, 1):

$$\begin{aligned} h_2(y) = \Bigl (\frac{\ell }{4}+\frac{1}{3}\Bigr ) y^4 + \sum _{k=5}^\infty \Bigl (\frac{2(\ell +1)}{(k-1)(k-2)} + \ell \bigl ( \gamma _k + \gamma _{k-2} - 2 \gamma _{k-1}\bigr )\Bigr ) y^k. \end{aligned}$$

We are going to show that \(\gamma _k + \gamma _{k-2} - 2 \gamma _{k-1}\ge 0\) for \(k\ge 5\), implying that \(h''(y)\ge 0\) for all \(y\in [0,1)\). To do so, we rewrite the sums using the partial fraction decomposition \(\frac{1}{i(k-i)}=\frac{1}{k}\bigl (\frac{1}{i}+\frac{1}{k-i}\bigr )\), which gives \(\gamma _k = \frac{2}{k}\sum _{i=1}^{k-1}\frac{1}{i}\). As a consequence, we obtain

$$\begin{aligned} \gamma _k + \gamma _{k-2} - 2 \gamma _{k-1} = \frac{4\sum _{i=1}^{k-3}\frac{1}{i} - 6}{k(k-1)(k-2)} \ge 0. \end{aligned}$$

The last inequality results from the fact that for all \(k\ge 5\) we have \(4\sum _{i=1}^{k-3}\frac{1}{i} \ge 6\). Hence, h is convex on [0, 1), and even on [0, 1] by continuity. Now, let \(v^*(\rho ,\ell )\) denote the optimal value of the problem

$$\begin{aligned} \underset{{{\varvec{y}}}\in \mathbb {R}^m}{\varvec{{\text {maximize}}}} \quad&\sum _{i\in \mathcal {M}} h(y_i)&\qquad&\mathrm{(8a)}\\ \text {s.t.} \quad&\sum _{i\in \mathcal {M}} y_i = m (\rho -\ell )&&\mathrm{(8b)}\\&0\le y_i \le 1,\quad (\forall i\in \mathcal {M}).&&\mathrm{(8c)} \end{aligned}$$

As h is convex, a maximizer of the optimization problem above is an extreme point of the polytope induced by the constraints (8b) and (8c). Let \(k:=\lfloor m(\rho -\ell ) \rfloor \) and \(u:= m(\rho -\ell ) - k \), where \(\lfloor . \rfloor \) denotes the floor function, that is, \(\lfloor x \rfloor \) is the largest integer less than or equal to x. By construction, it holds \(0\le u \le 1\), and \(u+k=m(\rho -\ell )\). At an extreme point, at least \(m-1\) inequalities of (8c) must be tight. Hence, one coordinate of \({{\varvec{y}}}\) must be u, k coordinates must be 1 and the remaining \((m-k-1)\) coordinates must be 0.

It follows that \(v^*(\rho ,\ell ) = (m-k-1)\, h(0) + h(u) = (m-k-1)\, e^{-\ell } + (1-u)^{1+\ell /u}\). Now, we observe that \((1-u)^{\ell /u} \le e^{-\ell }\), so

$$\begin{aligned} v^*(\rho ,\ell ) \le (m-k-1)\, e^{-\ell } + (1-u)\, e^{-\ell } = (m - k - u)\, e^{-\ell } = m\, (1+\ell -\rho )\, e^{-\ell }, \end{aligned}$$

where the last equality is due to the decomposition \(m(\rho -\ell )=k+u\).

Finally, the inequality of the lemma follows from the fact that \((1+\ell -\rho )e^{-\ell }\) is a nondecreasing function of \(\ell \) over \([0,\rho ]\), since its derivative with respect to \(\ell \) equals \((\rho -\ell )e^{-\ell }\ge 0\).
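The coefficient inequality used in the convexity argument can be checked with exact rational arithmetic (a Python sketch; the closed form on the right-hand side of the assertion is the one implied by the partial fraction decomposition \(\gamma _k = \frac{2}{k}\sum _{i=1}^{k-1}\frac{1}{i}\)):

```python
from fractions import Fraction

def gamma(k):
    # gamma_k = sum_{i=1}^{k-1} 1 / (i * (k - i)), computed exactly
    return sum(Fraction(1, i * (k - i)) for i in range(1, k))

for k in range(5, 60):
    diff = gamma(k) + gamma(k - 2) - 2 * gamma(k - 1)
    harmonic = sum(Fraction(1, i) for i in range(1, k - 2))  # H_{k-3}
    # closed form via partial fractions: (4 H_{k-3} - 6) / (k (k-1) (k-2))
    assert diff == (4 * harmonic - 6) / (k * (k - 1) * (k - 2))
    assert diff >= 0
```

Note that for k = 5 the combination vanishes exactly, matching the threshold \(4\sum _{i=1}^{2}\frac{1}{i} = 6\).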


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Sagnol, G., Schmidt genannt Waldschmidt, D., Tesch, A. (2018). The Price of Fixed Assignments in Stochastic Extensible Bin Packing. In: Epstein, L., Erlebach, T. (eds) Approximation and Online Algorithms. WAOA 2018. Lecture Notes in Computer Science(), vol 11312. Springer, Cham. https://doi.org/10.1007/978-3-030-04693-4_20


  • DOI: https://doi.org/10.1007/978-3-030-04693-4_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04692-7

  • Online ISBN: 978-3-030-04693-4

