
\(\ell _1\)-sparsity Approximation Bounds for Packing Integer Programs

Conference paper
Integer Programming and Combinatorial Optimization (IPCO 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11480)

Abstract

We consider approximation algorithms for packing integer programs (PIPs) of the form \(\max \{\langle c, x\rangle : Ax \le b, x \in \{0,1\}^n\}\) where c, A, and b are nonnegative. We let \(W = \min _{i,j} b_i / A_{i,j}\) denote the width of A which is at least 1. Previous work by Bansal et al. [1] obtained an \(\varOmega (\frac{1}{\varDelta _0^{1/\lfloor W \rfloor }})\)-approximation ratio where \(\varDelta _0\) is the maximum number of nonzeroes in any column of A (in other words the \(\ell _0\)-column sparsity of A). They raised the question of obtaining approximation ratios based on the \(\ell _1\)-column sparsity of A (denoted by \(\varDelta _1\)) which can be much smaller than \(\varDelta _0\). Motivated by recent work on covering integer programs (CIPs) [4, 7] we show that simple algorithms based on randomized rounding followed by alteration, similar to those of Bansal et al. [1] (but with a twist), yield approximation ratios for PIPs based on \(\varDelta _1\). First, following an integrality gap example from [1], we observe that the case of \(W=1\) is as hard as maximum independent set even when \(\varDelta _1 \le 2\). In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width \(W = 1 + \epsilon \) where \(\epsilon \in (0,1]\), we obtain an \(\varOmega (\epsilon ^2/\varDelta _1)\)-approximation. In the large width regime, when \(W \ge 2\), we obtain an \(\varOmega ((\frac{1}{1 + \varDelta _1/W})^{1/(W-1)})\)-approximation. We also obtain a \((1-\epsilon )\)-approximation when \(W = \varOmega (\frac{\log (\varDelta _1/\epsilon )}{\epsilon ^2})\).
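To make the rounding-and-alteration template concrete, the following minimal Python sketch illustrates the overall shape of such algorithms. It is our illustration rather than the paper's algorithm: the fractional solution `x_frac` is assumed to come from the natural LP relaxation (the solver is not shown), `alpha` is a generic scaling factor standing in for the \(\varDelta _1\)- and W-dependent factors from the paper, and the alteration step is a naive greedy repair.

```python
import numpy as np

def round_with_alteration(A, b, c, x_frac, alpha, rng=None):
    """Randomized rounding followed by alteration for a PIP
    max <c, x> s.t. Ax <= b, x in {0,1}^n, with A, b, c nonnegative.

    Illustrative sketch: scale the fractional LP solution by alpha,
    round each coordinate independently, then greedily unset rounded
    variables until every constraint is satisfied.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(c)
    # Randomized rounding: set x_j = 1 with probability alpha * x_frac[j].
    x = (rng.random(n) < alpha * x_frac).astype(int)
    # Alteration: while some constraint is violated, drop the cheapest
    # rounded variable appearing in a most-violated constraint.
    load = A @ x
    while (load > b).any():
        i = int(np.argmax(load - b))            # a most-violated row
        support = np.flatnonzero((A[i] > 0) & (x == 1))
        j = support[np.argmin(c[support])]      # cheapest offender in row i
        x[j] = 0
        load = load - A[:, j]
    return x

# Tiny example with width W = min_{i,j} b_i / A_{i,j} > 1.
A = np.array([[0.5, 0.4, 0.6],
              [0.3, 0.7, 0.2]])
b = np.array([1.0, 1.0])
c = np.array([3.0, 2.0, 4.0])
x = round_with_alteration(A, b, c, x_frac=np.array([0.9, 0.5, 0.8]),
                          alpha=0.5, rng=np.random.default_rng(0))
```

For width \(W > 1\), the paper's analysis shows that with a suitable choice of the scaling factor, a constant fraction (in the stated approximation ratios) of the rounded value survives the alteration step in expectation.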

C. Chekuri and K. Quanrud were supported in part by NSF grant CCF-1526799. M. Torres was supported in part by fellowships from NSF and the Sloan Foundation.


Notes

  1. We can allow the variables to have general integer upper bounds instead of restricting them to be Boolean. As observed in [1], one can reduce this more general case to the \(\{0,1\}\) case without too much loss in the approximation; one illustrative device is sketched below.
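For intuition, one standard (if pseudo-polynomial) device, which may differ from the precise reduction used in [1], is to duplicate variables:

$$ x_j \in \{0, 1, \ldots , d_j\} \quad \rightsquigarrow \quad x_j = \sum _{k=1}^{d_j} y_{j,k}, \qquad y_{j,k} \in \{0,1\}, $$

where each copy \(y_{j,k}\) inherits the column \(A_j\) and the objective coefficient \(c_j\). Every integer solution corresponds to a 0/1 solution of the same value, and the width W and the column sparsities \(\varDelta _0\) and \(\varDelta _1\) are unchanged.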

References

  1. Bansal, N., Korula, N., Nagarajan, V., Srinivasan, A.: Solving packing integer programs via randomized rounding with alterations. Theory Comput. 8(24), 533–565 (2012). https://doi.org/10.4086/toc.2012.v008a024


  2. Chan, T.M.: Approximation schemes for 0-1 knapsack. In: 1st Symposium on Simplicity in Algorithms (2018)


  3. Chekuri, C., Khanna, S.: On multidimensional packing problems. SIAM J. Comput. 33(4), 837–851 (2004)


  4. Chekuri, C., Quanrud, K.: On approximating (sparse) covering integer programs. In: Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1596–1615. SIAM (2019)


  5. Chekuri, C., Quanrud, K., Torres, M.R.: \(\ell _1\)-sparsity approximation bounds for packing integer programs (2019). arXiv preprint: arXiv:1902.08698

  6. Chekuri, C., Vondrák, J., Zenklusen, R.: Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput. 43(6), 1831–1879 (2014)


  7. Chen, A., Harris, D.G., Srinivasan, A.: Partial resampling to approximate covering integer programs. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1984–2003. Society for Industrial and Applied Mathematics (2016)


  8. Frieze, A., Clarke, M.: Approximation algorithms for the m-dimensional 0-1 knapsack problem: worst-case and probabilistic analyses. Eur. J. Oper. Res. 15(1), 100–109 (1984)


  9. Harvey, N.J.: A note on the discrepancy of matrices with bounded row and column sums. Discrete Math. 338(4), 517–521 (2015)


  10. Håstad, J.: Clique is hard to approximate within \(n^{1- \epsilon }\). Acta Math. 182(1), 105–142 (1999)


  11. Hazan, E., Safra, S., Schwartz, O.: On the complexity of approximating k-set packing. Comput. Complex. 15(1), 20–39 (2006)


  12. Mitzenmacher, M., Upfal, E.: Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge (2005)


  13. Pritchard, D.: Approximability of sparse integer programs. In: Fiat, A., Sanders, P. (eds.) ESA 2009. LNCS, vol. 5757, pp. 83–94. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04128-0_8


  14. Pritchard, D., Chakrabarty, D.: Approximability of sparse integer programs. Algorithmica 61(1), 75–93 (2011)


  15. Raghavan, P., Thompson, C.D.: Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica 7(4), 365–374 (1987)


  16. Srinivasan, A.: Improved approximation guarantees for packing and covering integer programs. SIAM J. Comput. 29(2), 648–670 (1999)


  17. Trevisan, L.: Non-approximability results for optimization problems on bounded degree instances. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pp. 453–461. ACM (2001)


  18. Zuckerman, D.: Linear degree extractors and the inapproximability of max clique and chromatic number. In: Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, pp. 681–690. ACM (2006)



Author information

Correspondence to Manuel R. Torres.


Appendices


A Chernoff Bounds and Useful Inequalities

The following standard Chernoff bound is used to obtain the more convenient form stated in Theorem 7: the proof of Theorem 7 follows by choosing \(\delta \) such that \((1 + \delta )\mu = W - \beta \) and applying Theorem 6; the short calculation is spelled out after the statement of Theorem 7 below.

Theorem 6

([12]). Let \(X_1,\ldots , X_n\) be independent random variables, where \(X_i\) takes values in \(\{0, \beta _i\}\) with \(0 < \beta _i \le \beta \le 1\) for some \(\beta \). Let \(X = \sum _i X_i\) and let \(\mu = \mathbb {E}[X]\). Then for any \(\delta > 0\),

$$ \Pr [X \ge (1 + \delta )\mu ] \le \left( \frac{e^\delta }{(1+\delta )^{1+\delta }}\right) ^{\mu /\beta }. $$

Theorem 7

Let \(X_1,\ldots , X_n \in [0,\beta ]\) be independent random variables for some \(0 < \beta \le 1\). Suppose \(\mu = \mathbb {E}[\sum _i X_i] \le \alpha W\) for some \(0< \alpha < 1\) and \(W \ge 1\) where \((1 - \alpha )W > \beta \). Then

$$ \Pr \left[ \sum _i X_i > W - \beta \right] \le \left( \frac{\alpha e^{1-\alpha } W}{W - \beta }\right) ^{(W-\beta )/\beta }. $$
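For completeness, here is the short calculation (our own, filling in the omitted proof) deriving Theorem 7 from Theorem 6. The hypotheses give \(\mu \le \alpha W < W - \beta \) (the latter since \((1-\alpha )W > \beta \)), so choosing \(\delta > 0\) with \((1+\delta )\mu = W - \beta \) and applying Theorem 6 yields

$$ \Pr \left[ \sum _i X_i > W - \beta \right] \le \left( \frac{e^\delta }{(1+\delta )^{1+\delta }}\right) ^{\mu /\beta } = \left( \frac{\mu \, e^{1 - \mu /(W-\beta )}}{W-\beta }\right) ^{(W-\beta )/\beta } \le \left( \frac{\alpha e^{1-\alpha } W}{W - \beta }\right) ^{(W-\beta )/\beta }, $$

where the equality substitutes \(1 + \delta = (W-\beta )/\mu \), and the final step uses that \(t \mapsto t e^{-t/(W-\beta )}\) is increasing for \(t < W - \beta \) (so \(\mu \) may be replaced by \(\alpha W\)) together with \(e^{-\alpha W/(W-\beta )} \le e^{-\alpha }\).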

Lemma 7

Let \(x \in (0,1]\). Then \((1/e^{1/e})^{1/x} \le x\).

Lemma 8

Let \(y \ge 2\) and \(x \in (0, 1]\). Then \(x/y \ge (1/e^{2/e})^{y/2x}\).
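Both lemmas admit one-line verifications; since the proofs are omitted in the text, we record them here (these short arguments are ours). For Lemma 7, taking logarithms, the claim is equivalent to

$$ \ln x + \frac{1}{ex} \ge 0, $$

and the left-hand side is minimized over \(x > 0\) at \(x = 1/e\), where it equals \(-1 + 1 = 0\). Lemma 8 then follows by applying Lemma 7 to \(x/y \in (0,1]\):

$$ \left( \frac{1}{e^{2/e}}\right) ^{y/2x} = \left( \frac{1}{e^{1/e}}\right) ^{y/x} \le \frac{x}{y}. $$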

B Skipped Proofs

B.1 Proof of Theorem 1

Proof

Let \(G = (V,E)\) be an undirected graph without self-loops and let \(n = \left| V\right| \). Let \(A \in [0,1]^{n \times n}\) be indexed by V. For all \(v \in V\), let \(A_{v,v} = 1\). For all \(uv \in E\), let \(A_{u,v} = A_{v,u} = 1/n\). Set all remaining entries of A to 0. Consider the following PIP:

$$\begin{aligned} \text {maximize}\ \langle x, \mathbf {1}\rangle \ \text {over} \ x \in \{0,1\}^n \ \text {s.t.} \ Ax \le 1. \end{aligned}$$
(1)

Let S be the set of all feasible integral solutions of (1) and let \(\mathcal {I}\) be the set of independent sets of G. Define \(g : S \rightarrow \mathcal {I}\) by \(g(x) = \{v : x_v = 1\}\). First, g is well defined: if \(x \in S\) and u, v are adjacent with \(x_u = x_v = 1\), then the constraint for u gives \(A_{u,u}x_u + A_{u,v}x_v = 1 + 1/n > 1\), contradicting feasibility, so g(x) is indeed an independent set. To show g is surjective, consider a set \(I \in \mathcal {I}\) and let y be its characteristic vector, i.e., \(y_v = 1\) if \(v \in I\) and 0 otherwise. Consider the row of A corresponding to an arbitrary vertex u with \(y_u = 1\). For every neighbor v of u we have \(y_v = 0\), as I is an independent set. Since the nonzero off-diagonal entries of the row of u correspond, by construction, to the neighbors of u, the constraint for u is satisfied in (1); the row of any vertex u with \(y_u = 0\) contributes at most \(\deg (u)/n \le 1\). Hence y is a feasible integral solution to (1), and as \(I = \{v : y_v = 1\}\), we get \(g(y) = I\).

Define \(h : S \rightarrow \mathbb {N}_0\) by \(h(x) = \left| g(x)\right| \). Clearly \(\max _{x \in S} h(x)\) equals the optimal value of (1). Let \(I_{max}\) be a maximum independent set of G. As g is surjective, there exists \(z \in S\) with \(g(z) = I_{max}\), so \(\max _{x\in S} h(x) \ge \left| I_{max}\right| \); conversely, since g maps into \(\mathcal {I}\), \(\max _{x\in S} h(x) \le \left| I_{max}\right| \). Hence the optimal value of (1) equals \(\left| I_{max}\right| \), and a \(\beta \)-approximation for PIPs yields a \(\beta \)-approximation for maximum independent set.

Furthermore, for this PIP we have \(\varDelta _1 \le 2\): the column of any vertex v has \(\ell _1\)-norm \(1 + \deg (v)/n \le 1 + (n-1)/n < 2\). This concludes the proof.
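To make the reduction concrete, here is a small Python sketch (ours, purely illustrative) that builds the matrix A of (1) from a graph and checks, by enumeration, that the feasible 0/1 points of (1) are exactly the characteristic vectors of independent sets.

```python
import itertools
import numpy as np

def mis_to_pip(edges, n):
    """Matrix A of the PIP (1): A[v][v] = 1, A[u][v] = A[v][u] = 1/n for
    each edge uv, and all other entries 0. Vertices are 0, ..., n-1."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0)
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0 / n
    return A

# Sanity check on a 4-cycle: feasibility in (1) coincides with independence.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = mis_to_pip(edges, n)
adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    feasible = bool((A @ x <= 1).all())
    chosen = [v for v in range(n) if bits[v]]
    independent = all((u, v) not in adj
                      for u, v in itertools.combinations(chosen, 2))
    assert feasible == independent
```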

B.2 Proof of Lemma 3

Proof

The proof proceeds similarly to the proof of Lemma 2. Since \(\alpha _1 < 1/2\), everything up to and including the application of the Chernoff bound there applies. This gives, for each \(i\in [m]\) and \(j \in [n]\),

$$ \Pr [E_{ij} | X_j = 1] \le \left( 2e\alpha _1\right) ^{(W-A_{i,j})/A_{i,j}}. $$

By choice of \(\alpha _1\), we have

$$ \left( 2e\alpha _1\right) ^{(W-A_{i,j})/A_{i,j}} = \left( \frac{1}{2e^{2/e}(1 +\varDelta _1/W)^{1/(W-1)}}\right) ^{(W-A_{i,j})/A_{i,j}}. $$

We prove the final inequality in two parts. First, note that \(\frac{W - A_{i,j}}{A_{i,j}} \ge W - 1\) since \(A_{i,j} \le 1\). Thus,

$$ \left( \frac{1}{2(1 +\varDelta _1/W)^{1/(W-1)}}\right) ^{(W-A_{i,j})/A_{i,j}} \le \frac{1}{2^{W-1}(1 + \varDelta _1/W)} \le \frac{W}{2\varDelta _1}, $$

where the last inequality uses \(2^{W-1} \ge 2\) (as \(W \ge 2\)) and \(1 + \varDelta _1/W \ge \varDelta _1/W\).

Second, we see that

$$ \left( \frac{1}{e^{2/e}}\right) ^{(W-A_{i,j})/A_{i,j}} \le \left( \frac{1}{e^{2/e}}\right) ^{W/2A_{i,j}} \le \frac{A_{i,j}}{W}, $$

where the first inequality holds because \(W \ge 2\) and \(A_{i,j} \le 1\) imply \(W - A_{i,j} \ge W/2\), and the second inequality holds by Lemma 8. Multiplying the two bounds gives \(\left( 2e\alpha _1\right) ^{(W-A_{i,j})/A_{i,j}} \le \frac{W}{2\varDelta _1} \cdot \frac{A_{i,j}}{W} = \frac{A_{i,j}}{2\varDelta _1}\), completing the proof.


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this paper

Chekuri, C., Quanrud, K., Torres, M.R. (2019). \(\ell _1\)-sparsity Approximation Bounds for Packing Integer Programs. In: Lodi, A., Nagarajan, V. (eds) Integer Programming and Combinatorial Optimization. IPCO 2019. Lecture Notes in Computer Science, vol 11480. Springer, Cham. https://doi.org/10.1007/978-3-030-17953-3_10


  • DOI: https://doi.org/10.1007/978-3-030-17953-3_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17952-6

  • Online ISBN: 978-3-030-17953-3

  • eBook Packages: Computer Science, Computer Science (R0)
