
The Iteration-Complexity Upper Bound for the Mizuno-Todd-Ye Predictor-Corrector Algorithm is Tight

  • Conference paper

In: Modeling and Optimization: Theory and Applications (MOPTA 2017)

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 279)


Abstract

It is an open question whether there is an interior-point algorithm for linear optimization problems with lower iteration complexity than the classical bound \(\mathcal {O}(\sqrt{n} \log (\frac{\mu _1}{\mu _0}))\). This paper provides a negative answer to that question for a variant of the Mizuno-Todd-Ye predictor-corrector algorithm. In fact, we prove that for any \(\varepsilon >0\), there is a redundant Klee-Minty cube for which the aforementioned algorithm requires \(n^{\frac{1}{2}-\varepsilon } \) iterations to reduce the barrier parameter by at least a constant factor. This is the first case of an adaptive-step interior-point algorithm for which the classical iteration-complexity upper bound is proven to be tight.


References

  1. Deza, A., Nematollahi, E., Peyghami, R., Terlaky, T.: The central path visits all the vertices of the Klee-Minty cube. Optim. Methods Softw. 21(5), 851–865 (2006)


  2. Deza, A., Nematollahi, E., Terlaky, T.: How good are interior point methods? Klee-Minty cubes tighten iteration-complexity bounds. Math. Program. 113(1), 1–14 (2008)


  3. Huhn, P., Borgwardt, K.H.: Interior-point methods: worst case and average case analysis of a phase-I algorithm and a termination procedure. J. Complex. 18(3), 833–910 (2002)


  4. Jansen, B., Roos, C., Terlaky, T.: A short survey on ten years interior point methods. Technical report 95–45, Delft University of Technology, Delft, The Netherlands (1995)


  5. Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373–395 (1984)


  6. Megiddo, N., Shub, M.: Boundary behavior of interior point algorithms in linear programming. Math. Oper. Res. 14(1), 97–146 (1989)


  7. Mizuno, S., Todd, M., Ye, Y.: Anticipated behavior of path-following algorithms for linear programming. Technical report 878, School of Operations Research and Industrial Engineering, Ithaca, New York (1989)


  8. Mizuno, S., Todd, M.J., Ye, Y.: On adaptive-step primal-dual interior-point algorithms for linear programming. Math. Oper. Res. 18(4), 964–981 (1993)


  9. Mut, M., Terlaky, T.: An analogue of the Klee-Walkup result for Sonnevend’s curvature of the central path. J. Optim. Theory Appl. 169(1), 17–31 (2016)


  10. Nematollahi, E., Terlaky, T.: A redundant Klee-Minty construction with all the redundant constraints touching the feasible region. Oper. Res. Lett. 36(4), 414–418 (2008)


  11. Nematollahi, E., Terlaky, T.: A simpler and tighter redundant Klee-Minty construction. Optim. Lett. 2(3), 403–414 (2008)


  12. Nesterov, Y., Nemirovskii, A.: Interior-Point Polynomial Algorithms in Convex Programming, vol. 13. SIAM, Philadelphia (1994)


  13. Potra, F.A.: A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points. Math. Program. 67(1–3), 383–406 (1994)


  14. Roos, C., Terlaky, T., Vial, J.P.: Interior Point Methods for Linear Optimization. Springer, New York (2006)


  15. Sonnevend, G., Stoer, J., Zhao, G.: On the complexity of following the central path of linear programs by linear extrapolation II. Math. Program. 52, 527–553 (1991)


  16. Stoer, J., Zhao, G.: Estimating the complexity of a class of path-following methods for solving linear programs by curvature integrals. Appl. Math. Optim. 27, 85–103 (1993)


  17. Todd, M.J.: A lower bound on the number of iterations of primal-dual interior-point methods for linear programming. Technical report, Cornell University Operations Research and Industrial Engineering (1993)


  18. Todd, M.J., Ye, Y.: A lower bound on the number of iterations of long-step primal-dual linear programming algorithms. Ann. Oper. Res. 62(1), 233–252 (1996)


  19. Zhao, G.: On the relationship between the curvature integral and the complexity of path-following methods in linear programming. SIAM J. Optim. 6(1), 57–73 (1996)



Acknowledgements

Research supported by a Start-up grant of Lehigh University. It is also supported by TAMOP-4.2.2.A-11/1KONV-2012-0012: Basic research for the development of hybrid and electric vehicles. The TAMOP Project is supported by the European Union and co-financed by the European Regional Development Fund.

Author information

Correspondence to Murat Mut.

Appendix

Lemma 8.1

For large enough r, there is a one-dimensional LO problem with \((r+1)\) constraints for which \(\tau _1 \sqrt{r}\le \kappa (\mu )\le \tau _2 \sqrt{r}\) for all \(\mu \in [\alpha _1,\alpha _2]\), where \(\alpha _1=\frac{1}{r-\frac{\sqrt{r}}{4}}\) and \(\alpha _2=\frac{1}{r-\sqrt{r}}\), for some constants \(\tau _1,\tau _2 > 0\).

Proof

Consider the problem \(\min \{ \ y : \ y \le 1 \ \mathrm {and} \ y \ge 0 \ \mathrm {counted}\ r \ \mathrm {times} \}\); the construction is given in [15], p. 551. Consider the interval \([\alpha _1,\alpha _2]\), where \(\alpha _1=\frac{1}{r-\frac{\sqrt{r}}{4}}\) and \(\alpha _2=\frac{1}{r-\sqrt{r}}\), and let \(s_0(\mu )=1-y(\mu )\). It is shown in [15], p. 551, that \(\displaystyle \frac{\dot{s}_0(\mu )}{s_0(\mu )} \ge \displaystyle \frac{r^2}{3\sqrt{r}}\) on \([\alpha _1,\alpha _2]\). Since \(\mu =\varTheta (\frac{1}{r})\) on this interval, this implies \(\displaystyle \frac{\mu \dot{s}_0(\mu )}{s_0(\mu )}=\varOmega (\sqrt{r})\) on \([\alpha _1,\alpha _2]\). Then, from Proposition 2.1, part 1, we have \(\kappa (\mu )=\varOmega (\sqrt{r})\) for all \(\mu \in [\alpha _1,\alpha _2]\). The proof is complete.    \(\square \)
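The \(\varOmega (\sqrt{r})\) behavior above can be checked numerically: for this instance the barrier stationarity condition reduces to a quadratic in y, so the central path is available in closed form. The sketch below (helper names are ours; we check the magnitude \(|\mu \dot{s}_0(\mu )/s_0(\mu )|\), since the sign depends on the parameterization convention used in [15]) evaluates the scaled quantity on \([\alpha _1,\alpha _2]\).

```python
import math

def central_path_y(mu, r):
    # Stationarity of y - mu*(log(1 - y) + r*log(y)) reduces to the
    # quadratic y^2 - (1 + mu*(r + 1))*y + mu*r = 0; take the root in (0, 1).
    a = 1.0 + mu * (r + 1)
    return (a - math.sqrt(a * a - 4.0 * mu * r)) / 2.0

def dy_dmu(mu, r):
    # Implicit differentiation of the quadratic above.
    y = central_path_y(mu, r)
    a = 1.0 + mu * (r + 1)
    return ((r + 1) * y - r) / (2.0 * y - a)

def min_scaled_ratio(r, grid=200):
    # Minimum of |mu * s0'(mu) / s0(mu)| / sqrt(r) over [alpha_1, alpha_2],
    # where s0(mu) = 1 - y(mu), so s0'(mu) = -y'(mu).
    a1 = 1.0 / (r - math.sqrt(r) / 4.0)
    a2 = 1.0 / (r - math.sqrt(r))
    best = float("inf")
    for i in range(grid + 1):
        mu = a1 + (a2 - a1) * i / grid
        y = central_path_y(mu, r)
        best = min(best, abs(mu * dy_dmu(mu, r) / (1.0 - y)) / math.sqrt(r))
    return best

if __name__ == "__main__":
    for r in (100, 1000, 10000):
        print(r, min_scaled_ratio(r))
```

In our runs, the scaled ratio stays above 1/3 across \([\alpha _1,\alpha _2]\) for r from \(10^2\) to \(10^4\), consistent with \(\kappa (\mu ) \ge \tau _1 \sqrt{r}\).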

Proposition 8.1

Consider the LO problems

$$\begin{aligned} \begin{array}{crl} \min & (c^1)^T x^1 & \\ \mathrm {s.t. } & A^1 x^1 & = b^1 \\ & x^1 & \ge 0, \end{array} \quad \text { and } \quad \begin{array}{crl} \min & (c^2)^T x^2 & \\ \mathrm {s.t. } & A^2 x^2 & = b^2 \\ & x^2 & \ge 0, \end{array} \end{aligned}$$
(19)

with the corresponding \(\kappa ^1(\mu )\) and \(\kappa ^2(\mu )\) on the interval \([\mu _0,\mu _1]\). Then for the problem

$$\begin{aligned} \begin{array}{crl} \min & c^T x & \\ \mathrm {s.t. } & A x & = b \\ & x & \ge 0, \end{array} \end{aligned}$$
(20)

with the corresponding \(\overline{\kappa }(\mu )\) on \([\mu _0,\mu _1]\), where \(c=\left[ \begin{array}{c} c^1\\ c^2\\ \end{array} \right] \), \(b=\left[ \begin{array}{c} b^1\\ b^2\\ \end{array} \right] \) and \(A= \left[ \begin{array}{cc} A^1 & 0 \\ 0 & A^2 \\ \end{array} \right] \), we have \(\overline{\kappa }(\mu ) \ge \kappa ^i(\mu )\) for \(i=1,2\).

Proof

Let \(\left( x^1(\mu ),y^1(\mu ),s^1(\mu )\right) \) and \(\left( x^2(\mu ),y^2(\mu ),s^2(\mu )\right) \) be the central paths of the two problems in (19). Since (20) is block diagonal, its central path is the concatenation of the two, so for the combined problem \(\overline{\kappa }(\mu )=\left\| \left( \mu \dot{x}^1 \dot{s}^1 , \mu \dot{x}^2 \dot{s}^2\right) \right\| ^{\frac{1}{2}} \ge \left\| \mu \dot{x}^i \dot{s}^i \right\| ^{\frac{1}{2}} = \kappa ^i(\mu )\) for \(i=1,2\), because the norm of a concatenated vector dominates the norm of each of its blocks.    \(\square \)

Proposition 8.2

Let \(\eta >0\) and consider the central path (2) and its \(\kappa (\mu )\). Let \((\hat{A},\hat{b},\hat{c})\) be another problem instance, where \((\hat{A},\hat{b},\hat{c})=(A,\frac{b}{\eta },c)\), with its corresponding \(\hat{\kappa }(\mu )\). Then, we have

$$\begin{aligned} \hat{\kappa }(\mu )=\kappa (\eta \mu ), \ \mu \in \left[ \frac{\mu _0}{\eta },\frac{\mu _1}{\eta } \right] . \end{aligned}$$
(21)

Proof

Using (2), it is straightforward to verify that the central path \((\hat{x}(\mu ),\hat{y}(\mu ),\hat{s}(\mu ))\) of the new problem satisfies \(\hat{x}(\mu )= \displaystyle \frac{x(\eta \mu )}{\eta }\), \(\hat{y}(\mu )= y(\eta \mu )\) and \(\hat{s}(\mu )= s(\eta \mu )\). Using the definition of \(\kappa (\mu )\), we get \(\hat{\kappa }({\mu })=\kappa ({\eta \mu })\). Hence the claim follows.    \(\square \)
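Assuming (2) is the standard primal-dual central-path system \(Ax=b\), \(A^Ty+s=c\), \(xs=\mu e\) (as is usual in this setting), the verification can be written out as follows:

```latex
% Substitute the candidate point
% (\hat{x},\hat{y},\hat{s})(\mu) = (x(\eta\mu)/\eta,\; y(\eta\mu),\; s(\eta\mu))
% into the central-path system of (\hat{A},\hat{b},\hat{c}) = (A, b/\eta, c):
\hat{A}\hat{x}(\mu) = \tfrac{1}{\eta}\,A\,x(\eta\mu) = \tfrac{b}{\eta} = \hat{b},
\qquad
\hat{A}^{T}\hat{y}(\mu) + \hat{s}(\mu) = A^{T}y(\eta\mu) + s(\eta\mu) = c = \hat{c},
\qquad
\hat{x}(\mu)\,\hat{s}(\mu) = \tfrac{1}{\eta}\,x(\eta\mu)\,s(\eta\mu)
  = \tfrac{1}{\eta}\,\eta\mu\,e = \mu e.
```

Differentiating in \(\mu \) gives \(\mu \dot{\hat{x}}(\mu )\dot{\hat{s}}(\mu ) = \mu \,\dot{x}(\eta \mu )\,\eta \,\dot{s}(\eta \mu ) = (\eta \mu )\,\dot{x}(\eta \mu )\dot{s}(\eta \mu )\), which yields \(\hat{\kappa }(\mu )=\kappa (\eta \mu )\).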

Lemma 8.2

Given an interval \([\mu _0,\mu _1]\) and a constant \(\nu >0\), there exists an LO problem of size \(n=\varTheta \left( \log (\frac{\mu _1}{\mu _0})\right) \) such that \(\overline{\kappa }(\mu ) \ge \nu \) for all \(\mu \in [\mu _0,\mu _1]\). The hidden constant in \(n=\varTheta \left( \log (\frac{\mu _1}{\mu _0})\right) \) depends on \(\nu \).

Proof

Let a constant \(\nu >0\) and an interval \([\mu _0,\mu _1]\) be given. For the given \(\nu >0\), by Lemma 8.1, there exists an LO problem with \(\kappa (\mu ) \ge \nu \) on an interval \([\alpha _1,\alpha _2]\). By applying Proposition 8.2 with \(\eta _i:=\frac{\alpha _1}{\left( \frac{\alpha _2}{\alpha _1}\right) ^i \mu _0}\) for \(i=0,1,\dots ,k-1\), we obtain k scaled LO problems with corresponding \(\kappa ^i(\mu )\) such that \(\kappa ^i(\mu )=\kappa (\eta _i \mu ) \ge \nu \) on \(\mu \in \left[ (\frac{\alpha _2}{\alpha _1})^i \mu _0, (\frac{\alpha _2}{\alpha _1})^{i+1} \mu _0 \right] \), for \(i=0,1,\dots ,k-1\). Then, by Proposition 8.1, stacking these k problems yields a block-diagonal LO problem with \(\overline{\kappa }(\mu ) \ge \kappa ^i(\mu ) \ge \nu \) on the i-th subinterval, hence \(\overline{\kappa }(\mu ) \ge \nu \) for any \(\mu \in \left[ \mu _0, \left( \frac{\alpha _2}{\alpha _1}\right) ^k \mu _0 \right] \). In order to have \(\overline{\kappa }(\mu ) \ge \nu \) for any \(\mu \in [\mu _0, \mu _1 ]\), it is then enough to have \(\left( \frac{\alpha _2}{\alpha _1}\right) ^k \mu _0 \ge \mu _1\), which holds if and only if \(k \log \left( \frac{\alpha _2}{\alpha _1}\right) \ge \log \left( \frac{\mu _1}{\mu _0} \right) \). Since, by Lemma 8.1, the ratio \( \frac{\alpha _2}{\alpha _1}\) is a constant depending only on the given \(\nu \), the number of blocks needed is \(k=\varTheta \left( \log (\frac{\mu _1}{\mu _0})\right) \). Also, since the size of the LO problem from Lemma 8.1 is a constant determined only by \(\nu \), the size of the stacked problem is \(n=\varTheta \left( \log (\frac{\mu _1}{\mu _0})\right) \), and it achieves \(\overline{\kappa }(\mu ) \ge \nu \) for all \(\mu \in [\mu _0,\mu _1]\). This completes the proof.    \(\square \)
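The counting step in the proof can be sketched as follows; `num_blocks` is a hypothetical helper name, and r stands in for the constant instance size fixed by the choice of \(\nu \).

```python
import math

def num_blocks(mu0, mu1, r):
    # Each scaled copy of the Lemma 8.1 instance covers one window
    # [(a2/a1)^i * mu0, (a2/a1)^(i+1) * mu0]; k such windows must cover
    # [mu0, mu1], so k * log(a2/a1) >= log(mu1/mu0).
    a1 = 1.0 / (r - math.sqrt(r) / 4.0)
    a2 = 1.0 / (r - math.sqrt(r))
    return math.ceil(math.log(mu1 / mu0) / math.log(a2 / a1))

if __name__ == "__main__":
    for mu0 in (1e-4, 1e-8, 1e-16):
        print(mu0, num_blocks(mu0, 1.0, 100))
```

Since r (and hence \(\alpha _2/\alpha _1\)) is fixed by \(\nu \), the block count k grows linearly in \(\log (\frac{\mu _1}{\mu _0})\), matching \(n=\varTheta \left( \log (\frac{\mu _1}{\mu _0})\right) \).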


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Mut, M., Terlaky, T. (2019). The Iteration-Complexity Upper Bound for the Mizuno-Todd-Ye Predictor-Corrector Algorithm is Tight. In: Pintér, J.D., Terlaky, T. (eds) Modeling and Optimization: Theory and Applications. MOPTA 2017. Springer Proceedings in Mathematics & Statistics, vol 279. Springer, Cham. https://doi.org/10.1007/978-3-030-12119-8_6

