Journal of Global Optimization, Volume 58, Issue 1, pp 109–135

An entire space polynomial-time algorithm for linear programming

  • Da Gang Tian

Abstract

We propose an entire space polynomial-time algorithm for linear programming. First, we give a class of penalty functions on the entire space for linear programming, by which the dual of a linear program in standard form can be converted into an unconstrained optimization problem. The relevant properties of this unconstrained problem, such as duality, boundedness of the solution, and the path-following lemma, are proved. Second, a self-concordant function on the entire space that can serve as a penalty function for linear programming is constructed. For this specific function, further results are obtained. In particular, we show that, by taking the parameter large enough, the optimal solution of the unconstrained optimization problem is located in the increasing interval of the self-concordant function, which ensures the feasibility of solutions. Then, by means of this self-concordant penalty function on the entire space, a path-following algorithm on the entire space for linear programming is presented. The number of Newton steps of the algorithm is no more than \(O(nL\log (nL/{\varepsilon }))\), and for the short-step version it is no more than \(O(\sqrt{n}\log (nL/{\varepsilon }))\).
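To make the reformulation described above concrete, the following display sketches, under stated assumptions, how a penalty function defined on the whole space can turn the dual of a standard-form linear program into an unconstrained problem. The classical exponential penalty \(p(t)=e^{t}\) is used here purely as a familiar instance of such a class of penalties; it is not the specific self-concordant penalty constructed in the paper.

% Illustration only (assumption): p is a smooth, convex, increasing penalty
% finite on all of R, e.g. the exponential penalty p(t) = e^t; the paper's
% own self-concordant penalty is not reproduced here.
\[
\text{(P)}\;\; \min_{x}\; c^{\mathsf T}x \;\;\text{s.t.}\;\; Ax=b,\; x\ge 0,
\qquad
\text{(D)}\;\; \max_{y}\; b^{\mathsf T}y \;\;\text{s.t.}\;\; A^{\mathsf T}y\le c,
\]
\[
\min_{y\in\mathbb{R}^{m}}\; F_{\mu}(y)
  \;=\; -\,b^{\mathsf T}y \;+\; \frac{1}{\mu}\sum_{i=1}^{n}
        p\bigl(\mu\,(a_{i}^{\mathsf T}y-c_{i})\bigr),
\qquad \mu>0,
\]
where \(a_{i}\) denotes the \(i\)-th column of \(A\). Because \(p\) is finite everywhere, \(F_{\mu}\) is smooth on all of \(\mathbb{R}^{m}\) rather than only on the dual feasible region, so Newton-type path following can be started from an arbitrary, possibly infeasible, point; under suitable assumptions, increasing \(\mu\) drives the minimizers of \(F_{\mu}\) toward a dual optimal solution.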

Keywords

Polynomial-time algorithm · Linear programming · Entire space · Self-concordance · Penalty function

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Business School, Shanghai University for Science and Technology, Shanghai, China