The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm
The purpose of this work is to present the MOSEK optimizer, intended for the solution of large-scale sparse linear programs. The optimizer is based on the homogeneous interior-point algorithm, which, in contrast to the primal-dual algorithm, reliably detects a possible primal or dual infeasibility. It employs advanced (parallelized) linear algebra, handles dense columns in the constraint matrix efficiently, and includes a basis identification procedure.
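The homogeneous algorithm referred to here is commonly based on the Goldman-Tucker self-dual embedding of the primal-dual pair. A standard statement of that model (assumed here; the paper's exact formulation may differ in detail) is:

```latex
\begin{aligned}
A x - b \tau &= 0,\\
A^{\mathsf T} y + s - c \tau &= 0,\\
-c^{\mathsf T} x + b^{\mathsf T} y - \kappa &= 0,\\
x \ge 0,\quad s \ge 0,\quad \tau \ge 0,\quad \kappa \ge 0.
\end{aligned}
```

In this embedding any maximally complementary solution satisfies $x^{\mathsf T} s + \tau\kappa = 0$ with $\tau + \kappa > 0$: if $\tau > 0$, then $(x, y, s)/\tau$ is an optimal primal-dual pair for the original problem, while $\kappa > 0$ certifies primal or dual infeasibility. This dichotomy is precisely the reliable infeasibility detection claimed above for the homogeneous algorithm.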
This paper discusses in detail the algorithm and the linear algebra employed by the MOSEK interior-point optimizer; the homogeneous algorithm in particular is emphasized. Furthermore, extensive computational results are reported, including comparative results for the XPRESS simplex optimizer and the MOSEK interior-point optimizer. Finally, computational results demonstrate the possible speed-up obtained when running a parallelized version of the MOSEK interior-point optimizer on a multiprocessor Silicon Graphics computer.
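The efficient handling of dense columns mentioned above is typically done by splitting them out of the normal-equations matrix and correcting the sparse factorization with a low-rank update. The following sketch (not MOSEK's actual code; all names are illustrative) shows the idea for a single dense column via the Sherman-Morrison formula:

```python
# Hypothetical sketch: solve the normal equations (S + d * a a^T) y = b,
# where S is the contribution of the sparse columns of the constraint
# matrix and a is one dense column with scaling d. Only S is factorized;
# the dense column is handled by a rank-one correction.
import numpy as np

def solve_with_dense_column(S, a, d, b):
    """Solve (S + d * a a^T) y = b using a Cholesky factorization of S only."""
    L = np.linalg.cholesky(S)              # factorize the sparse part once
    def solve_S(rhs):                      # two triangular solves with L
        return np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    u = solve_S(a)                         # u = S^{-1} a
    v = solve_S(b)                         # v = S^{-1} b
    # Sherman-Morrison: y = v - d * (a^T v) / (1 + d * a^T u) * u
    return v - (d * (a @ v) / (1.0 + d * (a @ u))) * u

# Tiny usage example with a random symmetric positive definite S
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
S = M @ M.T + 5.0 * np.eye(5)              # SPD "sparse" part
a = rng.standard_normal(5)                 # the dense column
d, b = 2.0, rng.standard_normal(5)
y = solve_with_dense_column(S, a, d, b)
assert np.allclose((S + d * np.outer(a, a)) @ y, b)
```

The payoff is that the Cholesky factor of `S` stays sparse, whereas factorizing `S + d * a a^T` directly would fill in completely because of the dense column.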
Keywords: Interior Point · Dense Column · Interior Point Method · Cholesky Decomposition · Complementary Solution