Computational Optimization and Applications, Volume 64, Issue 2, pp. 489–511

On how to solve large-scale log-determinant optimization problems



Abstract

We propose a proximal augmented Lagrangian method and a hybrid method for solving large-scale nonlinear semidefinite programming problems whose objective functions are the sum of a convex quadratic function and a log-determinant term. The hybrid method first employs the proximal augmented Lagrangian method to generate a good initial point and then applies the Newton-CG augmented Lagrangian method to obtain a highly accurate solution. We demonstrate that these algorithms can supply high-quality solutions efficiently, even for some ill-conditioned problems.


Keywords: Quadratic programming · Log-determinant optimization problem · Proximal augmented Lagrangian method · Augmented Lagrangian method · Newton-CG method
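To make the problem class concrete: proximal-point schemes for objectives of this form repeatedly evaluate the proximal mapping of the log-determinant term, which has a closed form via an eigenvalue decomposition. The sketch below illustrates only that standard building block; it is a minimal illustration under stated assumptions, not the paper's implementation, and the function name prox_neg_logdet is hypothetical.

```python
import numpy as np

def prox_neg_logdet(Y, t):
    """Proximal mapping of the log-determinant term:
        argmin_X  -t * log(det X) + 0.5 * ||X - Y||_F^2   over X > 0.
    The optimality condition X - Y = t * X^{-1} is solved eigenvalue-wise:
    each eigenvalue d of Y maps to the positive root of x^2 - d*x - t = 0,
    namely (d + sqrt(d^2 + 4t)) / 2, so the result is positive definite."""
    d, U = np.linalg.eigh(Y)                 # Y = U diag(d) U^T
    x = (d + np.sqrt(d**2 + 4.0 * t)) / 2.0  # positive root, entrywise
    return (U * x) @ U.T                     # U diag(x) U^T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    Y = (A + A.T) / 2                        # arbitrary symmetric input
    X = prox_neg_logdet(Y, t=0.5)
    # verify the optimality condition X - Y = t * X^{-1}
    print(np.allclose(X - Y, 0.5 * np.linalg.inv(X)))  # expect: True
```

The closed form follows because Y and the minimizer X share the same eigenvectors, so the matrix problem decouples into scalar quadratic equations, one per eigenvalue.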



Acknowledgments

I sincerely thank the Institute for Mathematical Sciences, National University of Singapore, for supporting my visit to the institute and my attendance at the workshop “Optimization: Computation, Theory and Modeling” in 2012, which gave me the opportunity to have fruitful discussions with Professors Defeng Sun and Kim-Chuan Toh. I thank Dr. Xinyuan Zhao of Beijing University of Technology for many discussions on this topic. I also thank the two anonymous referees and the editor for their helpful comments and suggestions, which improved the quality of this paper. The author’s research was supported by the National Natural Science Foundation of China under Grant 11201382, the Youth Fund of Humanities and Social Sciences of the Ministry of Education under Grant 12YJC910008, the project of the Science and Technology Department of Sichuan Province under Grant 2012ZR0154, and the Fundamental Research Funds for the Central Universities under Grants SWJTU12CX055 and SWJTU12ZT15.



Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

School of Mathematics, Southwest Jiaotong University, Chengdu, China
