Learning CRFs Using Graph Cuts

  • Martin Szummer
  • Pushmeet Kohli
  • Derek Hoiem
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5303)


Abstract

Many computer vision problems are naturally formulated as random fields, specifically MRFs or CRFs. The introduction of graph cuts has enabled efficient and optimal inference in associative random fields, greatly advancing applications such as segmentation and stereo reconstruction. However, while fast inference is now widespread, parameter learning in random fields has remained an intractable problem. This paper shows how to apply fast inference algorithms, in particular graph cuts, to learn the parameters of random fields with similar efficiency. We find optimal parameter values under standard regularized objective functions that ensure good generalization. Our algorithm enables learning of many parameters in reasonable time, and we explore further speedup techniques. We also discuss extensions to non-associative and multi-class problems. We evaluate the method on image segmentation and geometry recognition.
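The pattern the abstract describes — running graph-cut inference inside the learning loop — can be illustrated with a small sketch. The paper's actual algorithm is a max-margin method; the sketch below instead uses a simpler Collins-style structured-perceptron update to show the same pattern, and is not the authors' implementation. The function names (`min_cut_labeling`, `perceptron_step`) and the two-parameter energy are invented for illustration. Inference uses the standard Kolmogorov–Zabih s-t graph construction for submodular binary energies, with Edmonds–Karp max flow; the learning step projects the pairwise weight back to nonnegative values so the energy stays submodular (associative) and graph cuts remain exact.

```python
from collections import deque

def min_cut_labeling(unary, pairs, lam):
    """Minimize E(y) = sum_i unary[i][y_i] + lam * sum_{(i,j)} [y_i != y_j]
    over binary labels y, via the standard s-t min-cut construction and
    Edmonds-Karp max flow. Returns the optimal labeling as a 0/1 list."""
    n = len(unary)
    S, T = n, n + 1                       # source and sink node ids
    cap = [dict() for _ in range(n + 2)]  # residual capacities

    def add(u, v, c):
        cap[u][v] = cap[u].get(v, 0.0) + c
        cap[v].setdefault(u, 0.0)         # ensure reverse residual edge exists

    for i, (c0, c1) in enumerate(unary):
        m = min(c0, c1)                   # shift so capacities are nonnegative
        add(S, i, c1 - m)                 # this edge is severed when y_i = 1
        add(i, T, c0 - m)                 # this edge is severed when y_i = 0
    for i, j in pairs:                    # Potts term: cut pays lam if labels differ
        add(i, j, lam)
        add(j, i, lam)

    while True:                           # Edmonds-Karp: BFS for augmenting paths
        parent, q = {S: None}, deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][v] for u, v in path)
        for u, v in path:                 # push bottleneck flow along the path
            cap[u][v] -= f
            cap[v][u] += f

    reach, q = {S}, deque([S])            # source side of the min cut -> label 0
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 1e-12 and v not in reach:
                reach.add(v)
                q.append(v)
    return [0 if i in reach else 1 for i in range(n)]

def perceptron_step(w, feats, pairs, y_true, lr=0.1):
    """One structured-perceptron-style update for the illustrative energy
    E(y) = wu * sum_i feats[i]*y_i + wp * sum_{(i,j)} [y_i != y_j]:
    infer a labeling with graph cuts under the current weights, then move
    the weights toward the ground truth and project wp back to >= 0."""
    wu, wp = w
    y_hat = min_cut_labeling([(0.0, wu * f) for f in feats], pairs, wp)

    def psi(y):  # sufficient statistics (unary, pairwise) of a labeling
        return (sum(f for f, yi in zip(feats, y) if yi == 1),
                sum(1 for i, j in pairs if y[i] != y[j]))

    (uh, ph), (ut, pt) = psi(y_hat), psi(y_true)
    return wu + lr * (uh - ut), max(wp + lr * (ph - pt), 0.0)
```

For example, on a 4-node chain whose unary costs make nodes 0, 1 prefer label 0 and nodes 2, 3 prefer label 1, `min_cut_labeling` with a small smoothness weight returns [0, 0, 1, 1], paying for a single label boundary. The paper's max-margin objective replaces the plain inference call with loss-augmented inference, but the structure — a min-cut solve per training example per iteration — is the same.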


Keywords: Ground Truth · Loss Function · Parameter Learning · Submodular Function · Foreground Region



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Martin Szummer (1)
  • Pushmeet Kohli (1)
  • Derek Hoiem (2)
  1. Microsoft Research, Cambridge, United Kingdom
  2. Beckman Institute, University of Illinois at Urbana-Champaign, USA
