Slack and Margin Rescaling as Convex Extensions of Supermodular Functions

  • Matthew B. Blaschko
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10746)


Slack and margin rescaling are variants of the structured output SVM, which is frequently applied to computer vision problems such as image segmentation, object localization, and learning parts-based object models. They define convex surrogates to task-specific loss functions which, when specialized to non-additive loss functions for multi-label problems, yield extensions to increasing set functions. We demonstrate in this paper that these concepts can be used to define polynomial-time convex extensions of arbitrary supermodular functions, providing a framework for analyzing the tightness of these surrogates. This analysis shows that, while neither margin nor slack rescaling dominates the other, known bounds on supermodular functions can be used to derive extensions that dominate both, indicating possible directions for defining novel structured output prediction surrogates. Beyond the analysis of structured prediction loss functions, these results imply an approach to supermodular minimization in which margin rescaling is combined with non-polynomial-time convex extensions to compute a sequence of LP relaxations reminiscent of a cutting-plane method. This approach is applied to the problem of selecting representative exemplars from a set of images, validating our theoretical contributions.
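The two surrogates named above are easiest to see in the finite-output case. The sketch below evaluates the standard margin- and slack-rescaled hinge losses from the structured output SVM literature (Tsochantaridis et al.) by brute-force enumeration over subsets for a toy multi-label problem. The weights, score function, and Hamming loss here are illustrative assumptions, not the paper's construction.

```python
import itertools

def margin_rescaled_loss(delta, score, labels, y_true):
    """Margin rescaling: max_y Delta(y, y*) + s(y) - s(y*)."""
    return max(delta(y, y_true) + score(y) - score(y_true) for y in labels)

def slack_rescaled_loss(delta, score, labels, y_true):
    """Slack rescaling: max_y Delta(y, y*) * (1 + s(y) - s(y*))."""
    return max(delta(y, y_true) * (1.0 + score(y) - score(y_true)) for y in labels)

# Toy multi-label problem: outputs are subsets of {0, ..., p-1}.
# Enumerating all 2^p labelings is feasible only for small p.
p = 3
labels = [frozenset(s) for r in range(p + 1)
          for s in itertools.combinations(range(p), r)]
y_true = frozenset({0, 1})
w = [0.5, -0.2, 0.1]  # illustrative per-element scores (an assumption)

def score(y):
    return sum(w[i] for i in y)

def hamming(y, y_star):
    # Symmetric-difference (Hamming) loss; any increasing set loss could be used.
    return len(y ^ y_star)

print(margin_rescaled_loss(hamming, score, labels, y_true))  # approx. 2.8
print(slack_rescaled_loss(hamming, score, labels, y_true))   # approx. 2.6
```

Both quantities upper-bound the task loss of the score maximizer, which is what makes them usable as training surrogates; when the loss is an increasing set function, the maximization over subsets is what yields the set-function extensions analyzed in the paper. The exponential-time max here is purely illustrative.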



This work is funded by Internal Funds KU Leuven, FP7-MC-CIG 334380, and the Research Foundation - Flanders (FWO) through project number G0A2716N.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Center for Processing Speech and Images, Departement Elektrotechniek, KU Leuven, Leuven, Belgium
