
Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization

  • Conference paper
Algorithmic Learning Theory (ALT 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6925)


Abstract

Finding linear classifiers that maximize AUC scores is important in ranking research. The task is naturally formulated as a 1-norm hard/soft margin optimization problem over the pn pairs formed from p positive and n negative instances. However, solving these optimization problems directly is impractical, since the problem size (pn) grows quadratically in the given sample size (p+n). In this paper, we give (approximate) reductions from these problems to hard/soft margin optimization problems of linear size. First, for the hard margin case, we show that the problem reduces to a hard margin optimization problem over the p+n instances in which the bias (constant) term is also optimized. Then, for the soft margin case, we show that the problem approximately reduces to a soft margin optimization problem over the p+n instances whose solution is guaranteed to attain a certain margin over the pairs.
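To make the pairwise formulation concrete, the following is a minimal sketch in Python, not the paper's algorithm: it solves the naive 1-norm soft margin LP over all pn pairwise differences with an off-the-shelf LP solver. The LPBoost-style normalization (w restricted to the probability simplex), the soft margin parameter nu, and the function name are illustrative assumptions rather than details from the paper; the point is that the constraint matrix has pn rows, which is exactly the quadratic blow-up the reductions above avoid.

import numpy as np
from scipy.optimize import linprog

def pairwise_soft_margin_lp(X_pos, X_neg, nu=0.2):
    # Naive 1-norm soft margin LP over all p*n positive-negative pairs
    # (illustrative assumption, not the paper's reduction):
    #   maximize  rho - (1/(nu*p*n)) * sum(xi)
    #   s.t.      w . (x_i - x'_j) >= rho - xi_ij  for every pair (i, j),
    #             w >= 0, sum(w) = 1, xi >= 0.
    p, d = X_pos.shape
    n = X_neg.shape[0]
    m = p * n  # number of pairs: quadratic in the sample size p + n
    # One row per (positive, negative) difference x_i - x'_j.
    D = (X_pos[:, None, :] - X_neg[None, :, :]).reshape(m, d)
    # Decision vector z = [w (d), rho (1), xi (m)]; linprog minimizes c @ z.
    c = np.concatenate([np.zeros(d), [-1.0], np.full(m, 1.0 / (nu * m))])
    # Margin constraints rewritten as -D w + rho - xi <= 0.
    A_ub = np.hstack([-D, np.ones((m, 1)), -np.eye(m)])
    b_ub = np.zeros(m)
    # Normalization sum(w) = 1 (equals ||w||_1 since w >= 0).
    A_eq = np.concatenate([np.ones(d), [0.0], np.zeros(m)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * d + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:d], res.x[d]  # weight vector w and margin rho

# Toy usage on two Gaussian clouds; the AUC of the learned scorer is the
# fraction of positive-negative pairs it ranks correctly.
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(20, 5))
X_neg = rng.normal(0.0, 1.0, size=(30, 5))
w, rho = pairwise_soft_margin_lp(X_pos, X_neg)
auc = np.mean((X_pos @ w)[:, None] > (X_neg @ w)[None, :])

By contrast, the reduced problems described above involve only p + n margin constraints (plus a free bias variable in the hard margin case), so the optimization stays linear in the sample size.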






Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Suehiro, D., Hatano, K., Takimoto, E. (2011). Approximate Reduction from AUC Maximization to 1-Norm Soft Margin Optimization. In: Kivinen, J., Szepesvári, C., Ukkonen, E., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2011. Lecture Notes in Computer Science, vol. 6925. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24412-4_26


  • DOI: https://doi.org/10.1007/978-3-642-24412-4_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24411-7

  • Online ISBN: 978-3-642-24412-4

  • eBook Packages: Computer Science (R0)
