
Learning Nested Halfspaces and Uphill Decision Trees

Conference paper
Learning Theory (COLT 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4539)


Abstract

Predicting class probabilities and other real-valued quantities is often more useful than binary classification, but comparatively little work in PAC-style learning addresses this issue. We show that two rich classes of real-valued functions are learnable in the probabilistic-concept framework of Kearns and Schapire.

Let X be a subset of Euclidean space and f be a real-valued function on X. We say f is a nested halfspace function if, for each real threshold t, the set {x ∈ X | f(x) ≤ t} is a halfspace. This broad class of functions includes binary halfspaces with a margin (e.g., SVMs) as a special case. We give an efficient algorithm that provably learns (Lipschitz-continuous) nested halfspace functions on the unit ball. The sample complexity is independent of the number of dimensions.
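For intuition, one simple member of this class is f(x) = u(w·x) for a fixed weight vector w and a non-decreasing link function u: each sublevel set {x ∈ X | f(x) ≤ t} is then the halfspace {x | w·x ≤ u⁻¹(t)}. The Python sketch below is our own illustration of this special case, not code from the paper (the definition only requires each sublevel set to be a halfspace, so this is not the most general instance); it builds such an f and numerically spot-checks the sublevel-set property.

    import numpy as np

    def u(z):
        """A non-decreasing, Lipschitz link function mapping R into [0, 1]."""
        return 0.5 * (1.0 + np.tanh(z))

    def make_nested_halfspace_fn(w):
        """Return f(x) = u(w . x), a simple instance of a nested halfspace
        function: since u is non-decreasing, {x : f(x) <= t} equals the
        halfspace {x : w . x <= u^{-1}(t)} for every threshold t."""
        return lambda x: u(np.dot(w, x))

    w = np.array([0.6, 0.8])                  # unit-norm weight vector
    f = make_nested_halfspace_fn(w)

    # Spot-check the sublevel-set property at one threshold t:
    # f(x) <= t  iff  w . x <= z_t, where u(z_t) = t.
    t = 0.7
    z_t = np.arctanh(2.0 * t - 1.0)           # u^{-1}(t)
    grid = [np.array([a, b]) for a in (-0.5, 0.0, 0.5) for b in (-0.5, 0.0, 0.5)]
    assert all((f(x) <= t) == (np.dot(w, x) <= z_t) for x in grid)
    print(f(np.array([0.3, -0.4])))           # about 0.43, read as a probability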

We also introduce the class of uphill decision trees, which are real-valued decision trees (sometimes called regression trees) in which the sequence of leaf values is non-decreasing. We give an efficient algorithm for provably learning uphill decision trees whose sample complexity is polynomial in the number of dimensions but independent of the size of the tree (which may be exponential). Both of our algorithms employ a real-valued extension of Mansour and McAllester’s boosting algorithm.
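To make the uphill condition concrete, the sketch below (again our own illustration; the paper does not prescribe a representation) encodes a small real-valued decision tree over binary features and checks that its left-to-right sequence of leaf values is non-decreasing.

    class Node:
        """A node of a real-valued (regression) decision tree."""
        def __init__(self, feature=None, left=None, right=None, value=None):
            self.feature = feature   # feature index tested at an internal node
            self.left = left         # subtree followed when the feature is 0
            self.right = right       # subtree followed when the feature is 1
            self.value = value       # real value at a leaf (None internally)

    def leaf_values(node):
        """Leaf values collected in left-to-right order."""
        if node.value is not None:
            return [node.value]
        return leaf_values(node.left) + leaf_values(node.right)

    def is_uphill(root):
        """True iff the left-to-right leaf sequence is non-decreasing."""
        vals = leaf_values(root)
        return all(a <= b for a, b in zip(vals, vals[1:]))

    # A tiny uphill tree: test feature 0; then feature 1 or feature 2.
    tree = Node(feature=0,
                left=Node(feature=1, left=Node(value=0.1), right=Node(value=0.4)),
                right=Node(feature=2, left=Node(value=0.4), right=Node(value=0.9)))
    print(is_uphill(tree))   # True: leaf sequence 0.1, 0.4, 0.4, 0.9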



References

  • Blum, A., Frieze, A., Kannan, R., Vempala, S.: A polynomial time algorithm for learning noisy linear threshold functions. Algorithmica 22(1/2), 35–52 (1997)

  • Hastie, T.J., Tibshirani, R.J.: Generalized Additive Models. Chapman and Hall, London (1990)

  • Kalai, A.: Learning monotonic linear functions. In: Proceedings of the 17th Annual Conference on Learning Theory (COLT). LNCS, vol. 3120, pp. 487–501 (2004)

  • Kearns, M., Mansour, Y.: On the boosting ability of top-down decision tree learning algorithms. Journal of Computer and System Sciences 58, 109–128 (1999)

  • Kearns, M., Schapire, R.: Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences 48, 464–497 (1994)

  • Kearns, M., Valiant, L.: Learning Boolean formulae or finite automata is as hard as factoring. Technical Report TR-14-88, Harvard University Aiken Computation Laboratory (1988)

  • Mansour, Y., McAllester, D.: Boosting using branching programs. Journal of Computer and System Sciences 64, 103–112 (2002)

  • McCullagh, P., Nelder, J.: Generalized Linear Models. Chapman and Hall, London (1989)

  • McDiarmid, C.: On the method of bounded differences. In: Siemons, J. (ed.) Surveys in Combinatorics. London Mathematical Society (1989)

  • O’Donnell, R., Servedio, R.: Learning monotone decision trees in polynomial time. In: Proceedings of the 21st Annual Conference on Computational Complexity (CCC), pp. 213–225 (2006)

  • Schapire, R.: The strength of weak learnability. Machine Learning 5, 197–227 (1990)


Author information

Adam Tauman Kalai

Editor information

Nader H. Bshouty, Claudio Gentile


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Kalai, A.T. (2007). Learning Nested Halfspaces and Uphill Decision Trees. In: Bshouty, N.H., Gentile, C. (eds) Learning Theory. COLT 2007. Lecture Notes in Computer Science, vol. 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_28


  • DOI: https://doi.org/10.1007/978-3-540-72927-3_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72925-9

  • Online ISBN: 978-3-540-72927-3

  • eBook Packages: Computer Science, Computer Science (R0)
