Recursive Teaching Dimension, Learning Complexity, and Maximum Classes

  • Thorsten Doliwa
  • Hans Ulrich Simon
  • Sandra Zilles
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6331)


This paper is concerned with the combinatorial structure of concept classes that can be learned from a small number of examples. We show that the recently introduced notion of recursive teaching dimension (RTD, reflecting the complexity of teaching a concept class) is a relevant parameter in this context. Comparing the RTD to self-directed learning, we establish new lower bounds on the query complexity for a variety of query learning models and thus connect teaching to query learning.
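As an illustration of the definitions involved (a toy sketch of ours, not code from the paper), the teaching dimension of a concept and the recursive teaching dimension of a small finite class can be computed by brute force; the singleton class below is a hypothetical example:

```python
from itertools import combinations

def teaching_dim(c, concepts, instances):
    """Size of a minimal teaching set for c within `concepts`: the fewest
    labeled examples of c with which no other concept is consistent."""
    others = [d for d in concepts if d != c]
    for k in range(len(instances) + 1):
        for S in combinations(instances, k):
            # S is a teaching set if every other concept disagrees somewhere on S
            if all(any(d[x] != c[x] for x in S) for d in others):
                return k
    return len(instances)

def recursive_teaching_dim(concepts, instances):
    """RTD: repeatedly peel off all concepts of minimal teaching dimension
    with respect to the remaining class; return the largest such minimum."""
    remaining, rtd = list(concepts), 0
    while remaining:
        tds = [teaching_dim(c, remaining, instances) for c in remaining]
        m = min(tds)
        rtd = max(rtd, m)
        remaining = [c for c, t in zip(remaining, tds) if t > m]
    return rtd

# Toy class over X = {0, 1, 2}: the all-zero concept and all singletons.
X = [0, 1, 2]
empty = {x: 0 for x in X}
concepts = [empty] + [{x: int(x == i) for x in X} for i in X]
print(teaching_dim(empty, concepts, X))     # 3: worst-case teaching dimension
print(recursive_teaching_dim(concepts, X))  # 1: the RTD is much smaller
```

The gap between the two outputs shows why the RTD, unlike the classical worst-case teaching dimension, can stay small for such classes: once the easy-to-teach singletons are removed, the all-zero concept needs no examples at all.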

In many natural cases, the RTD is upper-bounded by the VC-dimension, e.g., for classes of VC-dimension 1, for (nested differences of) intersection-closed classes, for “standard” Boolean function classes, and for finite maximum classes. The RTD is thus the first notion of teaching complexity shown to be connected to the VC-dimension.
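The VC-dimension bound can likewise be checked by brute force on small classes (again a hedged sketch of ours, not the paper's method). For the singleton class used above, which has RTD 1, the VC-dimension is also 1, consistent with the stated upper bound:

```python
from itertools import combinations

def vc_dim(concepts, instances):
    """Largest d such that some d-element subset of instances is shattered,
    i.e. the concepts realize all 2^d labelings on it (brute force)."""
    best = 0
    for k in range(1, len(instances) + 1):
        for S in combinations(instances, k):
            patterns = {tuple(c[x] for x in S) for c in concepts}
            if len(patterns) == 2 ** k:
                best = k
    return best

# The singleton class shatters every single point, but no pair:
# the labeling (1, 1) is never realized by any concept.
X = [0, 1, 2]
empty = {x: 0 for x in X}
concepts = [empty] + [{x: int(x == i) for x in X} for i in X]
print(vc_dim(concepts, X))  # 1
```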

The combinatorial structure defined by the RTD bears a remarkable resemblance to the structure exploited by sample compression schemes and hence connects teaching to sample compression. The sequences of teaching sets defining the RTD coincide with the unlabeled compression schemes resulting both (i) from Rubinstein and Rubinstein’s corner-peeling technique and (ii) from Kuzmin and Warmuth’s Tail Matching algorithm.
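In an unlabeled compression scheme for a maximum class, a labeled sample is compressed to an unlabeled subset of its points of size at most the VC-dimension, from which the labels can be reconstructed. A minimal sketch for the singleton class (a maximum class of VC-dimension 1); the function names are ours and this is not the general Tail Matching or corner-peeling algorithm:

```python
def compress(sample):
    """Compress a labeled sample (dict: instance -> label) consistent with
    the singleton class to an unlabeled subset of size at most 1."""
    return frozenset(x for x, y in sample.items() if y == 1)

def decompress(rep, domain):
    """Reconstruct the labels on `domain` from the unlabeled representative."""
    return {x: int(x in rep) for x in domain}

sample = {0: 0, 1: 1, 2: 0}       # labeled by the singleton concept {1}
rep = compress(sample)            # unlabeled representative: frozenset({1})
assert decompress(rep, list(sample)) == sample
```

The all-zero sample compresses to the empty set, so distinct concepts get distinct representatives; it is this kind of bijection between concepts and small unlabeled sets that the paper relates to sequences of teaching sets.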






References

  1. Angluin, D.: Queries and concept learning. Mach. Learn. 2, 319–342 (1988)
  2. Balbach, F.: Measuring teachability using variants of the teaching dimension. Theoret. Comput. Sci. 397, 94–113 (2008)
  3. Ben-David, S., Eiron, N.: Self-directed learning and its relation to the VC-dimension and to teacher-directed learning. Mach. Learn. 33, 87–104 (1998)
  4. Ben-David, S., Litman, A.: Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes. Discrete Appl. Math. 86(1), 3–25 (1998)
  5. Floyd, S., Warmuth, M.: Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Mach. Learn. 21(3), 269–304 (1995)
  6. Goldman, S., Kearns, M.: On the complexity of teaching. J. Comput. Syst. Sci. 50(1), 20–31 (1995)
  7. Goldman, S., Rivest, R., Schapire, R.: Learning binary relations and total orders. SIAM J. Comput. 22(5), 1006–1034 (1993)
  8. Goldman, S., Sloan, R.: The power of self-directed learning. Mach. Learn. 14(1), 271–294 (1994)
  9. Helmbold, D., Sloan, R., Warmuth, M.: Learning nested differences of intersection-closed concept classes. Mach. Learn. 5, 165–196 (1990)
  10. Jackson, J., Tomkins, A.: A computational model of teaching. In: 5th Annl. Workshop on Computational Learning Theory, pp. 319–326 (1992)
  11. Kuhlmann, C.: On teaching and learning intersection-closed concept classes. In: Fischer, P., Simon, H.U. (eds.) EuroCOLT 1999. LNCS (LNAI), vol. 1572, pp. 168–182. Springer, Heidelberg (1999)
  12. Kuzmin, D., Warmuth, M.: Unlabeled compression schemes for maximum classes. J. Mach. Learn. Research 8, 2047–2081 (2007)
  13. Littlestone, N.: Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Mach. Learn. 2(4), 285–318 (1988)
  14. Littlestone, N., Warmuth, M.: Relating data compression and learnability. Technical report, UC Santa Cruz (1986)
  15. Maass, W., Turán, G.: Lower bound methods and separation results for on-line learning models. Mach. Learn. 9, 107–145 (1992)
  16. Natarajan, B.: On learning boolean functions. In: 19th Annl. Symp. Theory of Computing, pp. 296–304 (1987)
  17. Rubinstein, B., Rubinstein, J.: A geometric approach to sample compression (2009) (unpublished manuscript)
  18. Sauer, N.: On the density of families of sets. J. Comb. Theory, Ser. A 13(1), 145–147 (1972)
  19. Shinohara, A., Miyano, S.: Teachability in computational learning. New Generat. Comput. 8, 337–348 (1991)
  20. Vapnik, V., Chervonenkis, A.: On the uniform convergence of relative frequencies of events to their probabilities. Theor. Probability and Appl. 16, 264–280 (1971)
  21. Welzl, E.: Complete range spaces (1987) (unpublished notes)
  22. Zilles, S., Lange, S., Holte, R., Zinkevich, M.: Teaching dimensions based on cooperative learning. In: 21st Annl. Conf. Learning Theory, pp. 135–146 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Thorsten Doliwa (1)
  • Hans Ulrich Simon (1)
  • Sandra Zilles (2)

  1. Fakultät für Mathematik, Ruhr-Universität Bochum, Bochum, Germany
  2. Department of Computer Science, University of Regina, Regina, Canada
