Machine Learning

Volume 18, Issue 2–3, pp 187–230

On the complexity of function learning

  • Peter Auer
  • Philip M. Long
  • Wolfgang Maass
  • Gerhard J. Woeginger

Abstract

The majority of results in computational learning theory are concerned with concept learning, i.e., with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models.

We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any “learning” model).

Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes.

Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝᵈ into ℝ. Previously, the class of linear functions from ℝᵈ into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning.
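
The paper's algorithm for convex piecewise linear functions is not reproduced in this abstract. As background, a minimal sketch of the classical Widrow-Hoff (online gradient descent) rule for the simpler, previously known case of learning a linear function from ℝᵈ into ℝ — the target vector, learning rate, and data here are illustrative, not taken from the paper:

```python
import numpy as np

def widrow_hoff(examples, d, eta=0.1):
    """Online learning of a linear function f(x) = w.x.

    At each trial the learner predicts w @ x, observes the true
    value y, incurs squared loss, and moves w along the negative
    gradient of that loss (the Widrow-Hoff update).
    """
    w = np.zeros(d)
    total_loss = 0.0
    for x, y in examples:
        pred = w @ x
        total_loss += (pred - y) ** 2
        w = w - eta * (pred - y) * x  # gradient step on squared loss
    return w, total_loss

# Hypothetical target: f(x) = 2*x0 - x1.
rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])
xs = rng.normal(size=(200, 2))
examples = [(x, target @ x) for x in xs]
w, loss = widrow_hoff(examples, d=2)
```

In the noise-free setting above the hypothesis `w` converges quickly to the target; worst-case loss bounds for this family of updates are the subject of the online-learning literature the paper builds on.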

Finally, we give a sufficient condition for an arbitrary class \(\mathcal{F}\) of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from \(\mathcal{F}\). This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.
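
To make the pointwise-maximum construction concrete, a small sketch: taking \(\mathcal{F}\) to be the class of linear functions (one admissible choice; the particular functions below are invented for illustration), the maximum of k members of \(\mathcal{F}\) is a convex piecewise linear function:

```python
# Pointwise maximum of k functions from R into R. With linear
# building blocks, the result is convex piecewise linear.

def pointwise_max(fs):
    """Return the function x -> max_i fs[i](x)."""
    return lambda x: max(f(x) for f in fs)

# Three illustrative linear functions a*x + b.
fs = [lambda x: -x, lambda x: 0.5 * x, lambda x: x - 1]
g = pointwise_max(fs)

assert g(-2) == 2   # -x dominates on the left
assert g(4) == 3    # x - 1 dominates on the right
```

Each piece of the graph of `g` is one of the k lines, so learning `g` reduces to identifying which member of \(\mathcal{F}\) is active in each region — the structure the sufficient condition in the paper exploits.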

Keywords

computational learning theory · on-line learning · mistake-bounded learning · function learning


Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Peter Auer¹
  • Philip M. Long¹
  • Wolfgang Maass¹
  • Gerhard J. Woeginger¹

  1. Institute for Theoretical Computer Science, Technische Universität Graz, Graz, Austria
