A Review of Some Extensions to the PAC Learning Model

  • Chapter
Learning and Geometry: Computational Approaches

Part of the book series: Progress in Computer Science and Applied Logic (PCS, volume 14)

Abstract

The Probably Approximately Correct (PAC) learning model, which has received much attention recently in the machine learning community, attempts to formalize the notion of learning from examples. In this paper, we review several extensions to the basic PAC model with a focus on the information complexity of learning. The extensions discussed are learning over a class of distributions, learning with queries, learning functions, and learning from generalized samples.
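To make the PAC setting described in the abstract concrete, the following is a minimal sketch of my own (not from the chapter): a learner receives labeled examples drawn i.i.d. from an unknown distribution and must output a hypothesis whose error is small with high probability over the draw of the sample. Here the concept class is intervals on [0, 1], the learner returns the tightest interval consistent with the positive examples, and all function names are hypothetical.

```python
import random

# Target concept: points in [a, b] are labeled 1, everything else 0.
# Learner: output the tightest interval containing the positive examples
# (a hypothesis consistent with the sample).

def draw_sample(m, a, b, rng):
    """Draw m examples uniformly on [0, 1], labeled by the target interval."""
    xs = [rng.uniform(0.0, 1.0) for _ in range(m)]
    return [(x, 1 if a <= x <= b else 0) for x in xs]

def learn_interval(sample):
    """Tightest-fit consistent hypothesis; empty interval if no positives."""
    pos = [x for x, y in sample if y == 1]
    if not pos:
        return (0.0, 0.0)
    return (min(pos), max(pos))

def error(h, a, b):
    """Error under the uniform distribution on [0, 1]: the measure of the
    symmetric difference between the hypothesis and the target [a, b].
    (The tightest fit always lies inside [a, b], so only the two end gaps
    contribute.)"""
    lo, hi = h
    return max(lo - a, 0.0) + max(b - hi, 0.0)

rng = random.Random(0)
a, b = 0.2, 0.7
h = learn_interval(draw_sample(500, a, b, rng))
print(error(h, a, b))  # small with high probability over the 500-point sample
```

The sample size needed to guarantee error below epsilon with probability 1 - delta grows only polynomially in 1/epsilon and 1/delta for this class, which is the kind of information-complexity question the extensions reviewed in the chapter address.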

This work was supported in part by the U.S. Army Research Office under grants DAAL03-86-K-0171 and DAAL03-92-G-0320 and by the National Science Foundation under grant IRI-9457645.




Copyright information

© 1996 Birkhäuser Boston

About this chapter

Cite this chapter

Kulkarni, S.R. (1996). A Review of Some Extensions to the PAC Learning Model. In: Kueker, D.W., Smith, C.H. (eds) Learning and Geometry: Computational Approaches. Progress in Computer Science and Applied Logic, vol 14. Birkhäuser Boston. https://doi.org/10.1007/978-1-4612-4088-4_3

  • DOI: https://doi.org/10.1007/978-1-4612-4088-4_3

  • Publisher Name: Birkhäuser Boston

  • Print ISBN: 978-1-4612-8646-2

  • Online ISBN: 978-1-4612-4088-4
