
A note on the use of probabilities by mechanical learners

Conference paper in Computational Learning Theory (EuroCOLT 1995)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 904)

Abstract

We raise the following problem: in a probabilistic context, is it always fruitful for a machine to compute probabilities? The question is made precise in a paradigm of the limit-identification kind, where a learner must discover almost surely whether an infinite sequence of heads and tails belongs to an effective subset S of the Cantor space. In this context, a successful strategy for an ineffective learner is to compute, at each stage, the conditional probability that he is faced with an element of S, given the data received so far. We show that an effective learner should not proceed this way in all circumstances. Indeed, even if he gets an optimal description of a set S, and even if some machine can always compute the conditional probability for S given any data, an effective learner optimizes his inductive competence only if he does not compute the relevant probabilities. We conclude that the advice “compute probabilities whenever you can” should sometimes be received with caution in the context of machine learning.
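
To fix ideas, the following minimal sketch illustrates the conditional-probability strategy described above on a toy instance. Here S is taken to be a finite union of basic cylinder sets under the fair-coin measure; this choice, the acceptance threshold of 1/2, and all identifiers are illustrative assumptions rather than the paper's construction, whose S is an arbitrary effective subset of the Cantor space.

    from fractions import Fraction

    # Toy target set S: a finite union of pairwise disjoint basic cylinders
    # of the Cantor space, each given by the finite prefix it fixes.
    # Here S = [1] ∪ [01], so μ(S) = 1/2 + 1/4 = 3/4 under the fair coin.
    S_CYLINDERS = ["1", "01"]

    def cylinder_measure(prefix: str) -> Fraction:
        """Fair-coin measure of the cylinder [prefix]."""
        return Fraction(1, 2 ** len(prefix))

    def overlap(cyl: str, data: str) -> Fraction:
        """Measure of [cyl] ∩ [data]; nonzero only if one prefix extends the other."""
        shorter, longer = sorted((cyl, data), key=len)
        return cylinder_measure(longer) if longer.startswith(shorter) else Fraction(0)

    def posterior(data: str) -> Fraction:
        """P(x ∈ S | x extends data) = μ(S ∩ [data]) / μ([data])."""
        return sum(overlap(c, data) for c in S_CYLINDERS) / cylinder_measure(data)

    def conjecture(data: str) -> bool:
        """The probabilistic strategy: conjecture 'x ∈ S' iff the posterior exceeds 1/2."""
        return posterior(data) > Fraction(1, 2)

    # After observing the prefixes "", "0", "00" of the sequence 000...,
    # the posterior drops 3/4 → 1/2 → 0 and the conjectures stabilize on False.
    for prefix in ["", "0", "00"]:
        print(repr(prefix), posterior(prefix), conjecture(prefix))

Because the posterior P(S | data) is a martingale, Lévy's zero-one law sends it almost surely to 0 or 1, so the conjectures of this strategy stabilize on the correct answer; the paper's point is that an effective learner may nonetheless have to forgo computing these probabilities in order to optimize its inductive competence.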

We thank Scott Weinstein, Michiel van Lambalgen, and several anonymous reviewers for helpful comments. Research support was provided by the Office of Naval Research under contracts Nos. N00014-87-K-0401 and N00014-89-J-1725, by the Swiss National Science Foundation under grant number 21-32399.91, and by the Centre National de la Recherche Scientifique.

Editor information

Paul Vitányi

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Martin, E., Osherson, D. (1995). A note on the use of probabilities by mechanical learners. In: Vitányi, P. (ed.) Computational Learning Theory. EuroCOLT 1995. Lecture Notes in Computer Science, vol. 904. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59119-2_183

  • DOI: https://doi.org/10.1007/3-540-59119-2_183

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59119-1

  • Online ISBN: 978-3-540-49195-8
