Memory limited inductive inference machines

  • Rūsiņš Freivalds
  • Carl H. Smith
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 621)

Abstract

The traditional model of learning in the limit is restricted so as to allow the learning machines only a fixed, finite amount of memory to store input and other data. A class of recursive functions is presented that cannot be learned deterministically by any such machine, but can be learned by a memory limited probabilistic learning machine with probability 1.
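The protocol behind the abstract can be pictured as follows: the learner receives the graph of a target function one value at a time and, after each value, emits a conjecture, but may keep only a bounded amount of persistent state rather than the input history. The sketch below is illustrative only (the class name, the finite hypothesis list, and the enumeration strategy are not from the paper); it shows why bounded memory bites, since the learner can check each new data point only against its current conjecture, never against past data it has discarded.

```python
from dataclasses import dataclass

@dataclass
class BoundedMemoryLearner:
    """Hypothetical memory-limited learner in the limit (a sketch).

    The only persistent storage is `state`, a single bounded counter
    indexing a fixed, finite hypothesis list; the input history itself
    is never stored.
    """
    hypotheses: list   # candidate total functions, fixed in advance
    state: int = 0     # the ONLY memory carried between data points

    def observe(self, x, fx):
        # Advance past hypotheses inconsistent with the new point (x, fx).
        # With full memory one would re-check all past points; with
        # bounded memory only the current point is available.
        while (self.state < len(self.hypotheses)
               and self.hypotheses[self.state](x) != fx):
            self.state += 1
        return self.state  # current conjecture (an index)

# Usage: learn f(x) = 2x from a small illustrative hypothesis list.
hyps = [lambda x: x, lambda x: x + 1, lambda x: 2 * x]
learner = BoundedMemoryLearner(hyps)
target = lambda x: 2 * x
guess = None
for x in range(5):
    guess = learner.observe(x, target(x))
# The conjecture stabilizes on the correct index "in the limit".
```

Note that this naive enumeration can be fooled by a wrong hypothesis that agrees with the target on all points seen after the learner reaches it; this weakness of deterministic bounded-memory strategies is exactly the kind of gap the paper's probabilistic machines exploit.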

Keywords

Recursive Function · Inductive Inference · Input Tape · State Transition Graph · Computational Learning Theory
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Rūsiņš Freivalds (1)
  • Carl H. Smith (2)
  1. Institute of Mathematics and Computer Science, University of Latvia, Riga, Latvia
  2. Department of Computer Science and Institute for Advanced Computer Studies, The University of Maryland, College Park, USA