Machine induction without revolutionary paradigm shifts

  • John Case
  • Sanjay Jain
  • Arun Sharma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 997)

Abstract

This paper provides an initial study of the effects on inductive inference of paradigm shifts whose absence is approximately modeled by various formal approaches to forbidding large changes in the size of the programs conjectured.
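For context, these results are set in the standard Gold-style model of identification in the limit of computable functions. The sketch below fixes conventional notation (an acceptable programming system \varphi with \varphi_p the partial function computed by program p, and a learning machine M mapping initial segments f[n] of a total function f to programs); this is background notation, not text quoted from the paper:

  M \ \mathrm{Ex}\text{-identifies}\ f \iff (\exists p)\big[\ \varphi_p = f \ \wedge\ M(f[n]) = p \text{ for all but finitely many } n\ \big].

A class \mathcal{S} of computable functions is Ex-learnable if a single machine M Ex-identifies every f \in \mathcal{S}.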

One approach, called severe parsimony, requires all the programs conjectured on the way to success to be nearly (i.e., within a recursive function of) minimal size. It is shown that this very conservative constraint allows learning infinite classes of functions, but not infinite r.e. classes of functions.
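One plausible rendering of the severe-parsimony requirement, assuming \mathrm{MinProg}(f) denotes a minimal-size \varphi-program for f and |p| the size of program p (the paper's official definition may place the recursive bound or the quantifiers somewhat differently), is:

  (\exists\ \text{recursive}\ h)\ (\forall f \in \mathcal{S})\ (\forall n)\ \big[\ |M(f[n])| \le h\big(|\mathrm{MinProg}(f)|\big)\ \big],

in addition to M Ex-identifying every f \in \mathcal{S}.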

Another approach, called non-revolutionary, requires all conjectures to be nearly the same size as one another. This quite conservative constraint is, nonetheless, shown to permit learning some infinite r.e. classes of functions. Allowing up to one extra, bounded-size mind change towards a final learned program certainly doesn't appear revolutionary. However, somewhat surprisingly for scientific (inductive) inference, it is shown that there are classes learnable with the non-revolutionary constraint (respectively, with severe parsimony), up to (i+1) mind changes, and no anomalies, which cannot be learned with no size constraint, an unbounded but finite number of anomalies allowed in the final program, and no more than i mind changes. Hence, in some cases, the possibility of one extra mind change is considerably more liberating than the removal of very conservative size-shift constraints. The proofs of these results are also combinatorially interesting.
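In the usual notation of Case and Smith, \mathbf{Ex}^{a}_{i} denotes identification with at most i mind changes and at most a anomalies tolerated in the final program, where a = * means any finite number. Writing \mathbf{NREx} and \mathbf{SPEx} for learning under the non-revolutionary and the severely parsimonious size constraints respectively (illustrative abbreviations, not necessarily the paper's own notation), the separation stated above can be approximated as:

  \mathbf{NREx}^{0}_{i+1} \setminus \mathbf{Ex}^{*}_{i} \neq \emptyset
  \qquad\text{and}\qquad
  \mathbf{SPEx}^{0}_{i+1} \setminus \mathbf{Ex}^{*}_{i} \neq \emptyset.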

Keywords

Paradigm Shift · Initial Segment · Recursive Function · Computable Function · Inductive Inference


Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • John Case, Department of Computer and Information Sciences, University of Delaware, Newark, USA
  • Sanjay Jain, Department of Information Systems and Computer Science, National University of Singapore, Singapore, Republic of Singapore
  • Arun Sharma, School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia
