Directions for Computability Theory Beyond Pure Mathematical Theory

  • John Case
Part of the International Mathematical Series book series (IMAT, volume 5)


This paper begins by briefly indicating the principal, non-standard motivations of the author for his decades of work in Computability Theory (CT), a.k.a. Recursive Function Theory.


Keywords: Cellular Automaton, Language Learning, Computable Function, Inductive Inference, Computability Theory



Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  • John Case, University of Delaware, Newark, USA
