
Inductive Inference Systems for Learning Classes of Algorithmically Generated Sets and Structures

  • Chapter
Induction, Algorithmic Learning Theory, and Philosophy

Part of the book series: Logic, Epistemology, and the Unity of Science ((LEUS,volume 9))


Abstract

Computability theorists have extensively studied sets A whose elements can be enumerated by Turing machines. These sets, also called computably enumerable sets, can be identified with their Gödel codes. Although each Turing machine has a unique Gödel code, different Turing machines can enumerate the same set. Thus, knowing a computably enumerable set means knowing one of its infinitely many Gödel codes. In the approach to learning theory stemming from E.M. Gold’s seminal paper [9], an inductive inference learner for a computably enumerable set A is a system or device, usually algorithmic, which, when fed the data for A one by one, outputs a sequence of Gödel codes that at a certain point stabilize at codes correct for A. If the codes output from some point on are all correct for A but may keep changing, the convergence is called semantic or behaviorally correct; if, in addition, the same code for A is eventually output, the convergence is called syntactic or explanatory. There are classes of sets that are semantically inferable, but not syntactically inferable.
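The notion of syntactic (explanatory) convergence can be illustrated with a standard toy example that is not from the chapter: the class of all finite sets is syntactically learnable from positive data. In the sketch below, canonical sorted tuples stand in for Gödel codes, and `None` marks a pause in the data stream; both conventions are illustrative assumptions.

```python
# Illustrative sketch: Gold-style identification in the limit for the class
# of all finite sets.  The learner's "code" for a finite set is its sorted
# tuple of elements (a stand-in for a Gödel code).  On any presentation of a
# finite set A, the guesses eventually stabilize at the code for A, so the
# convergence is syntactic (explanatory).

def finite_set_learner(data):
    """Yield a hypothesis (a canonical code) after each datum."""
    seen = set()
    for datum in data:
        if datum is not None:        # None marks a pause in the stream
            seen.add(datum)
        yield tuple(sorted(seen))    # canonical code for the set seen so far

# A presentation of A = {2, 3, 5}: every element appears at least once.
data = [2, None, 3, 3, 5, 2, None, 5]
guesses = list(finite_set_learner(data))
# From the fifth datum on, every guess is the same code (2, 3, 5).
```

Semantic convergence would also allow the learner to keep switching among different codes for A forever; the point of the example is that here a single fixed code is eventually repeated.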

Here, we are also concerned with generalizing inductive inference from sets, which are collections of distinct elements that are mutually independent, to mathematical structures in which various elements may be interrelated. This study was recently initiated by F. Stephan and Yu. Ventsov. For example, they systematically investigated inductive inference of the ideals of computable rings. With F. Stephan we continued this line of research by studying inductive inference of computably enumerable vector subspaces and other closure systems.

In particular, we showed how different convergence criteria interact with different ways of supplying data to the learner. Positive data for a set A are its elements, while negative data for A are the elements of its complement. Inference from text means that only positive data are supplied to the learner; moreover, in the limit, all positive data are given. Inference from switching means that changes from positive to negative data or vice versa are allowed, but if there are only finitely many such changes, then in the limit all data of the eventually requested type (either positive or negative) are supplied. Inference from an informant means that positive and negative data are supplied alternately, and in the limit all data are supplied. For sets, inference from switching is more restrictive than inference from an informant, but more powerful than inference from text. On the other hand, for example, the class of computably enumerable vector spaces over an infinite field that is syntactically inferable from text does not change if we allow semantic convergence, or inference from switching, but not both at the same time. While many classes of inferable algebraic structures have nice algebraic characterizations when learning from text or from switching is considered, we do not know of such characterizations for learning from an informant.
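The gap between text and informant can be illustrated with a classic example (due to Gold, not taken from this chapter): the class consisting of the set N of all natural numbers together with all finite sets is not learnable from text, but is learnable from an informant. The sketch below is a minimal illustration; labeled pairs model the informant, and the string "N" and sorted tuples are assumed stand-ins for Gödel codes.

```python
# Illustrative sketch: a learner for the class {N} ∪ {finite sets} using an
# informant.  An informant supplies labeled pairs (x, label), where label is
# True iff x belongs to the target set A; in the limit, every number is
# labeled.  The learner guesses N until it sees a negative datum, and from
# then on guesses the finite set of positive data seen so far.  No learner
# succeeds on this class from text alone (positive data only).

def informant_learner(informant):
    """Yield a hypothesis after each labeled datum (x, label)."""
    positives, saw_negative = set(), False
    for x, label in informant:
        if label:
            positives.add(x)
        else:
            saw_negative = True
        yield "N" if not saw_negative else tuple(sorted(positives))

# A finite prefix of an informant for the finite set A = {1, 4}.
informant = [(1, True), (0, False), (2, False), (4, True), (3, False)]
guesses = list(informant_learner(informant))
# After the first negative datum, the guesses are codes for finite sets,
# and they stabilize at the code for {1, 4}.
```

On a target equal to N itself, no negative datum ever arrives, so this learner correctly keeps guessing "N"; on a finite target, the first negative datum forces the switch. This case split is exactly what a text, which carries no negative information, cannot support.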


References

  1. Angluin, D. (1980). “Inductive Inference of Formal Languages from Positive Data”, Information and Control 45, 117–135.

  2. Baliga, G., Case, J. and Jain, S. (1995). “Language Learning with Some Negative Information”, Journal of Computer and System Sciences 51, 273–285.

  3. Blum, L. and Blum, M. (1975). “Toward a Mathematical Theory of Inductive Inference”, Information and Control 28, 125–155.

  4. Case, J. and Lynes, C. (1982). “Machine Inductive Inference and Language Identification”, in Nielsen, M. and Schmidt, E.M. [18], 107–115.

  5. Case, J. and Smith, C. (1983). “Comparison of Identification Criteria for Machine Inductive Inference”, Theoretical Computer Science 25, 193–220.

  6. Cesa-Bianchi, N., Numao, M. and Reischuk, R. (eds.) (2002). Algorithmic Learning Theory: 13th International Conference, Lecture Notes in Artificial Intelligence 2533, Berlin: Springer-Verlag.

  7. Downey, R.G. and Remmel, J.B. (1998). “Computable Algebras and Closure Systems: Coding Properties”, in Ershov, Yu.L., Goncharov, S.S., Nerode, A. and Remmel, J.B. [8], 977–1039.

  8. Ershov, Yu.L., Goncharov, S.S., Nerode, A. and Remmel, J.B. (eds.) (1998). Handbook of Recursive Mathematics 2, Amsterdam: Elsevier.

  9. Gold, E.M. (1967). “Language Identification in the Limit”, Information and Control 10, 447–474.

  10. Griffor, E.R. (ed.) (1999). Handbook of Computability Theory, Amsterdam: Elsevier.

  11. Harizanov, V.S. and Stephan, F. (2002). “On the Learnability of Vector Spaces”, in Cesa-Bianchi, N., Numao, M. and Reischuk, R. [6], 233–247.

  12. Jain, S. and Stephan, F. (2003). “Learning by Switching Type of Information”, Information and Computation 185, 89–104.

  13. Jain, S., Osherson, D.N., Royer, J.S. and Sharma, A. (1999). Systems That Learn: An Introduction to Learning Theory, 2nd ed., Cambridge (Mass.): MIT Press.

  14. Kalantari, I. and Retzlaff, A. (1977). “Maximal Vector Spaces Under Automorphisms of the Lattice of Recursively Enumerable Vector Spaces”, Journal of Symbolic Logic 42, 481–491.

  15. Kaplansky, I. (1974). Commutative Rings, Chicago: The University of Chicago Press.

  16. Metakides, G. and Nerode, A. (1977). “Recursively Enumerable Vector Spaces”, Annals of Mathematical Logic 11, 147–171.

  17. Motoki, T. (1991). “Inductive Inference from All Positive and Some Negative Data”, Information Processing Letters 39, 177–182.

  18. Nielsen, M. and Schmidt, E.M. (eds.) (1982). Automata, Languages and Programming: Proceedings of the 9th International Colloquium, Lecture Notes in Computer Science 140, Berlin: Springer-Verlag.

  19. Odifreddi, P. (1989). Classical Recursion Theory, Amsterdam: North-Holland.

  20. Osherson, D.N. and Weinstein, S. (1982). “Criteria of Language Learning”, Information and Control 52, 123–138.

  21. Osherson, D.N., Stob, M. and Weinstein, S. (1986). Systems That Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists, Cambridge (Mass.): MIT Press.

  22. Sharma, A. (1998). “A Note on Batch and Incremental Learnability”, Journal of Computer and System Sciences 56, 272–276.

  23. Soare, R.I. (1987). Recursively Enumerable Sets and Degrees. A Study of Computable Functions and Computably Generated Sets, Berlin: Springer-Verlag.

  24. Stephan, F. and Ventsov, Yu. (2001). “Learning Algebraic Structures from Text”, Theoretical Computer Science 268, 221–273.

  25. Stoltenberg-Hansen, V. and Tucker, J.V. (1999). “Computable Rings and Fields”, in Griffor, E.R. [10], 363–447.

Copyright information

© 2007 Springer

About this chapter

Cite this chapter

Harizanov, V.S. (2007). Inductive Inference Systems for Learning Classes of Algorithmically Generated Sets and Structures. In: Friend, M., Goethe, N.B., Harizanov, V.S. (eds) Induction, Algorithmic Learning Theory, and Philosophy. Logic, Epistemology, and the Unity of Science, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6127-1_2
