Consistency Conditions for Inductive Inference of Recursive Functions

  • Conference paper
  • In: New Frontiers in Artificial Intelligence (JSAI 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4384)

Abstract

A consistent learner is required to correctly and completely reflect in its current hypothesis all the data received so far. Although this demand sounds quite plausible, it may render the learning problem unsolvable.

Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data received.
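To make the relaxed demand precise, here is a minimal formal sketch in standard inductive-inference notation (the notation is assumed here and is not quoted from the paper): fix a Gödel numbering φ of the partial recursive functions and write f^n for the initial segment (f(0), …, f(n)) of the target function f. A strategy S is then consistent with δ-delay on f if every hypothesis is correct on all data seen so far except possibly the last δ values:

\[
\varphi_{S(f^n)}(x) = f(x) \quad \text{for all } n \ge \delta \text{ and all } x \le n - \delta .
\]

For δ = 0 this is the ordinary consistency demand on the full initial segment.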

Additionally, we introduce the notion of coherent learning (again with δ-delay), requiring the learner to correctly reflect only the last datum seen (with δ-delay: only the (n − δ)th datum).
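In the same assumed notation, coherence with δ-delay requires correctness only at the single argument n − δ rather than on the whole segment up to n − δ:

\[
\varphi_{S(f^n)}(n - \delta) = f(n - \delta) \quad \text{for all } n \ge \delta .
\]

For δ = 0 this means that each hypothesis must at least reflect the most recently received datum correctly.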

Our results are threefold. First, we show that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations of consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay, depending on the delay parameter δ.



Editor information

Takashi Washio, Ken Satoh, Hideaki Takeda, Akihiro Inokuchi


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Akama, Y., Zeugmann, T. (2007). Consistency Conditions for Inductive Inference of Recursive Functions. In: Washio, T., Satoh, K., Takeda, H., Inokuchi, A. (eds.) New Frontiers in Artificial Intelligence. JSAI 2006. Lecture Notes in Computer Science, vol. 4384. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69902-6_22

  • DOI: https://doi.org/10.1007/978-3-540-69902-6_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69901-9

  • Online ISBN: 978-3-540-69902-6

  • eBook Packages: Computer Science, Computer Science (R0)
