Abstract
A consistent learner is required to correctly and completely reflect in its current hypothesis all data received so far. Although this demand sounds quite plausible, it may render the learning problem unsolvable.
Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ-delay, relaxing the consistency demand to all but the last δ data.
Additionally, we introduce the notion of coherent learning (again with δ-delay), which requires the learner to correctly reflect only the most recent datum seen (with δ-delay, the (n − δ)th datum).
Our results are threefold. First, it is shown that all models of coherent learning with δ-delay are exactly as powerful as their corresponding consistent learning models with δ-delay. Second, we provide characterizations of consistent learning with δ-delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ-delay, depending on δ.
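To make the two conditions concrete, the following Python sketch (ours, not from the paper) checks δ-delayed consistency and coherence of a hypothesis on a finite data sequence. The function names and the encoding of the data as the list f(0), …, f(n) are our own illustrative assumptions.

```python
# Illustrative sketch of the delta-delay conditions from the abstract.
# `data` lists the values f(0), ..., f(n) seen so far; `hyp` is the
# learner's current hypothesis, a function on the natural numbers.

def consistent_delay(hyp, data, delta):
    """delta-delayed consistency: hyp must agree with all data
    received so far except possibly the last delta values."""
    n = len(data)
    return all(hyp(x) == data[x] for x in range(max(0, n - delta)))

def coherent_delay(hyp, data, delta):
    """delta-delayed coherence: hyp must agree only with the
    (n - delta)th datum (vacuously true if no such datum exists)."""
    i = len(data) - 1 - delta
    return i < 0 or hyp(i) == data[i]
```

For instance, with data = [0, 1, 4, 9, 10] and the hypothesis x ↦ x², the last datum is wrong, so both conditions fail for δ = 0 but hold for δ = 1.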
© 2007 Springer Berlin Heidelberg
Cite this paper
Akama, Y., Zeugmann, T. (2007). Consistency Conditions for Inductive Inference of Recursive Functions. In: Washio, T., Satoh, K., Takeda, H., Inokuchi, A. (eds.) New Frontiers in Artificial Intelligence. JSAI 2006. Lecture Notes in Computer Science, vol. 4384. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69902-6_22
Print ISBN: 978-3-540-69901-9
Online ISBN: 978-3-540-69902-6