Abstract
A class C of recursive functions is called robustly learnable in the sense I (where I is any success criterion of learning) if not only C itself but also every transformed class T(C), where T is an arbitrary general recursive operator, is learnable in the sense I. It was shown earlier, see [14,19], that for I = Ex (learning in the limit) robust learning is rich: there are classes that are contained in no recursively enumerable class of recursive functions and are, nevertheless, robustly learnable. For several criteria I, the present paper makes much more precise where one can hope for robustly learnable classes and where one cannot. This is achieved in two ways. First, for I = Ex, it is shown that only consistently learnable classes can be uniformly robustly learnable. Second, several other learning types I are classified as to whether or not they contain rich robustly learnable classes. Moreover, the first results separating robust learning from uniformly robust learning are derived.
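For orientation, the two central notions can be stated as follows; this is a minimal formalization in the standard notation of inductive inference (Ex-learning in the sense of Gold [23] and Blum and Blum [14]), paraphrasing the abstract rather than quoting the paper's own definitions:

\[
  C \in \mathrm{Ex} \iff (\exists \text{ learner } M)(\forall f \in C)\,
  \bigl[\lim_{n \to \infty} M(f[n]) \text{ exists and is a program for } f\bigr],
\]
\[
  C \in \mathrm{RobustEx} \iff (\forall \text{ general recursive operators } T)\,
  \bigl[T(C) \in \mathrm{Ex}\bigr],
\]

where \(f[n]\) denotes the initial segment \((f(0), \ldots, f(n))\) and \(T(C) = \{\, T(f) : f \in C \,\}\).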
References
D. Angluin and C. Smith. Inductive inference: Theory and methods. Computing Surveys, 15:237–289, 1983.
M. Anthony and N. Biggs. Computational Learning Theory. Cambridge University Press, 1992.
J. Bārzdiņš. Inductive inference of automata, functions and programs. In Int. Math. Congress, Vancouver, pages 771–776, 1974.
J. Bārzdiņš. Two theorems on the limiting synthesis of functions. In Theory of Algorithms and Programs, vol. 1, pages 82–88. Latvian State University, 1974. In Russian.
J. Bārzdiņš and R. Freivalds. Prediction and limiting synthesis of recursively enumerable classes of functions. Latvijas Valsts Univ. Zinātn. Raksti, 210:101–111, 1974.
L. Blum and M. Blum. Toward a mathematical theory of inductive inference. Information and Control, 28:125–155, 1975.
M. Blum. A machine-independent theory of the complexity of recursive functions. Journal of the ACM, 14:322–336, 1967.
J. Case, S. Jain, M. Ott, A. Sharma, and F. Stephan. Robust learning aided by context. Journal of Computer and System Sciences (Special Issue for COLT’98), 60:234–257, 2000.
J. Case and C. Smith. Comparison of identification criteria for machine inductive inference. Theoretical Computer Science, 25:193–220, 1983.
C.C. Florencio. Consistent identification in the limit of some of Penn and Buszkowski's classes is NP-hard. In Proceedings of the International Conference on Computational Linguistics, 1999.
R. Freivalds. Inductive inference of recursive functions: Qualitative theory. In J. Bārzdiņš and D. Bjørner, editors, Baltic Computer Science, volume 502 of Lecture Notes in Computer Science, pages 77–110. Springer-Verlag, 1991.
R. Freivalds, J. Bārzdiņš, and K. Podnieks. Inductive inference of recursive functions: Complexity bounds. In J. Bārzdiņš and D. Bjørner, editors, Baltic Computer Science, volume 502 of Lecture Notes in Computer Science, pages 111–155. Springer-Verlag, 1991.
M. Fulk. Saving the phenomenon: Requirements that inductive machines not contradict known data. Information and Computation, 79:193–209, 1988.
M. Fulk. Robust separations in inductive inference. In 31st Annual IEEE Symposium on Foundations of Computer Science, pages 405–410. IEEE Computer Society Press, 1990.
E.M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
J. Grabowski. Starke Erkennung. In R. Lindner and H. Thiele, editors, Strukturerkennung diskreter kybernetischer Systeme, Teil I, pages 168–184. Seminarbericht Nr.82, Department of Mathematics, Humboldt University of Berlin, 1986. In German.
S. Jain. Robust behaviorally correct learning. Information and Computation, 153(2):238–248, September 1999.
S. Jain, D. Osherson, J. Royer, and A. Sharma. Systems that Learn: An Introduction to Learning Theory. MIT Press, Cambridge, Mass., second edition, 1999.
S. Jain, C. Smith, and R. Wiehagen. On the power of learning robustly. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 187–197. ACM Press, 1998.
K.P. Jantke and H.-R. Beick. Combining postulates of naturalness in inductive inference. Journal of Information Processing and Cybernetics (EIK), 17:465–484, 1981.
R. Klette and R. Wiehagen. Research in the theory of inductive inference by GDR mathematicians – a survey. Information Sciences, 22:149–169, 1980.
S. Kurtz and C. Smith. On the role of search for learning. In R. Rivest, D. Haussler, and M. Warmuth, editors, Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 303–311. Morgan Kaufmann, 1989.
S. Kurtz and C. Smith. A refutation of Bārzdiņš' conjecture. In K.P. Jantke, editor, Analogical and Inductive Inference, Proceedings of the Second International Workshop (AII '89), volume 397 of Lecture Notes in Artificial Intelligence, pages 171–176. Springer-Verlag, 1989.
S. Lange. Consistent polynomial-time inference of k-variable pattern languages. In J. Dix, K.P. Jantke, and P. Schmitt, editors, Nonmonotonic and Inductive Logic, 1st International Workshop, Karlsruhe, Germany, volume 543 of Lecture Notes in Computer Science, pages 178–183. Springer-Verlag, 1990.
E. Minicozzi. Some natural properties of strong identification in inductive inference. Theoretical Computer Science, 2:345–360, 1976.
T. Mitchell. Machine Learning. McGraw-Hill, 1997.
D. Osherson, M. Stob, and S. Weinstein. Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, 1986.
M. Ott and F. Stephan. Avoiding coding tricks by hyperrobust learning. In P. Vitányi, editor, Fourth European Conference on Computational Learning Theory, volume 1572 of Lecture Notes in Artificial Intelligence, pages 183–197. Springer-Verlag, 1999.
H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967. Reprinted by MIT Press in 1987.
W. Stein. Consistent polynomial identification in the limit. In M.M. Richter, C.H. Smith, R. Wiehagen, and T. Zeugmann, editors, Algorithmic Learning Theory: Ninth International Conference (ALT '98), volume 1501 of Lecture Notes in Artificial Intelligence, pages 424–438. Springer-Verlag, 1998.
V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, second edition, 2000.
R. Wiehagen. Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Journal of Information Processing and Cybernetics (EIK), 12:93–99, 1976. In German.
R. Wiehagen. Zur Theorie der Algorithmischen Erkennung. Dissertation B, Humboldt University of Berlin, 1978. In German.
R. Wiehagen and W. Liepe. Charakteristische Eigenschaften von erkennbaren Klassen rekursiver Funktionen. Journal of Information Processing and Cybernetics (EIK), 12:421–438, 1976. In German.
R. Wiehagen and T. Zeugmann. Ignoring data may be the only way to learn efficiently. Journal of Experimental and Theoretical Artificial Intelligence, 6:131–144, 1994.
R. Wiehagen and T. Zeugmann. Learning and consistency. In K.P. Jantke and S. Lange, editors, Algorithmic Learning for Knowledge-Based Systems, volume 961 of Lecture Notes in Artificial Intelligence, pages 1–24. Springer-Verlag, 1995.
T. Zeugmann. On Bārzdiņš’ conjecture. In K.P. Jantke, editor, Analogical and Inductive Inference, Proceedings of the International Workshop, volume 265 of Lecture Notes in Computer Science, pages 220–227. Springer-Verlag, 1986.
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Case, J., Jain, S., Stephan, F., Wiehagen, R. (2001). Robust Learning — Rich and Poor. In: Helmbold, D., Williamson, B. (eds.) Computational Learning Theory (COLT 2001). Lecture Notes in Computer Science, vol. 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_10
Print ISBN: 978-3-540-42343-0
Online ISBN: 978-3-540-44581-4