Abstract
In their pioneering work, Mukouchi and Arikawa modeled a learning situation in which the learner is expected to refute texts which are not representative of L, the class of languages being identified. Lange and Watson extended this model to consider justified refutation, in which the learner is expected to refute a text only if it contains a finite sample unrepresentative of the class L. Both the above studies were in the context of indexed families of recursive languages. We extend this study in two directions. Firstly, we consider general classes of recursively enumerable languages. Secondly, we allow the machine to either identify or refute the unrepresentative texts (respectively, texts containing finite unrepresentative samples). We observe some surprising differences between our results and the results obtained for learning indexed families by Lange and Watson.
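To make the justified-refutation setting concrete, here is a minimal illustrative sketch, not taken from the paper: it assumes a toy finite class of languages (each given as a finite set of strings, standing in for an indexed family) and a hypothetical learner that, on each finite sample drawn from a text, either conjectures a minimal consistent language or outputs a refutation symbol once the sample is provably unrepresentative of the class.

```python
# Toy sketch (illustrative only, not the paper's construction):
# a tiny "indexed family" of languages, each a finite set of strings.
CLASS = [
    {"a"},
    {"a", "b"},
    {"a", "b", "c"},
]

REFUTE = "⊥"  # refutation symbol output by the learner


def learner(sample):
    """Given a finite sample (a set of strings), conjecture the index of a
    minimal language in CLASS containing it, or refute: if no language in
    the class contains the sample, then no extension of this text can be
    representative of the class, so refutation is justified."""
    consistent = [i for i, lang in enumerate(CLASS) if sample <= lang]
    if not consistent:
        return REFUTE
    # Conjecture a minimal consistent language (here: smallest by size).
    return min(consistent, key=lambda i: len(CLASS[i]))


def run(text):
    """Feed an initial segment of a text to the learner element by element,
    recording the conjecture (or refutation) after each new datum."""
    seen = set()
    conjectures = []
    for w in text:
        seen.add(w)
        conjectures.append(learner(seen))
    return conjectures
```

For example, on the text prefix `["a", "b"]` the learner's conjectures converge through the class, while on `["a", "d"]` it refutes as soon as the sample contains a string outside every language in the class. In the model studied in the paper the learner may instead be allowed to identify *or* refute such texts, which is the relaxation driving the differences from the indexed-family results.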
References
S. Ben-David. Can finite samples detect singularities of real-valued functions? In Symposium on the Theory of Computing, pages 390–399, 1992.
M. Blum. A machine-independent theory of the complexity of recursive functions. Journal of the ACM, 14:322–336, 1967.
J. Case, S. Jain, and S. Ngo Manguelle. Refinements of inductive inference by Popperian and reliable machines. Kybernetika, 30:23–52, 1994.
J. Case and C. Lynes. Machine inductive inference and language identification. In M. Nielsen and E. M. Schmidt, editors, Proceedings of the 9th International Colloquium on Automata, Languages and Programming, volume 140 of Lecture Notes in Computer Science, pages 107–115. Springer-Verlag, 1982.
M. Fulk. Prudence and other conditions on formal language learning. Information and Computation, 85:1–11, 1990.
E. M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
G. Grieser. Reflecting inductive inference machines and its improvement by therapy. In S. Arikawa and A. Sharma, editors, Algorithmic Learning Theory: Seventh International Workshop (ALT’ 96), volume 1160 of Lecture Notes in Artificial Intelligence, pages 325–336. Springer-Verlag, 1996.
K. P. Jantke. Reflecting and self-confident inductive inference machines. In Algorithmic Learning Theory: Sixth International Workshop (ALT’ 95), volume 997 of Lecture Notes in Artificial Intelligence, pages 282–297. Springer-Verlag, 1995.
S. Kobayashi and T. Yokomori. On approximately identifying concept classes in the limit. In Algorithmic Learning Theory: Sixth International Workshop (ALT’ 95), volume 997 of Lecture Notes in Artificial Intelligence, pages 298–312. Springer-Verlag, 1995.
S. Kobayashi and T. Yokomori. Learning approximately regular languages with reversible languages. Theoretical Computer Science A, 174:251–257, 1997.
S. Lange and P. Watson. Machine discovery in the presence of incomplete or ambiguous data. In S. Arikawa and K. Jantke, editors, Algorithmic learning theory: Fourth International Workshop on Analogical and Inductive Inference (AII’ 94) and Fifth International Workshop on Algorithmic Learning Theory (ALT’ 94), volume 872 of Lecture Notes in Artificial Intelligence, pages 438–452. Springer-Verlag, 1994.
Y. Mukouchi and S. Arikawa. Inductive inference machines that can refute hypothesis spaces. In K.P. Jantke, S. Kobayashi, E. Tomita, and T. Yokomori, editors, Algorithmic Learning Theory: Fourth International Workshop (ALT’ 93), volume 744 of Lecture Notes in Artificial Intelligence, pages 123–136. Springer-Verlag, 1993.
Y. Mukouchi and S. Arikawa. Towards a mathematical theory of machine discovery from facts. Theoretical Computer Science A, 137:53–84, 1995.
E. Minicozzi. Some natural properties of strong identification in inductive inference. Theoretical Computer Science, pages 345–360, 1976.
Y. Mukouchi. Inductive inference of an approximate concept from positive data. In S. Arikawa and K. Jantke, editors, Algorithmic learning theory: Fourth International Workshop on Analogical and Inductive Inference (AII’ 94) and Fifth International Workshop on Algorithmic Learning Theory (ALT’ 94), volume 872 of Lecture Notes in Artificial Intelligence, pages 484–499. Springer-Verlag, 1994.
H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967. Reprinted by MIT Press in 1987.
A. Sharma. A note on batch and incremental learnability. Journal of Computer and System Sciences, 1998. To appear.
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Jain, S. (1998). Learning with Refutation. In: Richter, M.M., Smith, C.H., Wiehagen, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1998. Lecture Notes in Computer Science(), vol 1501. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49730-7_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-65013-3
Online ISBN: 978-3-540-49730-1