
Dynamically Delayed Postdictive Completeness and Consistency in Learning

  • Conference paper
Algorithmic Learning Theory (ALT 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5254)


Abstract

In computational function learning in the limit, an algorithmic learner tries to find a program for a computable function g, given successively more values of g, each time outputting a conjectured program for g. A learner is called postdictively complete iff each of its conjectures correctly postdicts all data seen so far.
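
For concreteness, here is a minimal formalization (our sketch; it assumes the standard setting of an acceptable programming system φ, cf. Rogers [15], and writes p_n for the conjecture output after seeing g(0), …, g(n)):

    \forall n \;\, \forall i \le n : \quad \varphi_{p_n}(i) = g(i).

On the standard reading, postdictive consistency (the companion notion in the title) weakens this to requiring only that no seen data point is ever contradicted: φ_{p_n}(i) = g(i) whenever i ≤ n and φ_{p_n}(i) is defined.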

Akama and Zeugmann presented, for each choice of natural number δ, a relaxation of postdictive completeness: each conjecture is required to postdict all but the last δ data points seen so far.
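
The δ-delay requirement is easy to state operationally. The following small sketch (ours, not the paper's; the callable conjecture stands in for running the conjectured program and glosses over non-termination) checks it on a single data prefix:

    from typing import Callable, Sequence

    def postdicts_with_delay(conjecture: Callable[[int], int],
                             prefix: Sequence[int],
                             delta: int) -> bool:
        """Check the delta-delayed postdiction requirement on one prefix:
        the conjecture must reproduce every data point seen so far except
        possibly the last delta of them. With delta = 0 this is ordinary
        postdictive completeness."""
        n = len(prefix)
        return all(conjecture(i) == prefix[i]
                   for i in range(max(0, n - delta)))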

This paper extends the notion of delayed postdictive completeness from constant delays to dynamically computed delays. On the one hand, the delays can differ from data point to data point. On the other hand, delays need no longer be fixed finite numbers: any kind of computable countdown is allowed, including, for example, countdowns in a system of ordinal notations and in other graphs that disallow computably infinite descending counts.
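
To picture a countdown that is not by a fixed finite number, here is a toy sketch (our illustration, not the paper's machinery) in a notation for the ordinals below ω·2, writing ω·k + n as the pair (k, n). Every tick strictly decreases the ordinal, so each run terminates; yet when the count crosses a limit notation, the remaining finite count is chosen dynamically, so no finite bound on the total number of ticks is fixed in advance:

    class OmegaCountdown:
        """Toy countdown below omega*2; the state (k, n) denotes omega*k + n.
        Ticks strictly decrease the ordinal, so every run is finite, but
        crossing a limit notation lets the counter pick a fresh finite
        value on the fly."""

        def __init__(self, k: int, n: int):
            self.k, self.n = k, n

        def tick(self, fresh: int) -> bool:
            """One countdown step; `fresh` is the dynamically chosen finite
            value used when stepping below a limit notation. Returns False
            once the count has already reached zero."""
            if self.n > 0:    # successor step: omega*k + n -> omega*k + (n - 1)
                self.n -= 1
                return True
            if self.k > 0:    # limit step: omega*k -> omega*(k - 1) + fresh
                self.k, self.n = self.k - 1, fresh
                return True
            return False      # the count is exhausted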

We extend many of the theorems of Akama and Zeugmann and provide some feasible learnability results. Regarding fairness in feasible learning, one needs to limit tricks that postpone outputting hypotheses until there is enough time to “think” about them. We show that, for polytime learning, postdictive completeness (and its delayed variants) (1) allows some but not all such postponement tricks, and (2) draws a surprisingly tight boundary between the postponement that is allowed and the postponement that is not. For example: (1) the set of polytime computable functions is polytime postdictively completely learnable, employing some postponement; but (2) the set of exptime computable functions, while polytime learnable with a little more postponement, is not polytime postdictively completely learnable. Furthermore, for w a notation for ω, the set of exptime computable functions is polytime learnable with w-delayed postdictive completeness. We also provide generalizations to further small constructive limit ordinals.
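
With the toy countdown above, one can loosely picture w-delayed postponement (an illustration only, not the paper's construction): the learner starts its delay counter at the notation for ω and commits to a concrete finite remaining delay only at the moment it crosses that limit, i.e., once it knows how much time it needs to “think”:

    # Start at omega (k = 1, n = 0); crossing the limit commits to a
    # finite remainder chosen on the fly (here: 10 more delayed points).
    cd = OmegaCountdown(k=1, n=0)
    cd.tick(fresh=10)           # limit step: omega -> 10
    steps = 0
    while cd.tick(fresh=0):     # thereafter an ordinary finite countdown
        steps += 1
    assert steps == 10          # exactly the dynamically chosen delay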


References

  1. Ambainis, A., Case, J., Jain, S., Suraj, M.: Parsimony hierarchies for inductive inference. Journal of Symbolic Logic 69, 287–328 (2004)


  2. Akama, Y., Zeugmann, T.: Consistent and coherent learning with δ-delay. Technical Report TCS-TR-A-07-29, Hokkaido Univ. (October 2007)


  3. Bārzdiņš, J.: Inductive inference of automata, functions and programs. In: Int. Math. Congress, Vancouver, pp. 771–776 (1974)


  4. Blum, L., Blum, M.: Toward a mathematical theory of inductive inference. Information and Control 28, 125–155 (1975)


  5. Case, J.: Periodicity in generations of automata. Mathematical Systems Theory 8, 15–32 (1974)


  6. Case, J., Jain, S., Stephan, F., Wiehagen, R.: Robust learning – rich and poor. Journal of Computer and System Sciences 69, 123–165 (2004)


  7. Case, J., Kötzing, T.: Dynamically delayed postdictive completeness and consistency in machine inductive inference (2008), http://www.cis.udel.edu/~case/papers/PcpPcsDelayTR.pdf

  8. Case, J., Kötzing, T., Paddock, T.: Feasible iteration of feasible learning functionals. In: Hutter, M., Servedio, R.A., Takimoto, E. (eds.) ALT 2007. LNCS (LNAI), vol. 4754, pp. 34–48. Springer, Heidelberg (2007)


  9. Freivalds, R., Smith, C.: On the role of procrastination in machine learning. Information and Computation 107(2), 237–271 (1993)


  10. Fulk, M.: Saving the phenomena: Requirements that inductive machines not contradict known data. Information and Computation 79, 193–209 (1988)


  11. Jain, S., Osherson, D., Royer, J., Sharma, A.: Systems that Learn: An Introduction to Learning Theory, 2nd edn. MIT Press, Cambridge (1999)


  12. Li, M., Vitanyi, P.: An Introduction to Kolmogorov Complexity and Its Applications, 2nd edn. Springer, Heidelberg (1997)


  13. Minicozzi, E.: Some natural properties of strong identification in inductive inference. Theoretical Computer Science 2, 345–360 (1976)


  14. Pitt, L.: Inductive inference, DFAs, and computational complexity. In: Jantke, K.P. (ed.) AII 1989. LNCS, vol. 397, pp. 18–44. Springer, Heidelberg (1989)


  15. Rogers, H.: Theory of Recursive Functions and Effective Computability. McGraw Hill, New York (1967) (Reprinted by MIT Press, Cambridge, Massachusetts, 1987)


  16. Royer, J., Case, J.: Subrecursive Programming Systems. Progress in Theoretical Computer Science. Birkhäuser (1994)


  17. Sharma, A., Stephan, F., Ventsov, Y.: Generalized notions of mind change complexity. Information and Computation 189, 235–262 (2004)


  18. Wiehagen, R.: Limes-Erkennung rekursiver Funktionen durch spezielle Strategien [Limit identification of recursive functions by special strategies]. Elektronische Informationsverarbeitung und Kybernetik 12, 93–99 (1976)


  19. Wiehagen, R.: Zur Theorie der algorithmischen Erkennung [On the theory of algorithmic identification]. Dissertation B, Humboldt University of Berlin (1978)





Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Case, J., Kötzing, T. (2008). Dynamically Delayed Postdictive Completeness and Consistency in Learning. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_32


  • DOI: https://doi.org/10.1007/978-3-540-87987-9_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87986-2

  • Online ISBN: 978-3-540-87987-9

