Simulating teams with many conjectures

  • Bala Kalyanasundaram
  • Mahendran Velauthapillai
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 997)


This paper is concerned with algorithmic learning in which the learner is allowed a finite, bounded number of mind changes. Briefly, in our learning paradigm, a learner is given examples of a recursive function and attempts to learn it by producing programs that compute the function. A team is successful if at least one of its members learns the target function. The problem of whether, given two teams with bounded numbers of learners and mind changes, one team can provably learn more than the other has been open for the last fifteen years. This paper makes significant progress toward a complete solution of this problem. In the case of error-free learning, this paper solves the open problem. Further, in the case of EX learning, our result shows that there is no team with a ≥ 0 mind changes whose learning power is exactly equal to that of a single learner with a bounded number b (≠ a) of mind changes. In the case of PEX learning we have a positive answer.
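The paradigm above can be illustrated with a toy simulation. This is a sketch under stated assumptions, not the paper's construction: the hypothesis class, the learners' preference orders, and the target function below are all invented for illustration. Each learner conjectures programs (here, named hypotheses) from growing initial segments of the target; a conjecture switch counts as a mind change, and a learner that exceeds its budget is disqualified. The team succeeds if some surviving member converges on the target.

```python
# Toy sketch of team learning with bounded mind changes.
# Hypotheses, learners, and the target are illustrative assumptions,
# not the construction used in the paper.

def target(x):
    """The recursive function to be learned (an arbitrary choice)."""
    return x % 3

# A small hypothesis class the learners conjecture from.
HYPOTHESES = {
    "mod2": lambda x: x % 2,
    "mod3": lambda x: x % 3,
    "const0": lambda x: 0,
}

def make_learner(preference):
    """A learner that conjectures the first hypothesis in its preference
    order that is consistent with all examples seen so far."""
    def learn(examples):
        for name in preference:
            h = HYPOTHESES[name]
            if all(h(x) == y for x, y in examples):
                return name
        return None
    return learn

def run_team(team, budget, rounds=10):
    """Feed successive examples of `target` to every learner; a learner is
    disqualified once its conjecture switches more than `budget` times.
    Returns the final conjectures of the surviving learners."""
    state = [{"hyp": None, "changes": 0, "alive": True} for _ in team]
    examples = []
    for x in range(rounds):
        examples.append((x, target(x)))
        for s, learner in zip(state, team):
            if not s["alive"]:
                continue
            guess = learner(examples)
            if s["hyp"] is not None and guess != s["hyp"]:
                s["changes"] += 1
                if s["changes"] > budget:
                    s["alive"] = False
            s["hyp"] = guess
    return [s["hyp"] for s in state if s["alive"]]

# A two-member team: the first learner burns two mind changes before
# reaching the correct hypothesis; the second guesses it immediately.
team = [make_learner(["const0", "mod2", "mod3"]),
        make_learner(["mod3", "mod2", "const0"])]
print(run_team(team, budget=1))
```

With a budget of one mind change, the first learner is disqualified after its second switch, but the team still succeeds because the second member converges; this mirrors the success criterion in the abstract, where only one member needs to learn the target.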


Keywords: Initial Segment · Recursive Function · Inductive Inference · Bounded Number · Computational Learning Theory
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Bala Kalyanasundaram (1)
  • Mahendran Velauthapillai (2)
  1. Department of Computer Science, University of Pittsburgh, Pittsburgh, USA
  2. Department of Computer Science, Georgetown University, Washington, DC, USA
