
Projection-Based PILP: Computational Learning Theory with Empirical Results

  • Hiroaki Watanabe
  • Stephen H. Muggleton
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7207)

Abstract

Evaluations of the advantages of Probabilistic Inductive Logic Programming (PILP) over ILP have not previously been conducted from a computational learning theory point of view. We propose a PILP framework, projection-based PILP, in which surjective projection functions are used to produce a "lossy" compressed dataset from an ILP dataset. We present sample complexity results, including conditions under which projection-based PILP needs fewer examples than PAC learning. We experimentally confirm the theoretical bounds for projection-based PILP in the Blackjack domain using Cellist, a system which machine-learns Probabilistic Logic Automata. In our experiments projection-based PILP shows lower predictive error than the theoretical bounds and achieves substantially lower predictive error than ILP. To the authors' knowledge this is the first paper describing both a computational learning theory and related empirical results on an advantage of PILP over ILP.
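The core mechanism of the abstract — a surjective projection that maps many concrete ILP examples onto one coarser example, with Boolean labels aggregated into empirical probabilities — can be illustrated with a minimal sketch. This is not the paper's actual system (Cellist) or its representation; the function names `project` and `compress` and the choice of "hand total" as the projection are assumptions made purely for illustration in the Blackjack domain.

```python
from collections import defaultdict

def project(hand):
    """Surjective projection: many distinct hands map to one card total,
    so the projection is "lossy" (the hand's composition is discarded)."""
    return sum(hand)

def compress(ilp_examples):
    """Aggregate labelled ILP examples into a probabilistic dataset.

    ilp_examples: list of (hand, label) pairs with label in {0, 1}.
    Returns a dict {projected value: empirical probability of label 1},
    typically far smaller than the original ILP dataset.
    """
    counts = defaultdict(lambda: [0, 0])  # total -> [positives, count]
    for hand, label in ilp_examples:
        key = project(hand)
        counts[key][0] += label
        counts[key][1] += 1
    return {k: pos / n for k, (pos, n) in counts.items()}

# Four labelled hands compress into two probabilistic examples.
data = [([10, 9], 1), ([5, 10, 4], 1), ([10, 10], 0), ([9, 11], 1)]
print(compress(data))  # {19: 1.0, 20: 0.5}
```

The compressed dataset has fewer distinct examples than the original, which is the intuition behind the sample complexity comparison in the paper.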



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Hiroaki Watanabe (1)
  • Stephen H. Muggleton (1)
  1. Imperial College London, London, UK
