
Robust k-DNF Learning via Inductive Belief Merging

  • Frédéric Koriche
  • Joël Quinqueton
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2837)

Abstract

A central issue in logical concept induction is the prospect of inconsistency. This problem may arise due to noise in the training data, or because the target concept does not fit the underlying concept class. In this paper, we introduce the paradigm of inductive belief merging, which handles this issue within a uniform framework. The key idea is to base learning on a belief merging operator that selects the concepts that are as close as possible to the set of training examples. From a computational perspective, we apply this paradigm to robust k-DNF learning. To this end, we develop a greedy algorithm that approximates the optimal concepts to within a logarithmic factor. The time complexity of the algorithm is polynomial for any fixed k. Moreover, the method is bidirectional: it returns one maximally specific concept and one maximally general concept. We present experimental results showing the effectiveness of our algorithm on both nominal and numerical datasets.

Keywords

Version Space, Concept Class, Concept Learning, Minimal Cover, Target Concept


Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Frédéric Koriche (1)
  • Joël Quinqueton (1)

  1. LIRMM, UMR 5506, Université Montpellier II / CNRS, Montpellier Cedex 5, France
