Argument-Based Machine Learning

  • Ivan Bratko
  • Jure Žabkar
  • Martin Možina

The most common form of machine learning (ML) is learning from examples, also called inductive learning. Usually, the problem of learning from examples is stated as follows: given examples, find a theory that is consistent with the examples. We say that such a theory is induced from the examples. Roughly, a theory is consistent with the examples if the examples can be derived from the theory. When learning from imperfect, noisy data, we may not insist on perfect consistency between the examples and the theory; in such cases, a shorter and only “approximately” consistent theory may be more appropriate.
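The notion of consistency described above can be made concrete with a small sketch. The code below is illustrative only (it is not the chapter's implementation): it represents a "theory" as an ordered list of if-then rules, derives a class for each example, and measures the fraction of examples the theory classifies correctly — 1.0 corresponds to perfect consistency, while a threshold below 1.0 captures the "approximately consistent" case for noisy data. The attribute names and toy data are hypothetical.

```python
# Illustrative sketch, not the chapter's algorithm: a rule-based "theory"
# and a measure of its consistency with a set of labeled examples.

def classify(theory, example, default="neg"):
    """Return the class of the first rule whose conditions all hold
    for the example; fall back to a default class if no rule fires."""
    for conditions, label in theory:
        if all(example.get(attr) == value for attr, value in conditions.items()):
            return label
    return default

def consistency(theory, examples):
    """Fraction of examples whose class the theory derives correctly.
    1.0 means the theory is perfectly consistent with the examples."""
    correct = sum(classify(theory, ex) == cls for ex, cls in examples)
    return correct / len(examples)

# Hypothetical toy data: two attributes, a binary class.
examples = [
    ({"shape": "round", "color": "red"}, "pos"),
    ({"shape": "round", "color": "green"}, "pos"),
    ({"shape": "square", "color": "red"}, "neg"),
]
theory = [({"shape": "round"}, "pos")]  # one rule: IF shape = round THEN pos

print(consistency(theory, examples))  # 1.0 on this toy set
```

With noisy data, one would accept a theory whose consistency exceeds some threshold (say 0.95) rather than demand the full 1.0, trading exactness for a shorter theory.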


Keywords: Inductive Logic Programming; Beam Search; Covering Algorithm; Positive Argument; Negative Argument





This work was carried out under the auspices of the European Commission’s Information Society Technologies (IST) programme, through Project ASPIC (IST-FP6-002307). It was also supported by the Slovenian research agency ARRS.



Copyright information

© Springer-Verlag US 2009

Authors and Affiliations

Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
