Finding Probabilistic Rule Lists using the Minimum Description Length Principle

  • John O. R. Aoga
  • Tias Guns
  • Siegfried Nijssen
  • Pierre Schaus
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11198)


An important task in data mining is that of rule discovery in supervised data. Well-known examples include rule-based classification and subgroup discovery. Motivated by the need to succinctly describe an entire labeled dataset, rather than to accurately classify its labels, we propose an MDL-based supervised rule discovery task. The task concerns the discovery of a small rule list in which each rule captures the probability of the Boolean target attribute being true. Our approach is built on a novel combination of two main building blocks: (i) the use of the Minimum Description Length (MDL) principle to characterize good and small sets of probabilistic rules, and (ii) the use of branch-and-bound with a best-first search strategy to find better-than-greedy and optimal solutions for the proposed task. We experimentally show the effectiveness of our approach by comparing it with other supervised rule learning algorithms on real-life datasets.
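The MDL idea sketched in the abstract can be illustrated with a toy two-part code. The sketch below is an assumption for illustration only, not the authors' actual encoding: each rule is summarised by the (positive, negative) label counts it captures, the data cost is the Shannon limit n·H(p) in bits for each subgroup, and the model cost is a fixed per-rule penalty.

```python
import math

def data_cost(n_pos: int, n_neg: int) -> float:
    """Code length (in bits) of the labels captured by one rule,
    using the empirical probability of the target being true.
    This equals n * H(p), the Shannon limit for the subgroup."""
    n = n_pos + n_neg
    if n == 0 or n_pos == 0 or n_neg == 0:
        return 0.0  # a pure (or empty) subgroup costs nothing to encode
    p = n_pos / n
    return -(n_pos * math.log2(p) + n_neg * math.log2(1 - p))

def rule_list_cost(rules, model_bits_per_rule=8.0):
    """Total description length = model cost (a fixed per-rule penalty,
    purely illustrative) + cost of the labels given the rules.
    `rules` is a list of (n_pos, n_neg) pairs, one per rule."""
    model = model_bits_per_rule * len(rules)
    data = sum(data_cost(n_pos, n_neg) for n_pos, n_neg in rules)
    return model + data

# A two-rule list that splits the data into skewed subgroups
# (40+, 10-) and (5+, 45-) encodes the labels more cheaply than
# a single default rule covering all 100 examples (45+, 55-),
# despite paying the extra per-rule model cost.
split_cost = rule_list_cost([(40, 10), (5, 45)])
single_cost = rule_list_cost([(45, 55)])
```

A search procedure such as the branch-and-bound mentioned in the abstract would then look for the rule list minimising this total cost, using bounds on the achievable code length to prune candidate extensions.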



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. ICTEAM, UCLouvain, Ottignies-Louvain-la-Neuve, Belgium
  2. VUB, Brussels, Belgium
  3. KU Leuven, Leuven, Belgium
