SAT-Based Rigorous Explanations for Decision Lists

  • Conference paper
  • In: Theory and Applications of Satisfiability Testing – SAT 2021 (SAT 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12831)

Abstract

Decision lists (DLs) find a wide range of uses for classification problems in Machine Learning (ML), being implemented in a number of ML frameworks. DLs are often perceived as interpretable. However, building on recent results for decision trees (DTs), we argue that interpretability is an elusive goal for some DLs. As a result, for some uses of DLs, it will be important to compute (rigorous) explanations. Unfortunately, and in clear contrast with the case of DTs, this paper shows that computing explanations for DLs is computationally hard. Motivated by this result, the paper proposes propositional encodings for computing abductive explanations (AXps) and contrastive explanations (CXps) of DLs. Furthermore, the paper investigates the practical efficiency of a MARCO-like approach for enumerating explanations. The experimental results demonstrate that, for DLs used in practical settings, the use of SAT oracles offers a very efficient solution, and that complete enumeration of explanations is most often feasible.

This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program “Investing for the Future – PIA3” under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA “Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence”.
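The abstract's central notion of an abductive explanation (AXp) can be illustrated concretely: an AXp is a subset-minimal set of features whose values in a given instance are sufficient to entail the prediction. The Python sketch below is an illustrative assumption on our part, not the paper's method: the toy rules, the function names, and the exhaustive `is_sufficient` check are invented for this example, whereas the paper replaces such explicit enumeration with propositional encodings and SAT oracle calls.

```python
from itertools import product

# A toy decision list over three binary features (hypothetical example,
# not taken from the paper): each rule is (literals, class), where a
# literal (i, v) tests "feature i == v"; the final rule is the default.
RULES = [
    ([(0, 1), (1, 1)], "yes"),   # if x0 and x1 then yes
    ([(2, 1)], "no"),            # else if x2 then no
    ([], "yes"),                 # else yes (default rule)
]

def predict(x):
    """Classify a point by the first rule whose literals all hold."""
    for lits, cls in RULES:
        if all(x[i] == v for i, v in lits):
            return cls

def is_sufficient(x, subset, n=3):
    """Check whether fixing the features in `subset` to their values in
    `x` forces the prediction, over every completion of the free features.
    (Brute force; the paper uses a SAT oracle for this entailment check.)"""
    target = predict(x)
    free = [i for i in range(n) if i not in subset]
    for vals in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, v in zip(free, vals):
            y[i] = v
        if predict(y) != target:
            return False
    return True

def axp(x, n=3):
    """Deletion-based computation of one AXp: start from all features and
    drop any feature whose removal keeps the remaining set sufficient.
    The result is subset-minimal by construction."""
    expl = set(range(n))
    for i in range(n):
        if is_sufficient(x, expl - {i}, n):
            expl.discard(i)
    return sorted(expl)

x = [1, 1, 0]
print(predict(x), axp(x))   # the instance is classified "yes"; AXp is [2]
```

For the instance (1, 1, 0), the single fact x2 = 0 already entails the prediction "yes" (the middle rule can never fire, and both remaining rules predict "yes"), so {x2} is a subset-minimal AXp even though the fired rule tested x0 and x1. This is exactly the kind of succinct justification the paper's SAT-based encodings compute at scale.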


Notes

  1.

    Interpretability is a subjective concept, for which no rigorous, generally accepted definition exists [46]. As clarified later in the paper, for a given pair of ML model and instance, we equate interpretability with how succinct the justification for the model's prediction is.

  2.

    The prototype is available at https://github.com/alexeyignatiev/xdl-tool.

  3.

    Recent alternative approaches to sparse decision lists [1, 2, 65] have also been considered but were eventually discarded for two reasons: (1) they can only deal with binary data and (2) they produce sparse decision lists containing a couple of rules and a few literals in total—i.e. these methods do not provide models that would be of interest for our work.

  4.

    https://orangedatamining.com/.

References

  1. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M.I., Rudin, C.: Learning certifiably optimal rule lists. In: KDD, pp. 35–44 (2017)

  2. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M.I., Rudin, C.: Learning certifiably optimal rule lists for categorical data. J. Mach. Learn. Res. 18, 234:1–234:78 (2017). http://jmlr.org/papers/v18/17-716.html

  3. Audemard, G., Koriche, F., Marquis, P.: On tractable XAI queries based on compiled representations. In: KR, pp. 838–849 (2020)

  4. Audemard, G., Lagniez, J., Simon, L.: Improving Glucose for incremental SAT solving with assumptions: application to MUS extraction. In: SAT, pp. 309–317 (2013)

  5. Bailey, J., Stuckey, P.J.: Discovery of minimal unsatisfiable subsets of constraints using hitting set dualization. In: PADL, pp. 174–186 (2005)

  6. Belov, A., Lynce, I., Marques-Silva, J.: Towards efficient MUS extraction. AI Commun. 25(2), 97–116 (2012)

  7. Belov, A., Marques-Silva, J.: Accelerating MUS extraction with recursive model rotation. In: FMCAD, pp. 37–40 (2011)

  8. Biere, A., Heule, M., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability, 2nd edn. Frontiers in Artificial Intelligence and Applications, vol. 336. IOS Press, Amsterdam (2021)

  9. Birnbaum, E., Lozinskii, E.L.: Consistent subsets of inconsistent systems: structure and behaviour. J. Exp. Theor. Artif. Intell. 15(1), 25–46 (2003)

  10. Bouckaert, R.R., et al.: WEKA – experiences with a Java open-source project. J. Mach. Learn. Res. 11, 2533–2541 (2010). http://portal.acm.org/citation.cfm?id=1953016

  11. Camburu, O., Giunchiglia, E., Foerster, J., Lukasiewicz, T., Blunsom, P.: Can I trust the explainer? Verifying post-hoc explanatory methods. CoRR abs/1910.02065 (2019). http://arxiv.org/abs/1910.02065

  12. Chen, C., Rudin, C.: An optimization approach to learning falling rule lists. In: AISTATS, pp. 604–612 (2018)

  13. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: KDD, pp. 785–794 (2016)

  14. Clark, P., Boswell, R.: Rule induction with CN2: some recent improvements. In: EWSL, pp. 151–163 (1991)

  15. Clark, P., Niblett, T.: The CN2 induction algorithm. Mach. Learn. 3, 261–283 (1989)

  16. Cohen, W.W.: Efficient pruning methods for separate-and-conquer rule learning systems. In: IJCAI, pp. 988–994 (1993)

  17. Cohen, W.W.: Fast effective rule induction. In: ICML, pp. 115–123 (1995)

  18. Cohen, W.W., Singer, Y.: A simple, fast, and effective rule learner. In: AAAI, pp. 335–342 (1999)

  19. Darwiche, A., Hirth, A.: On the reasons behind decisions. In: ECAI, pp. 712–720 (2020). https://doi.org/10.3233/FAIA200158

  20. Darwiche, A., Marquis, P.: A knowledge compilation map. J. Artif. Intell. Res. 17, 229–264 (2002)

  21. Davies, J., Bacchus, F.: Solving MAXSAT by solving a sequence of simpler SAT instances. In: CP, pp. 225–239 (2011)

  22. Demsar, J., et al.: Orange: data mining toolbox in Python. J. Mach. Learn. Res. 14(1), 2349–2353 (2013). https://orangedatamining.com/

  23. Auditing black-box predictive models (2016). https://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html

  24. Friedler, S., Scheidegger, C., Venkatasubramanian, S.: On algorithmic fairness, discrimination and disparate impact (2015)

  25. Ignatiev, A.: Towards trustable explainable AI. In: IJCAI, pp. 5154–5158 (2020)

  26. Ignatiev, A., Janota, M., Marques-Silva, J.: Quantified maximum satisfiability. Constraints 21(2), 277–302 (2016)

  27. Ignatiev, A., Morgado, A., Marques-Silva, J.: Propositional abduction with implicit hitting sets. In: ECAI, pp. 1327–1335 (2016)

  28. Ignatiev, A., Morgado, A., Marques-Silva, J.: PySAT: a Python toolkit for prototyping with SAT oracles. In: SAT, pp. 428–437 (2018)

  29. Ignatiev, A., Morgado, A., Marques-Silva, J.: RC2: an efficient MaxSAT solver. J. Satisf. Boolean Model. Comput. 11(1), 53–64 (2019)

  30. Ignatiev, A., Morgado, A., Weissenbacher, G., Marques-Silva, J.: Model-based diagnosis with multiple observations. In: IJCAI, pp. 1108–1115 (2019)

  31. Ignatiev, A., Narodytska, N., Asher, N., Marques-Silva, J.: From contrastive to abductive explanations and back again. In: AI*IA (2020). Preliminary version available from https://arxiv.org/abs/2012.11067

  32. Ignatiev, A., Narodytska, N., Marques-Silva, J.: Abduction-based explanations for machine learning models. In: AAAI, pp. 1511–1519 (2019)

  33. Ignatiev, A., Narodytska, N., Marques-Silva, J.: On relating explanations and adversarial examples. In: NeurIPS, pp. 15857–15867 (2019)

  34. Ignatiev, A., Narodytska, N., Marques-Silva, J.: On validating, repairing and refining heuristic ML explanations. CoRR abs/1907.02509 (2019). http://arxiv.org/abs/1907.02509

  35. Ignatiev, A., Pereira, F., Narodytska, N., Marques-Silva, J.: A SAT-based approach to learn explainable decision sets. In: IJCAR, pp. 627–645 (2018)

  36. Ignatiev, A., Previti, A., Liffiton, M.H., Marques-Silva, J.: Smallest MUS extraction with minimal hitting set dualization. In: CP, pp. 173–182 (2015)

  37. Izza, Y., Ignatiev, A., Marques-Silva, J.: On explaining decision trees. CoRR abs/2010.11034 (2020)

  38. Junker, U.: QUICKXPLAIN: preferred explanations and relaxations for over-constrained problems. In: AAAI, pp. 167–172 (2004)

  39. Lakkaraju, H., Bach, S.H., Leskovec, J.: Interpretable decision sets: a joint framework for description and prediction. In: KDD, pp. 1675–1684 (2016)

  40. Lakkaraju, H., Bastani, O.: “How do I fool you?”: manipulating user trust via misleading black box explanations. In: AIES, pp. 79–85 (2020)

  41. Liffiton, M.H., Malik, A.: Enumerating infeasibility: finding multiple MUSes quickly. In: CPAIOR, pp. 160–175 (2013)

  42. Liffiton, M.H., Mneimneh, M.N., Lynce, I., Andraus, Z.S., Marques-Silva, J., Sakallah, K.A.: A branch and bound algorithm for extracting smallest minimal unsatisfiable subformulas. Constraints 14(4), 415–442 (2009)

  43. Liffiton, M.H., Previti, A., Malik, A., Marques-Silva, J.: Fast, flexible MUS enumeration. Constraints 21(2), 223–250 (2016)

  44. Liffiton, M.H., Sakallah, K.A.: On finding all minimally unsatisfiable subformulas. In: SAT, pp. 173–186 (2005)

  45. Liffiton, M.H., Sakallah, K.A.: Algorithms for computing minimal unsatisfiable subsets of constraints. J. Autom. Reasoning 40(1), 1–33 (2008)

  46. Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 36–43 (2018)

  47. Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: NeurIPS, pp. 4765–4774 (2017)

  48. Lynce, I., Marques-Silva, J.: On computing minimum unsatisfiable cores. In: SAT (2004)

  49. Marques-Silva, J., Gerspacher, T., Cooper, M.C., Ignatiev, A., Narodytska, N.: Explaining naive Bayes and other linear classifiers with polynomial time and delay. In: NeurIPS (2020)

  50. Marques-Silva, J., Heras, F., Janota, M., Previti, A., Belov, A.: On computing minimal correction subsets. In: IJCAI, pp. 615–622 (2013)

  51. Marques-Silva, J., Lynce, I.: On improving MUS extraction algorithms. In: SAT, pp. 159–173 (2011)

  52. Mencia, C., Ignatiev, A., Previti, A., Marques-Silva, J.: MCS extraction with sublinear oracle queries. In: SAT, pp. 342–360 (2016)

  53. Mencia, C., Previti, A., Marques-Silva, J.: Literal-based MCS extraction. In: IJCAI, pp. 1973–1979 (2015)

  54. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  55. Morgado, A., Liffiton, M.H., Marques-Silva, J.: MaxSAT-based MCS enumeration. In: HVC, pp. 86–101 (2012)

  56. de Moura, L.M., Bjørner, N.: Z3: an efficient SMT solver. In: TACAS, pp. 337–340 (2008)

  57. Narodytska, N., Shrotri, A., Meel, K.S., Ignatiev, A., Marques-Silva, J.: Assessing heuristic machine learning explanations with model counting. In: SAT, pp. 267–278 (2019). https://doi.org/10.1007/978-3-030-24258-9_19

  58. Penn Machine Learning Benchmarks. https://github.com/EpistasisLab/penn-ml-benchmarks

  59. Prestwich, S.D.: CNF encodings. In: Handbook of Satisfiability, 2nd edn. Frontiers in Artificial Intelligence and Applications, vol. 336, pp. 75–100. IOS Press (2021)

  60. Previti, A., Marques-Silva, J.: Partial MUS enumeration. In: AAAI (2013)

  61. Reiter, R.: A theory of diagnosis from first principles. Artif. Intell. 32(1), 57–95 (1987)

  62. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: KDD, pp. 1135–1144 (2016)

  63. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI, pp. 1527–1535 (2018)

  64. Rivest, R.L.: Learning decision lists. Mach. Learn. 2(3), 229–246 (1987). https://doi.org/10.1007/BF00058680

  65. Rudin, C., Ertekin, S.: Learning customized and optimized lists of rules with mathematical programming. Math. Program. Comput. 10(4), 659–702 (2018). https://doi.org/10.1007/s12532-018-0143-8

  66. Shih, A., Choi, A., Darwiche, A.: A symbolic approach to explaining Bayesian network classifiers. In: IJCAI, pp. 5103–5111 (2018)

  67. Shih, A., Choi, A., Darwiche, A.: Compiling Bayesian network classifiers into decision graphs. In: AAAI, pp. 7966–7974 (2019)

  68. Slack, D., Hilgard, S., Jia, E., Singh, S., Lakkaraju, H.: Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods. In: AIES, pp. 180–186 (2020)

  69. UCI Machine Learning Repository. https://archive.ics.uci.edu/ml

  70. Umans, C., Villa, T., Sangiovanni-Vincentelli, A.L.: Complexity of two-level logic minimization. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 25(7), 1230–1246 (2006)

  71. Wang, F., Rudin, C.: Falling rule lists. In: AISTATS (2015)

  72. Yang, F., Yang, Z., Cohen, W.W.: Differentiable learning of logical rules for knowledge base reasoning. In: NeurIPS, pp. 2319–2328 (2017)

  73. Yang, H., Rudin, C., Seltzer, M.I.: Scalable Bayesian rule lists. In: ICML, pp. 3921–3930 (2017)

Author information

Correspondence to Alexey Ignatiev.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Ignatiev, A., Marques-Silva, J. (2021). SAT-Based Rigorous Explanations for Decision Lists. In: Li, C.M., Manyà, F. (eds.) Theory and Applications of Satisfiability Testing – SAT 2021. Lecture Notes in Computer Science, vol. 12831. Springer, Cham. https://doi.org/10.1007/978-3-030-80223-3_18

  • DOI: https://doi.org/10.1007/978-3-030-80223-3_18

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-80222-6

  • Online ISBN: 978-3-030-80223-3

  • eBook Packages: Computer Science; Computer Science (R0)
