1 Introduction

Predictive models are increasingly being used to support consequential decision-making in a number of contexts, e.g., denying a loan, rejecting a job applicant, or prescribing life-altering medication. As a result, there is mounting social and legal pressure [64, 72] to provide explanations that help affected individuals understand “why a prediction was output”, as well as “how to act” to obtain a desired outcome. Answering these questions, for the different stakeholders involved, is one of the main goals of explainable machine learning [15, 19, 32, 37, 42, 53, 54].

In this context, several works have proposed to explain a model’s prediction for an affected individual using counterfactual explanations, which are defined as statements of “how the world would have (had) to be different for a desirable outcome to occur” [76]. Of specific importance are nearest counterfactual explanations: the instances most similar to the feature vector describing the individual that result in the desired prediction from the model [25, 35]. A closely related term is algorithmic recourse—the actions required for, or “the systematic process of reversing unfavorable decisions by algorithms and bureaucracies across a range of counterfactual scenarios”—which is argued to underwrite temporally extended agency and trust [70].

Counterfactual explanations have shown promise for practitioners and regulators to validate a model on metrics such as fairness and robustness [25, 58, 69]. However, in their raw form, such explanations do not seem to fulfill one of the primary objectives of “explanations as a means to help a data-subject act rather than merely understand” [76].

The translation of counterfactual explanations to recourse actions, i.e., to a recommendable set of actions to help an individual achieve a favorable outcome, was first explored in [69], where additional feasibility constraints were imposed to support the concept of actionable features (e.g., to prevent asking the individual to reduce their age or change their race). While a step in the right direction, this work and others that followed [25, 41, 49, 58] implicitly assume that the set of actions resulting in the desired output would directly follow from the counterfactual explanation. This arises from the assumption that “what would have had to be in the past” (retrodiction) not only translates to “what should be in the future” (prediction) but also to “what should be done in the future” (recommendation) [63]. We challenge this assumption and attribute the shortcoming of existing approaches to their lack of consideration for real-world properties, specifically the causal relationships governing the physical world in which actions are performed.

1.1 Motivating Examples

Example 1

Consider, for example, the setting in Fig. 1 where an individual has been denied a loan and seeks an explanation and recommendation on how to proceed. This individual has an annual salary (\(X_1\)) of \(\$75,000\) and an account balance (\(X_2\)) of \(\$25,000\), and the predictor grants a loan based on the binary output of \(h(X_1,X_2) = \mathrm {sgn}(X_1 + 5 \cdot X_2 - \$225,000)\). Existing approaches may identify nearest counterfactual explanations as another individual with an annual salary of \(\$100,000\) (\(+33\%\)) or a bank balance of \(\$30,000\) (\(+20\%\)), therefore encouraging the individual to reapply when either of these conditions is met. On the other hand, assuming actions take place in a world where home-seekers save \(30\%\) of their salary up to external fluctuations in circumstance (i.e., \(X_2 \,{:}{=}\,0.3X_1 + U_2\)), a salary increase of only \(+14\%\) to \(\$85,000\) would automatically result in \(\$3,000\) of additional savings, with a net positive effect on the loan-granting algorithm’s decision.
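To make the arithmetic concrete, the following minimal sketch reproduces the numbers of Example 1; treating a non-negative score as approval, and recovering \(U_2\) by abduction, are our illustrative reading of the example:

```python
import numpy as np

# Numeric check of Example 1. We treat a non-negative score as loan
# approval (an assumption; the text uses sgn(X1 + 5*X2 - 225000)).
h = lambda x1, x2: x1 + 5 * x2 - 225_000 >= 0

x1_f, x2_f = 75_000.0, 25_000.0        # factual individual
print(h(x1_f, x2_f))                   # False -> loan denied

# CFE-based suggestions change one feature in isolation:
print(h(100_000, x2_f))                # True, but needs +33% salary
print(h(x1_f, 30_000))                 # True, but needs +20% savings

# Causal view: X2 := 0.3*X1 + U2, so first recover U2 (abduction) ...
u2 = x2_f - 0.3 * x1_f                 # = 2,500
# ... then a +14% salary increase propagates to the savings:
x1_new = 85_000.0
x2_new = 0.3 * x1_new + u2             # = 28,000 (+3k savings)
print(x2_new, h(x1_new, x2_new))       # 28000.0 True
```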

Fig. 1. Illustration of an example bivariate causal generative process, showing both the graphical model \(\mathcal {G}\) (left) and the corresponding structural causal model (SCM) \(\mathcal {M}\) (right) [45]. In this example, \(X_1\) represents an individual’s annual salary, \(X_2\) represents their bank balance, and \(\hat{Y}\) denotes the output of a fixed deterministic predictor \(h\), predicting an individual’s eligibility to receive a loan. \(U_1\) and \(U_2\) denote unobserved (exogenous) random variables.

Example 2

Consider now another instance of the setting of Fig. 1 in which an agricultural team wishes to increase the yield of their rice paddy. While many factors influence yield (temperature, solar radiation, water supply, seed quality, ...), assume that the primary actionable capacity of the team is their choice of paddy location. Importantly, the altitude (\(X_1\)) at which the paddy sits has an effect on other variables. For example, the laws of physics may imply that a 100m increase in elevation results in an average decrease of 1\(^{\circ }\)C in temperature (\(X_2\)). Therefore, it is conceivable that a counterfactual explanation suggesting an increase in elevation for optimal yield, without consideration for downstream effects of the elevation increase on other variables (e.g., a decrease in temperature), may actually result in the prediction not changing.

These two examples illustrate the pitfalls of generating recourse actions directly from counterfactual explanations without consideration for the (causal) structure of the world in which the actions will be performed. Actions derived directly from counterfactual explanations may ask too much effort from the individual (Example 1) or may not even result in the desired output (Example 2).

We also remark that merely accounting for correlations between features (instead of modeling their causal relationships) would be insufficient as this would not align with the asymmetrical nature of causal interventions: for Example 1, increasing bank balance (\(X_2\)) would not lead to a higher salary (\(X_1\)), and for Example 2, increasing temperature (\(X_2\)) would not affect altitude (\(X_1\)), contrary to what would be predicted by a purely correlation-based approach.

1.2 Summary of Contributions and Structure of This Chapter

In the present work, we remedy this situation via a fundamental reformulation of the recourse problem: we rely on causal reasoning (Sect. 2.2) to incorporate knowledge of causal dependencies between features into the process of recommending recourse actions that, if acted upon, would result in a counterfactual instance that favorably changes the output of the predictive model (Sect. 2.1).

First, we illuminate the intrinsic limitations of an approach in which recourse actions are directly derived from counterfactual explanations (Sect. 3.1). We show that actions derived from pre-computed (nearest) counterfactual explanations may prove sub-optimal in the sense of higher-than-necessary cost, or, even worse, ineffective in the sense of not actually achieving recourse. To address these limitations, we emphasize that, from a causal perspective, actions correspond to interventions which not only model changes to the intervened-upon variables, but also downstream effects on the remaining (non-intervened-upon) variables. This insight leads us to propose a new framework of recourse through minimal interventions in an underlying structural causal model (SCM) (Sect. 3.2). We complement this formulation with a negative result showing that recourse guarantees are generally only possible if the true SCM is known (Sect. 3.3).

Second, since real-world SCMs are rarely known, we focus on the problem of algorithmic recourse under imperfect causal knowledge (Sect. 4). We propose two probabilistic approaches which allow us to relax the strong assumption of a fully-specified SCM. In the first (Sect. 4.1), we assume that the true SCM, while unknown, is an additive Gaussian noise model [23, 47]. We then use Gaussian processes (GPs) [79] to average predictions over a whole family of SCMs to obtain a distribution over counterfactual outcomes which forms the basis for individualised algorithmic recourse. In the second (Sect. 4.2), we consider a different, subpopulation-based (i.e., interventional rather than counterfactual) notion of recourse which allows us to further relax our assumptions by removing any restriction on the form of the structural equations. This approach proceeds by estimating the effect of interventions on individuals similar to the one for which we aim to achieve recourse (i.e., the conditional average treatment effect [1]), and relies on conditional variational autoencoders [62] to estimate the interventional distribution. In both cases, we assume that the causal graph is known or can be postulated from expert knowledge, as without such an assumption causal reasoning from observational data is not possible [48, Prop. 4.1]. To find minimum-cost interventions that achieve recourse with a given probability, we propose a gradient-based approach to solve the resulting optimisation problems (Sect. 4.3).

Our experiments (Sect. 5) on synthetic and semi-synthetic loan approval data show the need for probabilistic approaches to achieve algorithmic recourse in practice, as point estimates of the underlying true SCM often propose invalid recommendations or achieve recourse only at higher cost. Importantly, our results also suggest that subpopulation-based recourse is the right approach to adopt when assumptions such as additive noise do not hold. A user-friendly implementation of all methods that only requires specification of the causal graph and a training set is available at https://github.com/amirhk/recourse.

2 Preliminaries

In this work, we consider algorithmic recourse through the lens of causality. We begin by reviewing the main concepts.

2.1 XAI: Counterfactual Explanations and Algorithmic Recourse

Let \(\mathbf {X}\,=\,(X_1, ..., X_d)\) denote a tuple of random variables, or features, taking values \(\mathbf {x}\,=\,(x_1, ..., x_d)\in \mathcal {X}\,=\,\mathcal {X}_1\times ...\times \mathcal {X}_d\). Assume that we are given a binary probabilistic classifier \(h:\mathcal {X}\rightarrow [0,1]\) trained to make decisions about i.i.d. samples from the data distribution \(P_{\mathbf {X}}\).Footnote 1

For ease of illustration, we adopt the setting of loan approval as a running example, i.e., \(h(\mathbf {x})\ge 0.5\) denotes that a loan is granted and \(h(\mathbf {x})<0.5\) that it is denied. For a given (“factual”) individual \(\mathbf {x}^\texttt {F}\) that was denied a loan, \(h(\mathbf {x}^\texttt {F})<0.5\), we aim to answer the following questions: “Why did individual \(\mathbf {x}^\texttt {F}\) not get the loan?” and “What would they have to change, preferably with minimal effort, to increase their chances for a future application?”.

A popular approach to this task is to find so-called (nearest) counterfactual explanations [76], where the term “counterfactual” is meant in the sense of the closest possible world with a different outcome [36]. Translating this idea to our setting, a nearest counterfactual explanation \(\mathbf {x}^\texttt {CFE}\) for an individual \(\mathbf {x}^\texttt {F}\) is given by a solution to the following optimisation problem:

$$\begin{aligned} \mathbf {x}^\texttt {CFE}\in \mathop {\mathrm {arg\,min}}_{\mathbf {x}\in \mathcal {X}} \quad \text {dist}(\mathbf {x}, \mathbf {x}^\texttt {F}) \quad \mathop {\mathrm {subject}\;\mathrm {to}}\quad h(\mathbf {x})\ge 0.5, \end{aligned}$$
(1)

where \(\text {dist}(\cdot ,\cdot )\) is a distance on \(\mathcal {X}\times \mathcal {X}\), and additional constraints may be added to reflect plausibility, feasibility, or diversity of the obtained counterfactual explanations [22, 24, 25, 39, 41, 49, 58]. Most existing approaches have focused on providing solutions to (1) by exploring semantically meaningful choices of \(\mathrm {dist}(\cdot , \cdot )\) for measuring similarity between individuals (e.g., \(\ell _0, \ell _1, \ell _\infty \), percentile-shift), accommodating different predictive models \(h\) (e.g., random forest, multilayer perceptron), and realistic plausibility constraints \(\mathcal {P}\subseteq \mathcal {X}\).Footnote 2

Although nearest counterfactual explanations provide an understanding of the most similar set of features that result in the desired prediction, they stop short of giving explicit recommendations on how to act to realize this set of features. The lack of specification of the actions required to realize \(\mathbf {x}^\texttt {CFE}\) from \(\mathbf {x}^\texttt {F}\) leads to uncertainty and limited agency for the individual seeking recourse. To shift the focus from explaining a decision to providing recommendable actions to achieve recourse, Ustun et al. [69] reformulated (1) as:

$$\begin{aligned} \begin{aligned} \boldsymbol{\delta }^*\in \mathop {\mathrm {arg\,min}}_{\boldsymbol{\delta }\in \mathcal {F}} \quad \text {cost}^\texttt {F}(\boldsymbol{\delta }) \quad \mathop {\mathrm {subject}\;\mathrm {to}}\quad h(\mathbf {x}^\texttt {F}+ \boldsymbol{\delta }) \ge 0.5, \quad \mathbf {x}^\texttt {F}+ \boldsymbol{\delta }\in \mathcal {P}, \end{aligned} \end{aligned}$$
(2)

where \(\text {cost}^\texttt {F}(\cdot )\) is a user-specified cost function that encodes preferences between feasible actions from \(\mathbf {x}^\texttt {F}\), and \(\mathcal {F}\) and \(\mathcal {P}\) are optional sets of feasibility and plausibility constraints,Footnote 3 restricting the actions and the resulting counterfactual explanation, respectively. The feasibility constraints in (2), as introduced in [69], aim at restricting the set of features that the individual may act upon. For instance, recommendations should not ask individuals to change their gender or reduce their age. Henceforth, we refer to the optimization problem in (2) as the CFE-based recourse problem, where the emphasis is shifted from minimising a distance as in (1) to optimising a personalised cost function \(\text {cost}^\texttt {F}(\cdot )\) over a set of actions \(\boldsymbol{\delta }\) which individual \(\mathbf {x}^\texttt {F}\) can perform.
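For intuition, below is a hypothetical brute-force solver for (2) on the Example 1 classifier; the grid, the percentile-shift-style cost, and the trivial feasibility set are illustrative choices of ours, not the paper’s:

```python
import numpy as np

# Enumerate candidate actions delta on a grid, keep the feasible ones,
# and return the cheapest delta that flips the classifier, as in (2).
def cfe_recourse(x_f, h, deltas, cost, feasible):
    best, best_cost = None, np.inf
    for d in deltas:
        if feasible(d) and h(x_f + d) >= 0.5 and cost(d) < best_cost:
            best, best_cost = d, cost(d)
    return best

h = lambda x: float(x[0] + 5 * x[1] >= 225_000)      # Example 1 classifier
x_f = np.array([75_000.0, 25_000.0])
grid = np.linspace(0, 30_000, 61)
deltas = [np.array([a, b]) for a in grid for b in grid]
cost = lambda d: np.abs(d / np.array([75_000, 25_000])).sum()  # %-shift
print(cfe_recourse(x_f, h, deltas, cost, feasible=lambda d: True))
# -> [0., 5000.], i.e., the +20% savings suggestion of Example 1
```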

The seemingly innocent reformulation of the counterfactual explanation problem in (1) as a recourse problem in (2) is founded on two key assumptions.

Assumption 1

The feature-wise difference between factual and nearest counterfactual instances, \(\mathbf {x}^\texttt {CFE}- \mathbf {x}^\texttt {F}\), directly translates to minimal action sets \(\boldsymbol{\delta }^*\), such that performing the actions in \(\boldsymbol{\delta }^*\) starting from \(\mathbf {x}^\texttt {F}\) will result in \(\mathbf {x}^\texttt {CFE}\).

Assumption 2

There is a 1-1 mapping between \(\mathrm {dist}(\cdot , \mathbf {x}^\texttt {F})\) and \(\mathrm {cost}^\texttt {F}(\cdot )\), whereby more effortful actions incur larger distance and higher cost.

Unfortunately, these assumptions only hold in restrictive settings, rendering solutions of (2) sub-optimal or ineffective in many real-world scenarios. Specifically, Assumption 1 implies that features \(X_i\) for which \(\delta ^*_i\,=\,0\) are unaffected. However, this generally holds only if (i) the individual applies effort in a world where changing a variable does not have downstream effects on other variables (i.e., features are independent of each other); or (ii) the individual changes the value of a subset of variables while simultaneously enforcing that the values of all other variables remain unchanged (i.e., breaking dependencies between features). Beyond the sub-optimality of assuming an independent world as in (i), and the questionable feasibility of the non-altering actions in (ii), such non-altering actions may naturally incur a cost that the current definition of cost does not capture, and hence Assumption 2 does not hold either. Therefore, except in trivial cases where the model designer actively inputs pair-wise independent features (independently manipulable inputs) to the classifier \(h\) (see Fig. 2a), generating recommendations from counterfactual explanations in this manner, i.e., ignoring the potentially rich causal structure over \(\mathbf {X}\) and the resulting downstream effects that changes to some features may have on others (see Fig. 2b), warrants reconsideration. A number of authors have argued for the need to consider causal relations between variables when generating counterfactual explanations [25, 39, 41, 69, 76]; however, this has not yet been formalized.

Fig. 2. A view commonly adopted for counterfactual explanations (a) treats features as independently manipulable inputs to a given fixed and deterministic classifier h. In the causal approach to algorithmic recourse taken in this work, we instead view variables as causally related to each other by a structural causal model (SCM) \(\mathcal {M}\) with associated causal graph \(\mathcal {G}\) (b).

2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals

To reason formally about causal relations between features \(\mathbf {X}\,=\,(X_1, ..., X_d)\), we adopt the structural causal model (SCM) framework [45].Footnote 4 Specifically, we assume that the data-generating process of \(\mathbf {X}\) is described by an (unknown) underlying SCM \(\mathcal {M}\) of the general form

$$\begin{aligned} \mathcal {M}\,=\,(\mathbf {S}, P_{\mathbf {U}}), \quad \mathbf {S}\,=\,\{X_r :\,=\, f_r(\mathbf {X}_{\text {pa}(r)}, U_r)\}_{r\,=\,1}^d, \quad P_\mathbf {U}\,=\, P_{U_1}\times \ldots \times P_{U_d}, \end{aligned}$$
(3)

where the structural equations \(\mathbf {S}\) are a set of assignments generating each observed variable \(X_r\) as a deterministic function \(f_r\) of its causal parents \(\mathbf {X}_{\text {pa}(r)}\subseteq \mathbf {X}\setminus X_r\) and an unobserved noise variable \(U_r\). The assumption of mutually independent noises (i.e., a fully factorised \(P_\mathbf {U}\)) entails that there is no hidden confounding and is referred to as causal sufficiency. An SCM is often illustrated by its associated causal graph \(\mathcal {G}\), which is obtained by drawing a directed edge from each node in \(\mathbf {X}_{\text {pa}(r)}\) to \(X_r\) for \(r\in [d]:=\{1,\ldots , d\}\), see Fig. 1 and Fig. 2b for examples. We assume throughout that \(\mathcal {G}\) is acyclic. In this case, \(\mathcal {M}\) implies a unique observational distribution \(P_\mathbf {X}\), which factorises over \(\mathcal {G}\), defined as the push-forward of \(P_\mathbf {U}\) via \(\mathbf {S}\).Footnote 5

Importantly, the SCM framework also entails interventional distributions describing a situation in which some variables are manipulated externally. E.g., using the do-operator, an intervention which fixes \(\mathbf {X}_\mathcal {I}\) to \({\boldsymbol{\theta }}\) (where \(\mathcal {I}\subseteq [d]\)) is denoted by \(do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\). The corresponding distribution of the remaining variables \(\mathbf {X}_{-\mathcal {I}}\) can be computed by replacing the structural equations for \(\mathbf {X}_\mathcal {I}\) in \(\mathbf {S}\) to obtain the new set of equations \(\mathbf {S}^{do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})}\). The interventional distribution \(P_{\mathbf {X}_{-\mathcal {I}}|do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})}\) is then given by the observational distribution implied by the manipulated SCM \(\left( \mathbf {S}^{do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})}, P_\mathbf {U}\right) \).
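A minimal sketch of an SCM in the sense of (3), supporting both observational ancestral sampling and do-interventions; the two-variable equations mirror Fig. 1, and the concrete noise distributions are illustrative assumptions:

```python
import numpy as np

class SCM:
    """Toy SCM as in (3): structural equations + independent noise."""
    def __init__(self, equations, noise_samplers):
        self.eq = equations          # {name: f(values_so_far, u)}
        self.noise = noise_samplers  # {name: n -> array of noise draws}

    def sample(self, n, do=None):
        """Ancestral sampling; `do` fixes variables, as in S^{do(X_I=theta)}."""
        do, data = do or {}, {}
        for name, f in self.eq.items():   # insertion order = topological
            u = self.noise[name](n)
            data[name] = np.full(n, do[name]) if name in do else f(data, u)
        return data

scm = SCM(
    equations={"x1": lambda d, u: u,
               "x2": lambda d, u: 0.3 * d["x1"] + u},
    noise_samplers={"x1": lambda n: np.random.normal(75_000, 10_000, n),
                    "x2": lambda n: np.random.normal(0, 2_000, n)},
)
obs = scm.sample(1_000)                      # observational P_X
intv = scm.sample(1_000, do={"x1": 85_000})  # P_{X2 | do(X1 = 85k)}
```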

Similarly, an SCM also implies distributions over counterfactuals—statements about a world in which a hypothetical intervention was performed all else being equal. For example, given observation \(\mathbf {x}^\texttt {F}\) we can ask what would have happened if \(\mathbf {X}_\mathcal {I}\) had instead taken the value \({\boldsymbol{\theta }}\). We denote the counterfactual variable by \(\mathbf {X}(do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}))|\mathbf {x}^\texttt {F}\), whose distribution can be computed in three steps [45]:

  1. Abduction: compute the posterior distribution \(P_{\mathbf {U}|\mathbf {x}^\texttt {F}}\) of the exogenous variables \(\mathbf {U}\) given the factual observation \(\mathbf {x}^\texttt {F}\);

  2. Action: perform the intervention \(do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\) by replacing the structural equations for \(\mathbf {X}_\mathcal {I}\) by \(\mathbf {X}_\mathcal {I}:={\boldsymbol{\theta }}\) to obtain the new structural equations \(\mathbf {S}^{do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})}\);

  3. Prediction: the counterfactual distribution \(P_{\mathbf {X}(do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}))|\mathbf {x}^\texttt {F}}\) is the distribution induced by the resulting SCM \(\left( \mathbf {S}^{do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})}, P_{\mathbf {U}|\mathbf {x}^\texttt {F}}\right) \).

For instance, the counterfactual variable for individual \(\mathbf {x}^\texttt {F}\) had action \(a\,=\,do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\in \mathcal {F}\) been performed would be \(\mathbf {X}^\texttt {SCF}(a) := \mathbf {X}(a) | \mathbf {x}^\texttt {F}\). For a worked-out example of computing counterfactuals in SCMs, we refer to Sect. 3.2.

3 Causal Recourse Formulation

3.1 Limitations of CFE-Based Recourse

Here, we use causal reasoning to formalize the limitations of the CFE-based recourse approach in (2). To this end, we first reinterpret the actions resulting from solving the CFE-based recourse problem, i.e., \(\boldsymbol{\delta }^*\), as structural interventions by defining the set of indices \(\mathcal {I}\) of observed variables that are intervened upon.

Definition 1

(CFE-based actions). Given an individual \(\mathbf {x}^\texttt {F}\) in world \(\mathcal {M}\) and a solution \(\boldsymbol{\delta }^*\) of (2), denote by \(\mathcal {I}= \{i ~|~ \delta ^*_i \ne 0\}\) the set of indices of observed variables that are acted upon. A CFE-based action then refers to a set of structural interventions of the form \(a^\texttt {CFE}(\boldsymbol{\delta }^*,\mathbf {x}^\texttt {F}) := \mathrm {do}( \{ X_i \,{:}{=}\, x^F_i + \delta ^*_i \}_{i \in \mathcal {I}} )\).

Using Definition 1, we can derive the following key results that provide necessary and sufficient conditions for CFE-based actions to guarantee recourse.

Proposition 1

A CFE-based action \(a^\texttt {CFE}(\boldsymbol{\delta }^*,\mathbf {x}^\texttt {F})\) in general (i.e., for arbitrary underlying causal models) results in the structural counterfactual \(\mathbf {x}^\texttt {SCF}\,=\, \mathbf {x}^\texttt {CFE}:= \mathbf {x}^\texttt {F}+ \boldsymbol{\delta }^*\) and thus guarantees recourse (i.e., \(h(\mathbf {x}^\texttt {SCF})\ne h(\mathbf {x}^\texttt {F})\)) if and only if the set of descendants of the acted-upon variables determined by \(\mathcal {I}\) is the empty set.

Corollary 1

If all features in the true world \(\mathcal {M}\) are mutually independent, (i.e., if they are all root-nodes in the causal graph), then CFE-based actions always guarantee recourse.

While the above results are formally proven in Appendix A of [28], we provide a sketch of the proof below. If the intervened-upon variables do not have descendants, then by definition \(\mathbf {x}^\texttt {SCF}\,=\, \mathbf {x}^\texttt {CFE}\). Otherwise, the value of the descendants will depend on the counterfactual value of their parents, leading to a structural counterfactual that does not resemble the nearest counterfactual explanation, \(\mathbf {x}^\texttt {SCF}\ne \mathbf {x}^\texttt {CFE}\), and thus may not result in recourse. Moreover, in an independent world the set of descendants of all the variables is by definition the empty set.

Unfortunately, the independent world assumption is not realistic, as it requires all the features selected to train the predictive model \(h\) to be independent of each other. Moreover, limiting changes to only those variables without descendants may unnecessarily limit the agency of the individual: in Example 1, restricting the individual to only changing their bank balance, without, e.g., pursuing a new or side job to increase their income, would be limiting. Thus, for a given non-independent \(\mathcal {M}\) capturing the true causal dependencies between features, CFE-based actions require the individual seeking recourse to enforce (at least partially) an independent post-intervention model \(\mathcal {M}^{a^\texttt {CFE}}\) (so that Assumption 1 holds), by intervening on all the observed variables for which \(\delta _i \ne 0\) as well as on their descendants (even if their \(\delta _i \,=\, 0\)). However, such a requirement suffers from two main issues. First, it conflicts with Assumption 2, since holding the value of variables fixed may still imply potentially infeasible and costly interventions in \(\mathcal {M}\) to sever all the incoming edges to such variables, and even then the action may be ineffective and not change the prediction (see Example 2). Second, as will be proven in the next section (see also Example 1), CFE-based actions may still be suboptimal, as they do not benefit from the causal effect of actions towards changing the prediction. Thus, even when equipped with knowledge of causal dependencies, recommending actions directly from counterfactual explanations in the manner of existing approaches is not satisfactory.

3.2 Recourse Through Minimal Interventions

We have demonstrated that actions which immediately follow from counterfactual explanations may require unrealistic assumptions, or alternatively, result in sub-optimal or even infeasible recommendations. To address these limitations, we rewrite the recourse problem so that instead of finding the minimal (independent) shift of features as in (2), we seek the minimal-cost set of actions (in the form of structural interventions) that results in a counterfactual instance yielding the favorable output from \(h\). For simplicity, we present the formulation for the case of an invertible SCM (i.e., one with invertible structural equations \(\mathbf {S}\)) such that the ground-truth counterfactual \(\mathbf {x}^\texttt {SCF}\,=\, \mathbf {S}^{a}(\mathbf {S}^{-1}({\mathbf {x}^\texttt {F}}))\) is a unique point. The resulting optimisation formulation is as follows:

$$\begin{aligned} \begin{aligned} a^*\in \mathop {\mathrm {arg\,min}}_{a\in \mathcal {F}} \quad \text {cost}^\texttt {F}(a) \quad \mathop {\mathrm {subject}\;\mathrm {to}}\quad h(\mathbf {x}^\texttt {SCF}(a))&\ge 0.5, \\ \quad \mathbf {x}^\texttt {SCF}(a)&\,=\,\mathbf {x}(a) | \mathbf {x}^\texttt {F}\in \mathcal {P}, \end{aligned} \end{aligned}$$
(4)

where \(a^*\in \mathcal {F}\) directly specifies the minimal-cost set of feasible actions to be performed, with the effort of candidate actions measured by \(\text {cost}^\texttt {F}(\cdot )\).Footnote 6

Importantly, using the formulation in (4) it is now straightforward to show the suboptimality of CFE-based actions (proof in Appendix A of [28]):

Proposition 2

Given an individual \(\mathbf {x}^\texttt {F}\) observed in world \(\mathcal {M}\), a set of feasible actions \(\mathcal {F}\), and a solution \(a^*\in \mathcal {F}\) of (4), assume that there exists a CFE-based action \(a^\texttt {CFE}(\boldsymbol{\delta }^*,\mathbf {x}^\texttt {F}) \in \mathcal {F}\) (see Definition 1) that achieves recourse, i.e., \(h(\mathbf {x}^\texttt {F}) \ne h(\mathbf {x}^\texttt {CFE})\). Then, \(\text {cost}^\texttt {F}(a^*) \le \text {cost}^\texttt {F}(a^\texttt {CFE}) \).

Thus, for a known causal model capturing the dependencies among observed variables, and a family of feasible interventions, the optimization problem in (4) yields Recourse through Minimal Interventions (MINT). Generating minimal interventions through solving (4) requires that we be able to compute the structural counterfactual, \(\mathbf {x}^\texttt {SCF}\), of the individual \(\mathbf {x}^\texttt {F}\) in world \(\mathcal {M}\), given any feasible action \(a\in \mathcal {F}\). To this end, and for the purpose of demonstration, we consider a class of invertible SCMs, specifically, additive noise models (ANM) [23], where the structural equations \(\mathbf {S}\) are of the form

$$\begin{aligned} \mathbf {S}\,=\,\{X_r:=f_r(\mathbf {X}_{\text {pa}(r)})+U_r\}_{r=1}^d \quad \implies \quad u_r^\texttt {F}=x_r^\texttt {F}-f_r(\mathbf {x}_{\text {pa}(r)}^\texttt {F}), \quad r\in [d], \end{aligned}$$
(5)

and propose to use the three steps of structural counterfactuals in [45] to assign a single counterfactual \(\mathbf {x}^\texttt {SCF}(a):=\mathbf {x}(a)|\mathbf {x}^\texttt {F}\) to each action \(a\,=\,do(\mathbf {X}_\mathcal {I}={\boldsymbol{\theta }})\in \mathcal {F}\) as below.

Fig. 3. The structural causal model (graph and equations) for the working example and demonstration in Sect. 3.2.

Working Example. Consider the model in Fig. 3, where \(\{U_i\}_{i=1}^4\) are mutually independent exogenous variables, and \(\{f_i\}_{i=1}^4\) are deterministic (linear or nonlinear) functions. Let \(\mathbf {x}^\texttt {F}= (x^\texttt {F}_1, x^\texttt {F}_2, x^\texttt {F}_3, x^\texttt {F}_4)^\top \) be the observed features belonging to the (factual) individual seeking recourse. Also, let \(\mathcal {I}\) denote the set of indices corresponding to the subset of endogenous variables that are intervened upon according to the action set \(a\). Then, we obtain a structural counterfactual, \(\mathbf {x}^\texttt {SCF}(a) := \mathbf {x}(a) | \mathbf {x}^\texttt {F}= \mathbf {S}^{a}(\mathbf {S}^{-1}({\mathbf {x}^\texttt {F}}))\), by applying the Abduction-Action-Prediction steps [46] as follows:

Step 1. Abduction uniquely determines the value of all exogenous variables \(\mathbf {U}\) given the observed evidence \(\mathbf {X}=\mathbf {x}^\texttt {F}\):

$$\begin{aligned} \begin{aligned} u_1&\,=\, x^\texttt {F}_1 , \\ u_2&\,=\, x^\texttt {F}_2 , \\ u_3&\,=\, x^\texttt {F}_3 - f_3(x^\texttt {F}_1, x^\texttt {F}_2), \\ u_4&\,=\, x^\texttt {F}_4 - f_4(x^\texttt {F}_3) . \end{aligned} \end{aligned}$$
(6)

Step 2. Action modifies the SCM according to the hypothetical interventions, \(\mathrm {do}(\{X_i \,{:}{=}\, a_i\}_{i \in \mathcal {I}})\) (where \(a_i = x^F_i + \delta _i\)), yielding \(\mathbf {S}^{a}\):

$$\begin{aligned} \begin{aligned} X_1&{:}{=}[1 \in \mathcal {I}] \cdot a_1 + [1 \notin \mathcal {I}] \cdot U_1 , \\ X_2&{:}{=}[2 \in \mathcal {I}] \cdot a_2 + [2 \notin \mathcal {I}] \cdot U_2 , \\ X_3&{:}{=}[3 \in \mathcal {I}] \cdot a_3 + [3 \notin \mathcal {I}] \cdot \big ( f_3(X_1, X_2) + U_3 \big ), \\ X_4&{:}{=}[4 \in \mathcal {I}] \cdot a_4 + [4 \notin \mathcal {I}] \cdot \big ( f_4(X_3) + U_4 \big ) , \end{aligned} \end{aligned}$$
(7)

where \([\cdot ]\) denotes the Iverson bracket.

Step 3. Prediction recursively determines the values of all endogenous variables based on the computed exogenous variables \(\{u_i\}_{i=1}^4\) from Step 1 and \(\mathbf {S}^{a}\) from Step 2, as:

$$\begin{aligned} \begin{aligned} x^\texttt {SCF}_1&{:}{=}[1 \in \mathcal {I}] \cdot a_1 + [1 \notin \mathcal {I}] \cdot \big ( u_1 \big ) , \\ x^\texttt {SCF}_2&{:}{=}[2 \in \mathcal {I}] \cdot a_2 + [2 \notin \mathcal {I}] \cdot \big ( u_2 \big ) , \\ x^\texttt {SCF}_3&{:}{=}[3 \in \mathcal {I}] \cdot a_3 + [3 \notin \mathcal {I}] \cdot \big ( f_3(x^\texttt {SCF}_1, x^\texttt {SCF}_2) + u_3 \big ), \\ x^\texttt {SCF}_4&{:}{=}[4 \in \mathcal {I}] \cdot a_4 + [4 \notin \mathcal {I}] \cdot \big ( f_4(x^\texttt {SCF}_3) + u_4 \big ) . \end{aligned} \end{aligned}$$
(8)
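The same three steps can be written out directly in code. A sketch for the Fig. 3 model follows, with placeholder choices for \(f_3\) and \(f_4\) (the text only requires them to be deterministic functions):

```python
import numpy as np

# Abduction-Action-Prediction for the working example, following (6)-(8).
# f3 and f4 are illustrative placeholders; any deterministic functions work.
f3 = lambda x1, x2: 0.5 * x1 + 0.25 * x2
f4 = lambda x3: np.tanh(x3)

def counterfactual(x_f, a):
    """x_f: factual (x1, x2, x3, x4); a: dict {1-based index: new value}."""
    x1, x2, x3, x4 = x_f
    # Step 1 (abduction, eq. 6): recover the exogenous noise.
    u1, u2 = x1, x2
    u3 = x3 - f3(x1, x2)
    u4 = x4 - f4(x3)
    # Steps 2-3 (action + prediction, eqs. 7-8), in topological order:
    s1 = a.get(1, u1)
    s2 = a.get(2, u2)
    s3 = a.get(3, f3(s1, s2) + u3)
    s4 = a.get(4, f4(s3) + u4)
    return np.array([s1, s2, s3, s4])

x_f = np.array([1.0, 2.0, 1.5, 0.8])
print(counterfactual(x_f, {1: 2.0}))  # do(X1 := 2): X3, X4 update downstream
```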

General Assignment Formulation for ANMs. As we have not made any restrictive assumptions about the structural equations (only that we operate with additive noise modelsFootnote 7 where noise variables are pairwise independent), the solution for the working example naturally generalizes to SCMs corresponding to other DAGs with more variables. The assignment of structural counterfactual values can generally be written as:

$$\begin{aligned} \begin{aligned} x^\texttt {SCF}_i \,=\, [i \in \mathcal {I}] \cdot (x^\texttt {F}_i + \delta _i) + [i \notin \mathcal {I}] \cdot \big ( x^\texttt {F}_i + f_i(\boldsymbol{\mathrm {pa}}^\texttt {SCF}_i) - f_i(\boldsymbol{\mathrm {pa}}^\texttt {F}_i) \big ). \end{aligned} \end{aligned}$$
(9)

In words, the counterfactual value of the i-th feature, \(x^\texttt {SCF}_i\), takes the value \(x^\texttt {F}_i +\delta _i\) if that feature is intervened upon (i.e., \(i \in \mathcal {I}\)). Otherwise, \(x^\texttt {SCF}_i\) is computed as a function of both the factual and counterfactual values of its parents, denoted respectively by \(f_i(\boldsymbol{\mathrm {pa}}^\texttt {F}_i)\) and \(f_i(\boldsymbol{\mathrm {pa}}^\texttt {SCF}_i)\). The closed-form expression in (9) can replace the counterfactual constraint in (4), i.e.,

$$\mathbf {x}^\texttt {SCF}(a) := \mathbf {x}(a) | \mathbf {x}^\texttt {F}= \mathbf {S}^{a}(\mathbf {S}^{-1}({\mathbf {x}^\texttt {F}})),$$

after which the optimization problem may be solved by building on existing frameworks for generating nearest counterfactual explanations, including gradient-based, evolutionary-based, heuristics-based, or verification-based approaches as referenced in Sect. 2.1. It is important to note that unlike CFE-based actions, where the precise post-intervention values of all covariates are specified, MINT-based actions require the user to focus only on the features upon which interventions are to be performed, which may better align with factors under the user’s control (e.g., some features may be non-actionable but mutable through changes to other features; see also [6]).
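For reference, here is a sketch of the general assignment (9); the parent-list encoding of the graph (nodes indexed in topological order) and the convention that root functions return zero are interface assumptions of ours:

```python
def scf_anm(x_f, delta, intervened, parents, f):
    """Structural counterfactual (9) for an additive noise model.
    x_f: factual values with nodes indexed in topological order;
    intervened: the set I; delta: per-feature shifts; parents[i]: parent
    indices of node i; f[i]: structural function (returns 0 for roots)."""
    x_scf = [None] * len(x_f)
    for i in range(len(x_f)):
        if i in intervened:
            x_scf[i] = x_f[i] + delta[i]
        else:
            pa_f = [x_f[j] for j in parents[i]]
            pa_scf = [x_scf[j] for j in parents[i]]
            # note: x_f[i] - f[i](pa_f) is the abducted noise u_i of (5)
            x_scf[i] = x_f[i] + f[i](pa_scf) - f[i](pa_f)
    return x_scf
```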

3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations

In practice, the structural counterfactual \(\mathbf {x}^\texttt {SCF}(a)\) can only be computed using an approximate (and likely imperfect) SCM \(\mathcal {M}\,=\,(\mathbf {S}, P_\mathbf {U})\), which is estimated from data assuming a particular form of the structural equation as in (5). However, assumptions on the form of the true structural equations \(\mathbf {S}_\star \) are generally untestable—not even with a randomized experiment—since there exist multiple SCMs which imply the same observational and interventional distributions, but entail different structural counterfactuals.

Example 3

(adapted from 6.19 in [48]). Consider the following two SCMs \(\mathcal {M}_A\) and \(\mathcal {M}_B\) which arise from the causal graph of Fig. 2b by choosing \(U_1, U_2 \sim \text {Bernoulli}(0.5)\) and \(U_3\sim \text {Uniform}(\{0, \ldots , K\})\) independently in both \(\mathcal {M}_A\) and \(\mathcal {M}_B\), with structural equations

$$\begin{aligned} X_1&:= U_1,&\text {in} \quad&\{\mathcal {M}_A, \mathcal {M}_B\},\\ X_2&:= X_1(1-U_2),&\text {in} \quad&\{\mathcal {M}_A, \mathcal {M}_B\},\\ X_3&:= \mathbb {I}_{X_1\ne X_2}(\mathbb {I}_{U_3>0}X_1+\mathbb {I}_{U_3=0}X_2) + \mathbb {I}_{X_1=X_2}U_3,&\text {in} \quad&\mathcal {M}_A,\\ X_3&:= \mathbb {I}_{X_1\ne X_2}(\mathbb {I}_{U_3>0}X_1+\mathbb {I}_{U_3=0}X_2) + \mathbb {I}_{X_1=X_2}(K-U_3),&\text {in} \quad&\mathcal {M}_B. \end{aligned}$$

Then \(\mathcal {M}_A\) and \(\mathcal {M}_B\) both imply exactly the same observational and interventional distributions, and thus are indistinguishable from empirical data. However, having observed \(\mathbf {x}^\texttt {F}\,=\,(1, 0, 0)\), they predict different counterfactuals had \(X_1\) been 0, i.e., \(\mathbf {x}^\texttt {SCF}(X_1\,=\,0)\,=\,(0,0,0)\) and (0, 0, K), respectively.Footnote 8
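This disagreement can be verified mechanically; a minimal sketch for \(K=2\), where the noise values follow from abduction on \(\mathbf {x}^\texttt {F}=(1,0,0)\):

```python
# Numeric check of Example 3 for K = 2: both models agree on the factual
# world but disagree on the counterfactual of X3 under do(X1 = 0).
K = 2

def simulate(model, u1, u2, u3, do_x1=None):
    x1 = u1 if do_x1 is None else do_x1
    x2 = x1 * (1 - u2)
    if x1 != x2:
        x3 = x1 if u3 > 0 else x2
    else:
        x3 = u3 if model == "A" else K - u3
    return (x1, x2, x3)

# Abduction from x_F = (1, 0, 0) gives u1 = 1, u2 = 1, u3 = 0 in either
# model (the X1 != X2 branch is shared). Now replay with do(X1 = 0):
print(simulate("A", 1, 1, 0, do_x1=0))   # (0, 0, 0)
print(simulate("B", 1, 1, 0, do_x1=0))   # (0, 0, 2) = (0, 0, K)
```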

Confirming or refuting an assumed form of \(\mathbf {S}_\star \) would thus require counterfactual data which is, by definition, never available. Thus, Example 3 proves the following proposition by contradiction.

Proposition 3

(Lack of Recourse Guarantees). If the set of descendants of intervened-upon variables is non-empty, algorithmic recourse can be guaranteed in general (i.e., without further restrictions on the underlying causal model) only if the true structural equations are known, irrespective of the amount and type of available data.

Remark 1

The converse of Proposition 3 does not hold. E.g., given \(\mathbf {x}^\texttt {F}\,=\,(1,0,1)\) in Example 3, abduction in either model yields \(U_3>0\), so the counterfactual of \(X_3\) cannot be predicted exactly.

Building on the framework of [28], we next present two novel approaches for causal algorithmic recourse under unknown structural equations. The first approach in Sect. 4.1 aims to estimate the counterfactual distribution under the assumption of ANMs (5) with Gaussian noise for the structural equations. The second approach in Sect. 4.2 makes no assumptions about the structural equations, and instead of approximating the structural equations, it considers the effect of interventions on a sub-population similar to \(\mathbf {x}^\texttt {F}\). We recall that the causal graph is assumed to be known throughout.

4 Recourse Under Imperfect Causal Knowledge

4.1 Probabilistic Individualised Recourse

Since the true SCM \(\mathcal {M}_\star \) is unknown, one approach to solving (4) is to learn an approximate SCM \(\mathcal {M}\) within a given model class from training data \(\{\mathbf {x}^i\}_{i\,=\,1}^n\). For example, for an ANM (5) with zero-mean noise, the functions \(f_r\) can be learned via linear or kernel (ridge) regression of \(X_r\) given \(\mathbf {X}_{\text {pa}(r)}\) as input. We refer to these approaches as \(\mathcal {M}_{\textsc {lin}}\) and \(\mathcal {M}_{\textsc {kr}}\), respectively. \(\mathcal {M}\) can then be used in place of \(\mathcal {M}_\star \) to infer the noise values as in (5), and subsequently to predict a single-point counterfactual \(\mathbf {x}^\texttt {SCF}(a)\) to be used in (4). However, the learned causal model \(\mathcal {M}\) may be imperfect and thus lead to wrong counterfactuals due to, e.g., the finite sample of observed data or, more importantly, model misspecification (i.e., assuming a wrong parametric form for the structural equations).

To address this limitation, we adopt a Bayesian approach to account for the uncertainty in the estimation of the structural equations. Specifically, we assume additive Gaussian noise and rely on probabilistic regression using a Gaussian process (GP) prior over the functions \(f_r\); for an overview of regression with GPs, we refer to [79, § 2].

Definition 2

(GP-SCM). A Gaussian process SCM (GP-SCM) over \(\mathbf {X}\) refers to the model

$$\begin{aligned} X_r := f_r(\mathbf {X}_{\text {pa}(r)})+ U_r, \quad \quad f_r\sim \mathcal {GP}(0, k_{r}), \quad \quad U_r\sim \mathcal {N}(0, \sigma ^2_r), \quad \quad r\in [d], \end{aligned}$$
(10)

with covariance functions \(k_{r}:\mathcal {X}_{\text {pa}(r)}\times \mathcal {X}_{\text {pa}(r)}\rightarrow \mathbb {R}\), e.g., RBF kernels for continuous \(X_{\text {pa}(r)}\).

While GPs have previously been studied in a causal context for structure learning [16, 73], estimating treatment effects [2, 56], or learning SCMs with latent variables and measurement error [61], our goal here is to account for the uncertainty over \(f_r\) in the computation of the posterior over \(U_r\), and thus to obtain a counterfactual distribution, as summarised in the following propositions.

Proposition 4

(GP-SCM Noise Posterior). Let \(\{\mathbf {x}^i\}_{i\,=\,1}^n\) be an observational sample from (10). For each \(r\in [d]\) with non-empty parent set \(|\text {pa}(r)|>0\), the posterior distribution of the noise vector \(\mathbf {u}_r\,=\,(u_r^1, ...,u_r^n)\), conditioned on \(\mathbf {x}_r\,=\,(x_r^1, ..., x_r^n)\) and \(\mathbf {X}_{\text {pa}(r)}\,=\,(\mathbf {x}_{\text {pa}(r)}^1,...,\mathbf {x}_{\text {pa}(r)}^n)\), is given by

$$\begin{aligned} \mathbf {u}_r|\mathbf {X}_{\text {pa}(r)}, \mathbf {x}_r \sim \mathcal {N}\left( \sigma ^2_r (\mathbf {K}+\sigma ^2_r \mathbf {I})^{-1}\mathbf {x}_r, \sigma ^2_r\left( \mathbf {I}-\sigma ^2_r(\mathbf {K}+\sigma ^2_r \mathbf {I})^{-1}\right) \right) , \end{aligned}$$
(11)

where \(\mathbf {K}:=\big (k_r\big (\mathbf {x}_{\text {pa}(r)}^i, \mathbf {x}_{\text {pa}(r)}^j\big )\big )_{ij}\) denotes the Gram matrix.

Next, in order to compute counterfactual distributions, we rely on ancestral sampling (according to the causal graph) of the descendants of the intervention targets \(\mathbf {X}_\mathcal {I}\) using the noise posterior of (11). The counterfactual distribution of each descendant \(X_r\) is given by the following proposition.

Proposition 5

(GP-SCM Counterfactual Distribution). Let \(\{\mathbf {x}^i\}_{i=1}^n\) be an observational sample from (10). Then, for \(r\in [d]\) with \(|\text {pa}(r)|>0\), the counterfactual distribution over \(X_r\) had \(\mathbf {X}_{\text {pa}(r)}\) been \(\tilde{\mathbf {x}}_{\text {pa}(r)}\) (instead of \(\mathbf {x}^\texttt {F}_{\text {pa}(r)}\)) for individual \(\mathbf {x}^\texttt {F}\in \{\mathbf {x}^i\}_{i=1}^n\) is given by

$$\begin{aligned} \begin{aligned}&X_r(\mathbf {X}_{\text {pa}(r)}\,=\,\tilde{\mathbf {x}}_{\text {pa}(r)})| \mathbf {x}^\texttt {F}, \{\mathbf {x}^i\}_{i=1}^n \\&\quad \quad \quad \sim \mathcal {N}\big (\mu ^\texttt {F}_r+\tilde{\mathbf {k}}^T(\mathbf {K}+\sigma ^2_r\mathbf {I})^{-1}\mathbf {x}_r,\, s^\texttt {F}_r + \tilde{k} - \tilde{\mathbf {k}}^T (\mathbf {K}+\sigma ^2_r\mathbf {I})^{-1} \tilde{\mathbf {k}} \big ), \end{aligned} \end{aligned}$$
(12)

where \(\tilde{k}:=k_r(\tilde{\mathbf {x}}_{\text {pa}(r)}, \tilde{\mathbf {x}}_{\text {pa}(r)})\), \(\tilde{\mathbf {k}}:=\big (k_r(\tilde{\mathbf {x}}_{\text {pa}(r)}, \mathbf {x}_{\text {pa}(r)}^1), \ldots , k_r(\tilde{\mathbf {x}}_{\text {pa}(r)}, \mathbf {x}_{\text {pa}(r)}^n)\big )\), \(\mathbf {x}_r\) and \(\mathbf {K}\) as defined in Proposition 4, and \(\mu ^\texttt {F}_r\) and \(s^\texttt {F}_r\) are the posterior mean and variance of \(u^\texttt {F}_r\) given by (11).
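A numpy sketch of Propositions 4 and 5 for a single node \(X_r\), assuming an RBF kernel with fixed hyperparameters (in practice these would be fit, e.g., by maximising the marginal likelihood):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix between row-stacked parent vectors A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def noise_posterior(X_pa, x_r, sigma2):
    """Eq. (11): posterior mean and covariance of the noise vector u_r."""
    n = len(x_r)
    A = np.linalg.inv(rbf(X_pa, X_pa) + sigma2 * np.eye(n))
    return sigma2 * A @ x_r, sigma2 * (np.eye(n) - sigma2 * A)

def cf_distribution(X_pa, x_r, sigma2, idx_f, pa_tilde):
    """Eq. (12): counterfactual mean/variance of X_r for individual
    idx_f, had its parents taken the value pa_tilde."""
    mu_u, S_u = noise_posterior(X_pa, x_r, sigma2)
    A = np.linalg.inv(rbf(X_pa, X_pa) + sigma2 * np.eye(len(x_r)))
    k_vec = rbf(pa_tilde[None, :], X_pa)[0]
    k_tt = rbf(pa_tilde[None, :], pa_tilde[None, :])[0, 0]
    mean = mu_u[idx_f] + k_vec @ A @ x_r
    var = S_u[idx_f, idx_f] + k_tt - k_vec @ A @ k_vec
    return mean, var
```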

Fig. 4. Illustration of point- and subpopulation-based recourse approaches.

All proofs can be found in Appendix A of [27]. We can now generalise the recourse problem (4) to our probabilistic setting by replacing the single-point counterfactual \(\mathbf {x}^\texttt {SCF}(a)\) with the counterfactual random variable \(\mathbf {X}^\texttt {SCF}(a):=\mathbf {X}(a)|\mathbf {x}^\texttt {F}\). As a consequence, it no longer makes sense to consider a hard constraint of the form \(h(\mathbf {x}^\texttt {SCF}(a))>0.5\), i.e., that the prediction needs to change. Instead, we can reason about the expected classifier output under the counterfactual distribution, leading to the following probabilistic version of the individualised recourse optimisation problem:

$$\begin{aligned} \begin{aligned}&\min _{a=do(\mathbf {X}_\mathcal {I}={\boldsymbol{\theta }})\in \mathcal {F}}\quad \text {cost}^\texttt {F}(a) \\&\mathop {\mathrm {subject}\;\mathrm {to}}\quad \mathbb {E}_{\mathbf {X}^\texttt {SCF}(a)}\left[ h\left( \mathbf {X}^\texttt {SCF}(a)\right) \right] \ge \texttt {thresh}(a). \end{aligned} \end{aligned}$$
(13)

Note that the threshold \(\texttt {thresh}(a)\) is allowed to depend on a. For example, an intuitive choice is

$$\begin{aligned} \textstyle \texttt {thresh}(a) = 0.5 +\gamma _\textsc {lcb} \sqrt{\text {Var}_{\mathbf {X}^\texttt {SCF}(a)}\left[ h\left( \mathbf {X}^\texttt {SCF}(a)\right) \right] } \end{aligned}$$
(14)

which has the interpretation of the lower-confidence bound crossing the decision boundary of 0.5. Note that larger values of the hyperparameter \(\gamma _\textsc {lcb}\) lead to a more conservative approach to recourse, while for \(\gamma _\textsc {lcb}\,=\,0\) merely crossing the decision boundary with \(\ge 50\%\) chance suffices.
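A minimal Monte Carlo check of this constraint; the sampler and classifier interfaces are assumptions of ours:

```python
import numpy as np

# Constraint of (13) with threshold (14): an action is accepted if the
# lower confidence bound of the classifier output under the counterfactual
# distribution crosses the decision boundary of 0.5.
def recourse_constraint(h, sample_scf, M=100, gamma_lcb=2.0):
    """sample_scf: () -> one draw of X^SCF(a); h: classifier into [0,1]."""
    outs = np.array([h(sample_scf()) for _ in range(M)])
    return outs.mean() - gamma_lcb * outs.std() >= 0.5
```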

4.2 Probabilistic Subpopulation-Based Recourse

The GP-SCM approach in Sect. 4.1 allows us to average over an infinite number of (non-)linear structural equations, under the assumption of additive Gaussian noise. However, this assumption may still not hold under the true SCM, leading to sub-optimal or inefficient solutions to the recourse problem. Next, we remove any assumptions about the structural equations, and propose a second approach that does not aim to approximate an individualised counterfactual distribution, but instead considers the effect of interventions on a subpopulation defined by certain shared characteristics with the given (factual) individual \(\mathbf {x}^\texttt {F}\). The key idea behind this approach resembles the notion of conditional average treatment effects (CATE) [1] (illustrated in Fig. 4) and is based on the fact that any intervention \(do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\) only influences the descendants \(\text {d}(\mathcal {I})\) of the intervened-upon variables, while the non-descendants \(\text {nd}(\mathcal {I})\) remain unaffected. Thus, when evaluating an intervention, we can condition on \(\mathbf {X}_{\text {nd}(\mathcal {I})}\,=\,\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}\), thereby selecting a subpopulation of individuals similar to the factual subject.

Specifically, we propose to solve the following subpopulation-based recourse optimization problem

$$\begin{aligned} \begin{aligned}&\min _{a\,=\,do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\in \mathcal {F}}\quad \text {cost}^\texttt {F}(a) \\&\mathop {\mathrm {subject}\;\mathrm {to}}\quad \mathbb {E}_{\mathbf {X}_{\text {d}(\mathcal {I})}|do(\mathbf {X}_\mathcal {I}\,=\, {\boldsymbol{\theta }}), \mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}}\big [h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {X}_{\text {d}(\mathcal {I})}\big )\big ] \ge \texttt {thresh}(a), \end{aligned} \end{aligned}$$
(15)

where, in contrast to (13), the expectation is taken over the corresponding interventional distribution.

In general, this interventional distribution does not match the conditional distribution, i.e.,

$$P_{\mathbf {X}_{\text {d}(\mathcal {I})}\vert do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}), \mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}}\ne P_{\mathbf {X}_{\text {d}(\mathcal {I})}\vert \mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}, \mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}},$$

because some spurious correlations in the observational distribution do not transfer to the interventional setting. For example, in Fig. 2b we have that

$$P_{X_2\vert do(X_1\,=\,x_1,X_3\,=\,x_3)}\,=\,P_{X_2\vert X_1\,=\,x_1}\ne P_{X_2\vert X_1\,=\,x_1,X_3\,=\,x_3}.$$

Fortunately, the interventional distribution can still be identified from the observational one, as stated in the following proposition.

Proposition 6

Subject to causal sufficiency, \(P_{\mathbf {X}_{\text {d}(\mathcal {I})}\vert do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}), \mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}}\) is observationally identifiable (i.e., computable from the observational distribution) via:

$$\begin{aligned} p\big (\mathbf {X}_{\text {d}(\mathcal {I})}\vert do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}), \mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}\big )\,=\, \left. \prod _{r\in \text {d}(\mathcal {I})} p\left( X_{r} \vert \mathbf {X}_{pa(r)}\right) \right| _{\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }}, \mathbf {X}_{\text {nd}(\mathcal {I})}\,=\,\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}}. \end{aligned}$$
(16)
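Proposition 6 licenses a simple ancestral-sampling procedure; a sketch, where the `cond_sample` interface (one draw from an estimate of \(p(x_r\vert \mathbf {x}_{\text {pa}(r)})\), e.g., the CVAEs introduced below) is an assumption:

```python
# Sampling from the interventional distribution (16): fix X_I to theta,
# keep the factual values of the non-descendants, and draw each descendant
# from an estimate of its stable conditional p(x_r | pa(r)).
def sample_interventional(x_f, theta, descendants, parents, cond_sample):
    x = dict(x_f)                  # non-descendants keep factual values
    x.update(theta)                # do(X_I = theta)
    for r in descendants:          # topological order over d(I)
        x[r] = cond_sample(r, {j: x[j] for j in parents[r]})
    return x
```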

As evident from Proposition 6, tackling the optimization problem in (15) in the general case (i.e., for arbitrary graphs and intervention sets \(\mathcal {I}\)) requires estimating the stable conditionals \(P_{X_r\vert \mathbf {X}_{\text {pa}(r)}}\) (a.k.a. causal Markov kernels) in order to compute the interventional expectation via (16). For convenience (see Sect. 4.3 for details), here we opt for latent-variable implicit density models, but other conditional density estimation approaches may also be used [e.g., 7, 10, 68]. Specifically, we model each conditional \(p(x_r\vert \mathbf {x}_{\text {pa}(r)})\) with a conditional variational autoencoder (CVAE) [62] as:

$$\begin{aligned} p(x_r\vert \mathbf {x}_{\text {pa}(r)}) \approx p_{\psi _r}(x_r\vert \mathbf {x}_{\text {pa}(r)}) \,=\, \int p_{\psi _r}(x_r\vert \mathbf {x}_{\text {pa}(r)}, \mathbf {z}_r) p(\mathbf {z}_r) d\mathbf {z}_r, \quad p(\mathbf {z}_r):=\mathcal {N}(\mathbf {0}, \mathbf {I}). \end{aligned}$$
(17)

To facilitate sampling \(x_r\) (and in analogy to the deterministic mechanisms \(f_r\) in SCMs), we opt for deterministic decoders in the form of neural nets \(D_r\) parametrised by \(\psi _r\), i.e., \(p_{\psi _r}(x_r\vert \mathbf {x}_{\text {pa}(r)}, \mathbf {z}_r) \,=\, \delta (x_r - D_r(\mathbf {x}_{\text {pa}(r)}, \mathbf {z}_r; \psi _r))\), and rely on variational inference [77], amortised with approximate posteriors \(q_{\phi _r}(\mathbf {z}_r|x_r, \mathbf {x}_{\text {pa}(r)})\) parametrised by encoders in the form of neural nets with parameters \(\phi _r\). We learn both the encoder and decoder parameters by maximising the evidence lower bound (ELBO) using stochastic gradient descent [11, 30, 31, 50]. For further details, we refer to Appendix D of [27].
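For concreteness, a minimal single-kernel CVAE in the spirit of (17); layer sizes, latent dimension, and the squared-error reconstruction term (a fixed-variance Gaussian likelihood rather than the idealised deterministic-decoder delta likelihood) are our illustrative choices:

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """One Markov kernel p(x_r | pa(r)): Gaussian encoder + decoder D_r."""
    def __init__(self, d_pa, d_z=2, d_h=32):
        super().__init__()
        self.d_z = d_z
        self.enc = nn.Sequential(nn.Linear(1 + d_pa, d_h), nn.ReLU(),
                                 nn.Linear(d_h, 2 * d_z))
        self.dec = nn.Sequential(nn.Linear(d_z + d_pa, d_h), nn.ReLU(),
                                 nn.Linear(d_h, 1))

    def elbo(self, x_r, x_pa):
        mu, logvar = self.enc(torch.cat([x_r, x_pa], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam.
        recon = -((x_r - self.dec(torch.cat([z, x_pa], -1))) ** 2).sum(-1)
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(-1)
        return (recon - kl).mean()   # maximise with SGD/Adam

    def sample(self, x_pa):
        """Draw x_r | pa(r) by pushing z ~ N(0, I) through the decoder."""
        z = torch.randn(x_pa.shape[0], self.d_z)
        return self.dec(torch.cat([z, x_pa], -1))
```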

Remark 2

The collection of CVAEs can be interpreted as learning an approximate SCM of the form

$$\begin{aligned} \mathcal {M}_\textsc {cvae}: \quad \mathbf {S}\,=\,\{X_r := D_r(\mathbf {X}_{\text {pa}(r)}, \mathbf {z}_r;\psi _r)\}_{r=1}^d, \quad \mathbf {z}_r\sim \mathcal {N}(\mathbf {0}, \mathbf {I})\quad \forall r\in [d] \end{aligned}$$
(18)

However, this family of SCMs may not allow us to identify the true SCM (provided it can be expressed as above) from data without additional assumptions. Moreover, exact posterior inference over \(\mathbf {z}_r\) given \(\mathbf {x}^\texttt {F}\) is intractable, and we need to resort to approximations instead. It is thus unclear whether sampling from \(q_{\phi _r}(\mathbf {z}_r\vert x^{\texttt {F}}_r, \mathbf {x}^\texttt {F}_{\text {pa}(r)})\) instead of from \(p(\mathbf {z}_r)\) in (17) can be interpreted as a counterfactual within (18). For further discussion of such “pseudo-counterfactuals”, we refer to Appendix C of [27].

4.3 Solving the Probabilistic Recourse Optimization Problem

We now discuss how to solve the resulting optimization problems in (13) and (15). First, note that both problems differ only in the distribution over which the expectation in the constraint is taken: in (13) it is the counterfactual distribution of the descendants given in Proposition 5, while in (15) it is the interventional distribution identified in Proposition 6. In either case, computing the expectation for an arbitrary classifier h is intractable. Here, we approximate these integrals via Monte Carlo by sampling \(\mathbf {x}_{\text {d}(\mathcal {I})}^{(m)}\) from the interventional or counterfactual distributions resulting from \(a\,=\,do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\), i.e.,

$$\begin{aligned} \mathbb {E}_{\mathbf {X}_{\text {d}(\mathcal {I})\vert {\boldsymbol{\theta }}}} \big [h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {X}_{\text {d}(\mathcal {I})}\big )\big ]\approx \frac{1}{M} \sum _{m\,=\,1}^M h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {x}_{\text {d}(\mathcal {I})}^{(m)}\big ). \end{aligned}$$

Brute-Force Approach. One way to solve (13) and (15) is to (i) iterate over \(a\in \mathcal {F}\), with \(\mathcal {F}\) being a finite set of feasible actions (possibly as a result of discretizing a continuous search space); (ii) approximately evaluate the constraint via Monte Carlo; and (iii) select a minimum-cost action amongst all evaluated candidates satisfying the constraint. However, this may be computationally prohibitive and yield suboptimal interventions due to discretisation.

Gradient-Based Approach. Recall that, for actions of the form \(a\,=\,do(\mathbf {X}_\mathcal {I}\,=\,{\boldsymbol{\theta }})\), we need to optimize over both the intervention targets \(\mathcal {I}\) and the intervention values \({\boldsymbol{\theta }}\). Selecting targets is a hard combinatorial optimization problem, as there are \(2^{d'}\) possible choices for \(d'\le d\) actionable features, with a potentially infinite number of intervention values. We therefore consider different choices of targets \(\mathcal {I}\) in parallel, and propose a gradient-based approach suitable for differentiable classifiers to efficiently find an optimal \({\boldsymbol{\theta }}\) for a given intervention set \(\mathcal {I}\).Footnote 9 In particular, we first rewrite the constrained optimization problem in unconstrained form using the Lagrangian [29, 33]:

$$\begin{aligned} \mathcal {L}({\boldsymbol{\theta }},\lambda ):=\text {cost}^\texttt {F}(a) + \lambda \big (\texttt {thresh}(a) - \mathbb {E}_{\mathbf {X}_{\text {d}(\mathcal {I})\vert {\boldsymbol{\theta }}}}\big [h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {X}_{\text {d}(\mathcal {I})}\big )\big ] \big ). \end{aligned}$$
(19)

We then solve the saddle point problem \(\min _{{\boldsymbol{\theta }}} \max _\lambda \mathcal {L}({\boldsymbol{\theta }},\lambda )\) arising from (19) with stochastic gradient descent [11, 30]. Since both the GP-SCM counterfactual (12) and the CVAE interventional distributions (17) admit a reparametrization trick [31, 50], we can differentiate through the constraint:

$$\begin{aligned} \nabla _{\boldsymbol{\theta }}\mathbb {E}_{\mathbf {X}_{\text {d}(\mathcal {I})}}\big [h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {X}_{\text {d}(\mathcal {I})}\big )\big ] = \mathbb {E}_{\mathbf {z}\sim \mathcal {N}(\mathbf {0},\mathbf {I})}\big [\nabla _{\boldsymbol{\theta }}h\big (\mathbf {x}^\texttt {F}_{\text {nd}(\mathcal {I})}, {\boldsymbol{\theta }}, \mathbf {x}_{\text {d}(\mathcal {I})}(\mathbf {z})\big )\big ]. \end{aligned}$$
(20)

Here, \(\mathbf {x}_{\text {d}(\mathcal {I})}(\mathbf {z})\) is obtained by iteratively computing all descendants in topological order: either substituting \(\mathbf {z}\) together with the other parents into the decoders \(D_r\) for the CVAEs, or by using the Gaussian reparametrization \(x_{r}(\mathbf {z})\,=\,\mu +\sigma \mathbf {z}\) with \(\mu \) and \(\sigma \) given by (12) for the GP-SCM. A similar gradient estimator for the variance which enters \(\texttt {thresh}(a)\) for \(\gamma _\textsc {lcb}\ne 0\) is derived in Appendix F of [27].
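Putting the pieces together, a sketch of the saddle-point iteration for a fixed intervention set \(\mathcal {I}\); `expected_h` stands for a reparametrized, differentiable Monte Carlo estimate of the constraint expectation as in (20), and the squared \(\ell _2\) cost and fixed threshold are illustrative choices:

```python
import torch

# Saddle-point optimisation min_theta max_lambda of the Lagrangian (19).
def solve_recourse(theta0, x_f_I, expected_h, thresh=0.5,
                   steps=1_000, lr=0.01):
    theta = theta0.clone().requires_grad_(True)
    lam = torch.tensor(1.0, requires_grad=True)
    for _ in range(steps):
        lagr = ((theta - x_f_I) ** 2).sum() \
               + lam * (thresh - expected_h(theta))
        g_theta, g_lam = torch.autograd.grad(lagr, [theta, lam])
        with torch.no_grad():
            theta -= lr * g_theta   # descent step in theta
            lam += lr * g_lam       # ascent step in the multiplier
            lam.clamp_(min=0.0)     # keep lambda non-negative
    return theta.detach()
```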

5 Experiments

In our experiments, we compare different approaches for causal algorithmic recourse on synthetic and semi-synthetic data sets. Additional results can be found in Appendix B of [27].

5.1 Compared Methods

We compare the naive point-based recourse approaches \(\mathcal {M}_\textsc {lin}\) and \(\mathcal {M}_\textsc {kr}\) mentioned at the beginning of Sect. 4.1 as baselines with the proposed counterfactual GP-SCM \(\mathcal {M}_\textsc {gp}\) and the CVAE approach for subpopulation-based recourse (\(\textsc {cate}_\textsc {cvae}\)). For completeness, we also consider a \(\textsc {cate}_\textsc {gp}\) approach, as a GP can also be seen as modelling each conditional as a Gaussian,Footnote 10 and also evaluate the “pseudo-counterfactual” \(\mathcal {M}_\textsc {cvae}\) approach discussed in Remark 2. Finally, we report oracle performance for individualised \(\mathcal {M}_\star \) and subpopulation-based \(\textsc {cate}_\star \) recourse methods by sampling counterfactuals and interventions from the true underlying SCM. We note that a comparison with non-causal recourse approaches that assume independent features [58, 69] or consider causal relations to generate counterfactual explanations but not recourse actions [24, 39] is neither natural nor straightforward, because it is unclear whether descendant variables should be allowed to change, whether keeping their value constant should incur a cost, and, if so, how much; cf. [28].

Table 1. Experimental results for the gradient-based approach on different 3-variable SCMs. We show average performance \(\pm 1\) standard deviation for \(N_\text {runs} = 100\), \(N_\text {MC-samples}=100\), and \(\gamma _\textsc {lcb}=2\).

5.2 Metrics

We compare recourse actions recommended by the different methods in terms of cost, computed as the \(\ell _2\)-norm between the intervention \({\boldsymbol{\theta }}_\mathcal {I}\) and the factual value \(\mathbf {x}^\texttt {F}_\mathcal {I}\), normalised by the range of each feature \(r\in \mathcal {I}\) observed in the training data; and validity, computed as the percentage of individuals for which the recommended actions result in a favourable prediction under the true (oracle) SCM. For our probabilistic recourse methods, we also report the lower confidence bound \(\text {LCB}:=\mathbb {E}[h]-\gamma _{\textsc {lcb}}\sqrt{\text {Var}[h]}\) of the selected action under the given method.
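A sketch of these two metrics (the function interfaces are ours):

```python
import numpy as np

def cost(theta_I, x_f_I, feature_ranges_I):
    """Normalised l2 cost of the intervention values vs. factual values."""
    return np.linalg.norm((theta_I - x_f_I) / feature_ranges_I)

def validity(individuals, oracle_scf, h):
    """Fraction of individuals whose recommended action a flips h under
    the counterfactual computed in the true (oracle) SCM."""
    return np.mean([h(oracle_scf(x_f, a)) >= 0.5 for x_f, a in individuals])
```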

5.3 Synthetic 3-Variable SCMs Under Different Assumptions

In our first set of experiments, we consider three classes of SCMs over three variables with the same causal graph as in Fig. 2b. To test the robustness of the different methods to assumptions about the form of the true structural equations, we consider a linear SCM, a non-linear ANM, and a more general, multi-modal SCM with non-additive noise (see the illustrative sketch below). For further details on their exact form, we refer to Appendix E of [27].
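For intuition only, the mechanisms below illustrate how the three classes differ for a single variable \(X_3\) with parents \(X_1, X_2\) and noise \(U_3\); these are assumed, illustrative equations, not the ones used in the experiments (which are given in Appendix E of [27]):

```python
import numpy as np

def linear(x1, x2, u3):
    # linear SCM: linear in the parents, additive noise
    return 0.5 * x1 + 1.0 * x2 + u3

def nonlinear_anm(x1, x2, u3):
    # non-linear ANM: arbitrary f(parents), still additive noise
    return np.sin(x1) + x2 ** 2 + u3

def non_additive(x1, x2, u3):
    # non-additive noise: u3 interacts with the parents, yielding
    # a multi-modal conditional distribution
    return np.where(u3 > 0, x1 * x2, -x1)
```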

Results are shown in Table 1. We observe that the point-based recourse approaches perform (relatively) well in terms of both validity and cost when their underlying assumptions are met (i.e., \(\mathcal {M}_\textsc {lin}\) on the linear SCM and \(\mathcal {M}_\textsc {kr}\) on the non-linear ANM). Otherwise, validity drops significantly, as expected (see, e.g., the results of \(\mathcal {M}_\textsc {lin}\) on the non-linear ANM, or of \(\mathcal {M}_\textsc {kr}\) on the non-additive SCM). Moreover, the inferior performance of \(\mathcal {M}_\textsc {kr}\) compared to \(\mathcal {M}_\textsc {lin}\) on the linear SCM suggests an overfitting problem, which does not occur for its more conservative probabilistic counterpart \(\mathcal {M}_\textsc {gp}\). Generally, the individualised approaches \(\mathcal {M}_\textsc {gp}\) and \(\mathcal {M}_\textsc {cvae}\) perform very competitively in terms of cost and validity, especially on the linear and non-linear ANMs. The subpopulation-based \(\textsc {cate}\) approaches, on the other hand, perform particularly well on the challenging non-additive SCM (on which the assumptions of the GP approaches are violated), where \(\textsc {cate}_\textsc {cvae}\) is the only non-oracle method to achieve perfect validity. As expected, the subpopulation-based approaches generally incur higher cost than the individualised ones, since the latter aim to achieve recourse only for a given individual while the former do so for an entire group (see Fig. 4).

Fig. 5. Assumed causal graph for the semi-synthetic loan approval dataset.

5.4 Semi-synthetic 7-Variable SCM for Loan-Approval

We also test our methods on a larger semi-synthetic SCM inspired by the German Credit UCI dataset [43]. We consider the variables age A, gender G, education-level E, loan amount L, duration D, income I, and savings S, with the causal graph shown in Fig. 5. We model age A, gender G, and loan duration D as non-actionable variables; however, we consider D to be mutable, i.e., while it cannot be manipulated directly, it is allowed to change (e.g., as a consequence of an intervention on L). The SCM includes linear and non-linear relationships, as well as different types of variables and noise distributions, and is described in more detail in Appendix B of [27].
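To make these actionability constraints concrete, here is a minimal sketch (Python; the variable names follow the text, but the helper itself is an illustrative assumption rather than the paper's code) of how the feasible intervention sets for this SCM could be encoded:

```python
# Variables of the 7-variable loan SCM (Sect. 5.4).
VARIABLES = {"A", "G", "E", "L", "D", "I", "S"}

# A and G are non-actionable and do not change; D is also non-actionable but
# mutable, i.e., it may change as a downstream consequence (e.g., of
# intervening on L) yet cannot be intervened on directly.
NON_ACTIONABLE = {"A", "G", "D"}
ACTIONABLE = VARIABLES - NON_ACTIONABLE   # {"E", "L", "I", "S"}

def is_feasible(intervention_targets):
    """An action is feasible only if all its targets are actionable."""
    return set(intervention_targets) <= ACTIONABLE
```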

Table 2. Experimental results for the 7-variable SCM for loan-approval. We show average performance \(\pm 1\) standard deviation for \(N_\text {runs} = 100\), \(N_\text {MC-samples}=100\), and \(\gamma _\textsc {lcb}=2.5\). For linear and non-linear logistic regression as classifiers, we use the gradient-based approach, whereas for the non-differentiable random forest classifier we rely on the brute-force approach (with 10 discretised bins per dimension) to solve the recourse optimisation problems.

The results are summarised in Table 2, where we observe that the insights discussed above similarly apply to data generated from a more complex SCM and to different classifiers.
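As a sketch of the brute-force procedure referenced in Table 2 (the interfaces and candidate-set representation below are illustrative assumptions, not the paper's implementation), one can exhaustively enumerate discretised actions and keep the cheapest one that satisfies the recourse constraint:

```python
import itertools
import numpy as np

def brute_force_recourse(x_F, target_sets, ranges, bins, satisfies_lcb, cost):
    # Exhaustive search for a non-differentiable classifier: for each
    # candidate intervention set I, lay a grid with `bins` values per
    # dimension (10 in Table 2) and keep the cheapest action whose LCB
    # constraint is satisfied.
    best_action, best_cost = None, np.inf
    for I in target_sets:                 # e.g., [("L",), ("I",), ("I", "S")]
        grids = [np.linspace(*ranges[var], bins) for var in I]
        for theta in itertools.product(*grids):
            action = (I, np.asarray(theta))
            c = cost(action, x_F)
            if c < best_cost and satisfies_lcb(action, x_F):
                best_action, best_cost = action, c
    return best_action, best_cost
```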

Fig. 6. Trade-off between validity and cost, which can be controlled via \(\gamma _\textsc {lcb}\) for the probabilistic recourse methods.

Finally, we show the influence of \(\gamma _\textsc {lcb}\) on the performance of the proposed probabilistic approaches in Fig. 6. We observe that lower values of \(\gamma _\textsc {lcb}\) lead to lower validity (and cost), especially for the \(\textsc {cate}\) approaches. As \(\gamma _\textsc {lcb}\) increases, validity approaches that of the corresponding oracles \(\mathcal {M}_\star \) and \(\textsc {cate}_\star \), outperforming the point-based recourse approaches. In summary, our probabilistic recourse approaches are not only more robust, but also allow controlling the trade-off between validity and cost via \(\gamma _\textsc {lcb}\).

6 Discussion

In this paper, we have focused on the problem of algorithmic recourse, i.e., the process by which an individual can change their situation to obtain a desired outcome from a machine learning model. Using the tools of causal reasoning (i.e., structural interventions and counterfactuals), we have shown that, in their current form, counterfactual explanations only bring about agency for the individual to achieve recourse in unrealistic settings. In other words, counterfactual explanations imply recourse actions that may neither be optimal nor even result in favorably changing the prediction of \(h\) when acted upon. This shortcoming is primarily due to the lack of consideration of the causal relations governing the world and, thus, the failure to model the downstream effect of actions on the predictions of the machine learning model. Indeed, although “counterfactual” is a term from causal language, we observed that existing approaches fall short in terms of taking causal reasoning into account when generating counterfactual explanations and the subsequent recourse actions. Thus, building on the statement by Wachter et al. [76] that counterfactual explanations “do not rely on knowledge of the causal structure of the world,” it is perhaps more appropriate to refer to existing approaches as contrastive, rather than counterfactual, explanations [14, 40]. See [26, §2] for more discussion.

To directly take the causal consequences of actions into account, we have proposed a fundamental reformulation of the recourse problem, where actions are performed as interventions and we seek to minimize the cost of performing actions in a world governed by a set of (physical) laws captured in a structural causal model. Our proposed formulation in (4), complemented with several examples and a detailed discussion, allows for recourse through minimal interventions (MINT) that, when performed, result in a structural counterfactual that favourably changes the output of the model.

The primary limitation of this formulation in (4) is its reliance on the true causal model of the world, subsuming both the graph and the structural equations. In practice, the underlying causal model is rarely known, which suggests that the counterfactual constraint in (4), i.e., \(\mathbf {x}^\texttt {SCF}(a) := \mathbf {x}(a) \,|\, \mathbf {x}^\texttt {F}= \mathbf {S}^{a}(\mathbf {S}^{-1}({\mathbf {x}^\texttt {F}}))\), may not be (deterministically) identifiable. As a negative result, we showed that algorithmic recourse cannot be guaranteed in the absence of perfect knowledge about the underlying SCM governing the world, which unfortunately is not available in practice. To address this limitation, we proposed two probabilistic approaches to achieve recourse under more realistic assumptions. In particular, we derived (i) an individual-level recourse approach based on GPs that approximates the counterfactual distribution by averaging over the family of additive Gaussian SCMs; and (ii) a subpopulation-based approach, which assumes that only the causal graph is known and makes use of CVAEs to estimate the conditional average treatment effect of an intervention on a subpopulation of individuals similar to the one seeking recourse. Our experiments showed that the proposed probabilistic approaches not only result in more robust recourse interventions than approaches based on point estimates of the SCM, but also allow trading off validity against cost.

Assumptions, Limitations, and Extensions. Throughout the present work, we have assumed a known causal graph and causal sufficiency. While this may not hold in all settings, it is the minimal set of assumptions necessary for causal reasoning from observational data alone. Access to instrumental variables or experimental data may help further relax these assumptions [3, 13, 66]. Moreover, if only a partial graph is available or some relations are known to be confounded, one will need to restrict recourse actions to the subset of interventions that are still identifiable [59, 60, 67]. Alternatively, violations of causal sufficiency could be addressed by relying on latent variable models to estimate confounders from multiple causes [78] or proxy variables [38], or by working with bounds on causal effects instead [5, 65, 74].

Perhaps more concerningly, our work highlights the implicit causal assumptions made by existing approaches (i.e., that of feature independence, or of feasible and cost-free interventions), which may portray a false sense of recourse guarantees where none exist (see Example 2 and all of Sect. 3.1). Our work aims to highlight these imperfect assumptions and to offer an alternative formulation, backed with proofs and demonstrations, which would guarantee recourse if the assumptions about the causal structure of the world were satisfied. Future research on causal algorithmic recourse may benefit from the rich literature in causality that has developed methods to verify and perform inference under various assumptions [45, 48].

This is not to say that counterfactual explanations should be abandoned altogether. On the contrary, we believe that counterfactual explanations hold promise for “guided audit of the data” [76] and for evaluating various desirable model properties, such as robustness [21, 58] or fairness [20, 25, 58, 69, 75]. Besides this, it has been shown that designers of interpretable machine learning systems use counterfactual explanations for predicting model behavior [34] or uncovering inaccuracies in the data profile of individuals [70]. Complementing these uses of counterfactual explanations, we offer minimal interventions as a way to guarantee algorithmic recourse in general settings, a guarantee that counterfactual explanations alone do not provide.

On the Counterfactual vs Interventional Nature of Recourse. Given that we address two different notions of recourse—counterfactual/individualised (rung 3) vs. interventional/subpopulation-based (rung 2)—one may ask which framing is more appropriate. Since the main difference is whether the background variables \(\mathbf {U}\) are assumed fixed (counterfactual) or not (interventional) when reasoning about actions, we believe that this question is best addressed by considering the type of environment and the interpretation of \(\mathbf {U}\): if the environment is static, or if \(\mathbf {U}\) (mostly) captures unobserved information about the individual, the counterfactual notion seems to be the right one; if, on the other hand, \(\mathbf {U}\) also captures environmental factors which may change, e.g., between consecutive loan applications, then the interventional notion of recourse may be more appropriate. In practice, both notions may be present (for different variables), and the proposed approaches can be combined depending on the available domain knowledge, since each parent-child causal relation is treated separately. We emphasise that the subpopulation-based approach is also practically motivated by a reluctance to make (parametric) assumptions about the structural equations, which are untestable but necessary for counterfactual reasoning. It may therefore help avoid problems of misspecification, even when counterfactual recourse is desired, as demonstrated experimentally for the non-additive SCM.

7 Conclusion

In this work, we explored one of the main, but often overlooked, objectives of explanations: as a means to allow people to act rather than merely understand. Using counterexamples and the theory of structural causal models (SCMs), we showed that actionable recommendations cannot, in general, be inferred from counterfactual explanations, and that this shortcoming is due to the lack of consideration of the causal relations governing the world and, thus, the failure to model the downstream effect of actions on the predictions of the machine learning model. Instead, we proposed a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions (MINT), and presented a new optimization formulation for the common class of additive noise models. Our technical contributions were complemented with an extensive discussion on the form, feasibility, and scope of interventions in real-world settings. In follow-up work, we further investigated the epistemological differences between counterfactual explanations and consequential recommendations and argued that their technical treatment requires consideration at different levels of the causal history [52] of events [26].

Whereas MINT provides exact recourse under strong assumptions (requiring the true SCM), we then explored how to offer recourse under milder, more realistic assumptions (requiring only the causal graph). We presented two probabilistic approaches that offer recourse with high probability. The first captures uncertainty over the structural equations under additive Gaussian noise and uses Bayesian model averaging to estimate the counterfactual distribution. The second removes any assumptions on the structural equations by instead computing the average effect of recourse actions on individuals similar to the person seeking recourse, leading to a novel subpopulation-based interventional notion of recourse. We then derived a gradient-based procedure for selecting optimal recourse actions and empirically showed that the proposed approaches lead to more reliable recommendations under imperfect causal knowledge than non-probabilistic baselines. This contribution is important as it enables recourse recommendations to be generated in more practical settings and under uncertain assumptions.

As a final note, while for simplicity we have focused in this chapter on credit loan approvals, recourse has potential applications in other domains such as healthcare [8, 9, 17, 51], justice (e.g., pretrial bail) [4], and other settings (e.g., hiring) [12, 44, 57] where actionable recommendations for individuals are sought.