## Abstract

Explanations are backed by many different relations: causation, grounding, and arguably others too. But why are these different relations capable of backing explanations? In virtue of what are they explanatory? In this paper, I propose and defend a monistic account of explanation-backing relations. On my account, there is a single relation which backs all cases of explanation, and which explains why those other relations are explanation-backing.


## Notes

- 1.
This assumes, of course, that grounding is a relation.

- 2.
Following Schaffer (2016, pp. 83–87) and Strevens (2008, p. 6), I use the term ‘explanation’ in reference to the propositional objects provided by explanatory acts. I do not use the term ‘explanation’ in reference to the explanatory acts themselves, or our concept of explanation, or the worldly relation which backs explanation. The worldly relation which backs explanation is what I am calling the explanatory determination relation.

- 3.
EPM could also be formulated using ‘because’ (Skow 2016) rather than ‘explains’.

- 4.
In addition, EPM is independent of exactly what explanation-backing relations there are: it is compatible with views that posit lots of explanation-backing relations, and it is also compatible with views that posit only one. But the fewer explanation-backing relations there are, the easier it is to argue for EPM: if just causation backs explanation, for example, then EPM follows automatically. So in order to stack the deck against EPM, I will assume that there are many different explanation-backing relations.

- 5.
There are other accounts. For instance, one might take backing to be a primitive nonexplanatory relation that obtains between explanation-backing relations and explanations. Or one might take backing to be a kind of conceptual or analytic connection between worldly relations and the explanations they back. Or one might take backing to be grounding: a relation backs an explanation just in case that relation grounds that explanation. These accounts of backing are worth exploring, but for brevity’s sake, I will not do so here.

- 6.
More formally, let *e* be an explanation; so *e* is a proposition of the form ‘\(a_{1}\), \(a_{2}\), ..., \(a_{n}\) explain *b*’. Let *R* be a relation such that \(Ra_{1}\ldots a_{n}b\) holds. The proposition that \(Ra_{1}\ldots a_{n}b\) is what I call an ‘instance’ of *R*. Then relation *R* ‘backs’ explanation *e* just in case instance \(Ra_{1}\ldots a_{n}b\) backs *e*. And \(Ra_{1}\ldots a_{n}b\) ‘backs’ *e* just in case \(Ra_{1}\ldots a_{n}b\) explains why *e* holds.

- 7.
It is worth pointing out that for at least two reasons, \({\mathcal {E}}\) is not the grounding relation. First, the Backing condition in EPM implies that \({\mathcal {E}}\) backs the rock-throwing explanation. So by the above account of backing—in particular, the account described in footnote 6—the rock-throwing explanation is backed by \({\mathcal {E}}(t;s)\). Therefore, *t* and *s* stand in the relation \({\mathcal {E}}\). Grounding is usually assumed to be synchronic: it obtains only between items at the same time. Therefore, *t* and *s* do not stand in the grounding relation, since *t* is an event at one time and *s* is an event at another. And so \({\mathcal {E}}\) is not the relation of ground. Second, relatedly but more generally, \({\mathcal {E}}\) and the grounding relation have different features. For instance, as I just argued, \({\mathcal {E}}\) can relate items at different times. And unlike grounding, \({\mathcal {E}}\) is non-necessitating: \({\mathcal {E}}\) can obtain between items when one merely raises the probability of the other, for instance.
Though I take \({\mathcal {E}}\) to be primitive, I do not assume that it is metaphysically fundamental, or that it cannot be understood in terms of anything more basic. By ‘primitive’, I mean ‘methodologically primitive’ (Dasgupta 2017, pp. 83–84): it is useful for philosophical theorizing—in this case, for theorizing about explanation—but for present purposes, I leave it unanalyzed.

- 9.
The double-sum proof, discussed by Lange (2017, pp. 281–282), is as follows. Let *S* equal \(1+2+\cdots +n\). Then

$$\begin{aligned} S&= 1 + 2 + \cdots + n\\ S&= n + (n-1) + \cdots + 1\\ 2S&= (n+1) + (n+1) + \cdots + (n+1) \end{aligned}$$

where the expression in the final line features *n* copies of \(n+1\). Therefore, \(2S=n(n+1)\), and so \(1+2+\cdots +n=n(n+1)/2\).

- 10.
For lack of space, I will not present more cases here. But EC yields the intuitively correct verdicts in cases of trumping, preemption, double preemption, omissions, and simultaneous overdetermination. So EC is remarkably accurate.

- 11.
One might adopt the following alternative to Iterative: for all *p* and *q*, if *p* is an explanatory determiner of *q*, then *p*—but not *q*—is an explanatory determiner of the fact that *p* is an explanatory determiner of *q*. This condition is akin to an analogous condition for grounding (Bennett 2011).

- 12.
I adopt the following two characterizations of aptness, based on characterizations proposed by Hitchcock for his account of causation (2007). First, apt models do not imply false counterfactuals: if a model is apt, then all the counterfactuals it implies about explanatory determination are true. Second, apt models include enough variables to capture the essential structure of the situation being modeled: if there are enough variables in a model to capture the essential structure of explanatory determination, then that model is apt.

- 13.
While arguing that some mathematical explanations are not backed by any familiar determination relations, D’Alessandro leaves open the possibility that some unfamiliar relation might back the explanations he discusses (2020, p. 784). EPM posits just such a relation: \({\mathcal {E}}\).

- 14.
I am not quite sure what to make of explanations which appeal to essences. Those sorts of explanations sometimes strike me as unsatisfying: in this case, for instance, the appeal to essences does not feel like it solves—in a very satisfying way—the explanatory challenge at hand. But for present purposes, I will assume that explanations like these are perfectly legitimate.

- 15.
See Sect. 6 for a challenge to those who think that despite all I have done so far, explanatory determination is *still* not a suitable posit.

- 16.
Strictly speaking, neurotransmitter release only *partially* consists of the opening of synaptic vesicles, and the calcium influx only *partially* causes that opening.

- 17.
Or alternatively, this explanation seems to be backed by the constitution relation.

- 18.
Since I am interested in formalizing explanatory determination, rather than explanation, my formalization is based on Halpern and Pearl’s account of causation rather than their account of explanation.

- 19.
So for each *i* (\(1\le i\le n\)), \(x_{i}\) is in \(\mathcal {R}(X_{i})\).

- 20.
This definition is different from the corresponding one proposed by Halpern and Pearl (2005b). I adopt it because it has more intuitive implications for cases of explanatory determination.

- 21.
This definition of restriction allows for *non-recursive* structural equation models: these are models whose associated directed graph has loops. Those loops, in turn, allow the explanatory determination relation to be non-asymmetric. And that is a good thing: explanations, I think, can be circular, and so explanatory determination can be circular too.

- 22.
In other words, the complete description \((\mathbf {u},\mathbf {v})\) assigns values to all variables; so let \(\mathbf {z}^{\prime }\) be the values which that complete description assigns to the variables in \(\mathbf {Z}^{\prime }\).

## References

Baker, A. (2005). Are there genuine mathematical explanations of physical phenomena? *Mind*, *114*(454), 223–238.

Bennett, K. (2011). Construction area (no hard hat required). *Philosophical Studies*, *154*, 79–104.

Bennett, K. (2017). *Making things up*. New York, NY: Oxford University Press.

Cartwright, N. (2007). *Hunting causes and using them: Approaches in philosophy and economics*. New York, NY: Cambridge University Press.

Craver, C. (2007). *Explaining the brain: Mechanisms and the mosaic unity of neuroscience*. New York, NY: Oxford University Press.

Craver, C. (2014). The ontic account of scientific explanation. In M. I. Kaiser, O. R. Scholz, D. Plenge, & A. Hüttemann (Eds.), *Explanation in the special sciences* (pp. 27–52). New York, NY: Springer.

D’Alessandro, W. (2020). Viewing-as explanations and ontic dependence. *Philosophical Studies*, *177*, 769–792.

Dasgupta, S. (2017). Constitutive explanation. *Philosophical Issues*, *27*, 74–97.

Grimm, S. (2006). Is understanding a species of knowledge? *The British Journal for the Philosophy of Science*, *57*, 515–535.

Halpern, J. Y., & Pearl, J. (2005a). Causes and explanations: A structural-model approach. Part I: Causes. *The British Journal for the Philosophy of Science*, *56*, 843–887.

Halpern, J. Y., & Pearl, J. (2005b). Causes and explanations: A structural-model approach. Part II: Explanations. *The British Journal for the Philosophy of Science*, *56*, 889–911.

Hempel, C. G. (1965). *Aspects of scientific explanation and other essays in the philosophy of science*. New York, NY: The Free Press.

Hitchcock, C. (2007). Prevention, preemption, and the principle of sufficient reason. *The Philosophical Review*, *116*(4), 495–532.

Kim, J. (1988). Explanatory realism, causal realism, and explanatory exclusion. *Midwest Studies in Philosophy*, *12*, 225–239.

Kvanvig, J. (2003). *The value of knowledge and the pursuit of understanding*. New York, NY: Cambridge University Press.

Lange, M. (2017). *Because without cause: Non-causal explanations in science and mathematics*. New York, NY: Oxford University Press.

Lewis, D. (1973). Causation. *The Journal of Philosophy*, *70*(17), 556–567.

Mantzavinos, C. (2016). *Explanatory pluralism*. New York, NY: Cambridge University Press.

Nickel, B. (2010). How general do theories of explanation need to be? *Noûs*, *44*(2), 305–328.

Rosen, G. (2015). Real definition. *Analytic Philosophy*, *56*(3), 189–209.

Schaffer, J. (2016). Grounding in the image of causation. *Philosophical Studies*, *173*, 49–100.

Schaffer, J. (2017). Laws for metaphysical explanation. *Philosophical Issues*, *27*, 302–321.

Skow, B. (2016). *Reasons why*. New York, NY: Oxford University Press.

Strevens, M. (2008). *Depth: An account of scientific explanation*. Cambridge, MA: Harvard University Press.

Strevens, M. (2013). No understanding without explanation. *Studies in History and Philosophy of Science*, *44*, 510–515.

Wilson, A. (2018). Metaphysical causation. *Noûs*, *52*(4), 723–751.

Wilson, J. (2014). No work for a theory of grounding. *Inquiry*, *57*(5–6), 535–579.

Woodward, J. (2003). *Making things happen*. New York, NY: Oxford University Press.

## Acknowledgements

Thanks to David Albert, Karen Bennett, Scott Brown, Laura Callahan, Sam Carter, Augie Faller, Chris Frugé, Verónica Gómez, Chris Hauser, Boris Kment, Barry Loewer, Jill North, Alex Roberts, Ezra Rubenstein, Ted Sider, Brad Skow, Michael Strevens, Alastair Wilson, James Woodward, Dean Zimmerman, the audience at the FraMEPhys Workshop on Explanatory Pluralism, the audience at the 2019 Central APA, the Rutgers metaphysics reading group, an anonymous reviewer, and especially Jonathan Schaffer, for much helpful feedback and discussion.


## Appendix

### Appendix

In this appendix, I formalize the explanatory determination relation using structural equation models. The definitions presented here run parallel to the definitions of causation proposed by Halpern and Pearl (2005a, b), as well as the formalisms in Schaffer (2016) and Wilson (2018).^{Footnote 18}

### Definition

(*Signature*) A *signature* is a triple \((\mathcal {U}, \mathcal {V},\mathcal {R})\), where \(\mathcal {U}\) is a finite set of variables – called the ‘exogenous variables’ – \(\mathcal {V}\) is a finite set of variables – called the ‘endogenous variables’ – and \(\mathcal {R}\) is a function which takes each *Y* in \(\mathcal {U}\cup \mathcal {V}\) to a non-empty set \(\mathcal {R}(Y)\) of values for *Y*.

### Definition

(*Explanatory model*) An *explanatory model* is a pair \((\mathcal {S},\mathcal {F})\), where \(\mathcal {S}=(\mathcal {U}, \mathcal {V},\mathcal {R})\) is a signature and \(\mathcal {F}\) is a function which maps each endogenous variable *X* in \(\mathcal {V}\) to a function \(F_{X}:\Big (\big (\times _{U\in \mathcal {U}}\mathcal {R}(U)\big )\times \big (\times _{Y\in (\mathcal {V}\setminus \{X\})}\mathcal {R}(Y)\big )\Big )\rightarrow \mathcal {R}(X)\).
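To fix ideas, here is a minimal Python sketch of an explanatory model over binary variables. The variable names and the dict-based encoding of \(\mathcal {R}\) and \(\mathcal {F}\) are illustrative assumptions, not part of the formalism.

```python
# Minimal sketch of an explanatory model (S, F) for a rock-throwing case.
# All names here are hypothetical; dicts stand in for the functions R and F.

# Signature S = (U, V, R): exogenous variables, endogenous variables, and a
# range function R taking each variable to a non-empty set of values.
U = {"ThrowIntended"}                      # exogenous
V = {"Throw", "Shatter"}                   # endogenous
R = {"ThrowIntended": {0, 1}, "Throw": {0, 1}, "Shatter": {0, 1}}

# F maps each endogenous variable X to a structural function F_X, which
# takes an assignment of values to all the OTHER variables and returns a
# value in R(X).
F = {
    "Throw":   lambda vals: vals["ThrowIntended"],
    "Shatter": lambda vals: vals["Throw"],
}

model = ((U, V, R), F)   # the explanatory model (S, F)
print(F["Shatter"]({"ThrowIntended": 1, "Throw": 1}))   # prints 1
```

The design mirrors the definition: each \(F_{X}\) depends on every variable except *X* itself, so dependence structure is left implicit in which keys the lambda actually reads.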

### Definition

(*Context*) Let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model with signature \(\mathcal {S}=(\mathcal {U}, \mathcal {V},\mathcal {R})\). A *context* is an assignment function that maps each *U* in \(\mathcal {U}\) to a value *u* in \(\mathcal {R}(U)\). Denote a context by a vector \(\mathbf {u}\).

### Definition

(*Modified explanatory model*) Let \(\mathcal {S}=(\mathcal {U}, \mathcal {V},\mathcal {R})\) be a signature, let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model, let \(\mathbf {X}=(X_{1},\ldots ,X_{n})\) be a sequence of endogenous variables in \(\mathcal {V}\), and let \(\mathbf {x}=(x_{1},\ldots ,x_{n})\) be a sequence of values for those variables.^{Footnote 19} The *modified explanatory model* for *M* (relative to \(\mathbf {X}\) and \(\mathbf {x}\)) is the explanatory model \(M_{\mathbf {X}\leftarrow \mathbf {x}}=(S_{\mathbf {X}\leftarrow \mathbf {x}},\mathcal {F}^{\mathbf {X}\leftarrow \mathbf {x}})\), where \(S_{\mathbf {X}\leftarrow \mathbf {x}}=\Big (\mathcal {U},\mathcal {V},\mathcal {R}^{\mathbf {X}\leftarrow \mathbf {x}}\Big )\), \(\mathcal {R}^{\mathbf {X}\leftarrow \mathbf {x}}\) is the function which takes each variable *Y* in \(\mathcal {U}\cup (\mathcal {V}\setminus \mathbf {X})\) to its old range \(\mathcal {R}(Y)\) while taking each \(X_{i}\) to the new range \(\{x_{i}\}\), and \(\mathcal {F}^{\mathbf {X}\leftarrow \mathbf {x}}\) is the function which takes each \(X_{i}\) to the function \(F_{X_{i}}^{\mathbf {X}\leftarrow \mathbf {x}}=x_{i}\) while taking each *Y* in \(\mathcal {V}\setminus \mathbf {X}\) to a function \(F_{Y}^{\mathbf {X}\leftarrow \mathbf {x}}\) which is just the function \(F_{Y}\) where the values of the variables in \(\mathbf {X}\) have been set to the corresponding values in \(\mathbf {x}\).^{Footnote 20}
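The modification operation can be sketched as a transformation of the function dict: intervened variables get constant functions, and every other function reads the intervened values in place of the computed ones. A hedged illustration (hypothetical names, not the author's code):

```python
# Sketch of the modified model M_{X <- x}: each intervened variable X_i gets
# the constant function F^{X<-x}_{X_i} = x_i, and each remaining function
# F^{X<-x}_Y is F_Y with the values of the X variables set to x.

def modify(F, intervention):
    """Return the function dict F^{X<-x} for the intervention X <- x."""
    newF = {}
    for X, FX in F.items():
        if X in intervention:
            # Constant function: ignore the inputs, output x_i.
            newF[X] = (lambda c: (lambda vals: c))(intervention[X])
        else:
            # Same function, but with the intervened values substituted in.
            newF[X] = (lambda f: (lambda vals: f({**vals, **intervention})))(FX)
    return newF

# Example: set Throw to 0, overriding its usual dependence on ThrowIntended.
F = {
    "Throw":   lambda vals: vals["ThrowIntended"],
    "Shatter": lambda vals: vals["Throw"],
}
F_mod = modify(F, {"Throw": 0})
print(F_mod["Throw"]({"ThrowIntended": 1}))                 # prints 0
print(F_mod["Shatter"]({"ThrowIntended": 1, "Throw": 1}))   # prints 0
```

The extra lambda wrappers freeze the loop variables, a standard Python idiom for building closures inside a loop.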

### Definition

(*Restriction*) Let \(\mathcal {S}=(\mathcal {U}, \mathcal {V},\mathcal {R})\) be a signature, let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model, let \(\mathbf {u}\) be a context, and let *X* be an endogenous variable in \(\mathcal {V}\). The *restriction* of *X* (relative to \(\mathbf {u}\)) is the function \(F_{X}^{\mathbf {u}}:\big (\times _{Y\in (\mathcal {V}\setminus \{X\})}\mathcal {R}(Y)\big )\rightarrow \mathcal {R}(X)\) obtained by plugging the values of \(\mathbf {u}\) in for the corresponding exogenous variables in \(F_{X}:\Big (\big (\times _{U\in \mathcal {U}}\mathcal {R}(U)\big )\times \big (\times _{Y\in (\mathcal {V}\setminus \{X\})}\mathcal {R}(Y)\big )\Big )\rightarrow \mathcal {R}(X)\).^{Footnote 21}

### Definition

(*Complete description*) Let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model with signature \((\mathcal {U}, \mathcal {V},\mathcal {R})\). A *complete description* is an assignment function that maps each variable *X* in \(\mathcal {U}\cup \mathcal {V}\) to a value *x* in \(\mathcal {R}(X)\) such that the resulting values simultaneously satisfy each function \(F_{X}\). Denote a complete description by a vector \((\mathbf {u},\mathbf {v})\), where \(\mathbf {u}\) is a context and \(\mathbf {v}\) maps each variable in \(\mathcal {V}\) to a value in that variable’s range.

### Definition

(*Truth*) Let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model with signature \(\mathcal {S}=(\mathcal {U}, \mathcal {V},\mathcal {R})\), let \(\mathbf {u}\) be a context, let \(F_{Y}^{\mathbf {u}}\) be the restriction of the endogenous variable *Y* (relative to \(\mathbf {u}\)), let \(\mathbf {X}=(X_{1},\ldots ,X_{n})\) be a sequence of endogenous variables in \(\mathcal {V}\), let \(\mathbf {x}=(x_{1},\ldots ,x_{n})\) be a sequence of values for the corresponding variables in \(\mathbf {X}\), and let \((\mathbf {u},\mathbf {v})\) be a complete description.

- The sentence “*Y* has value *y*” is *true* in *M* (relative to \(\mathbf {u}\)) if and only if \(F_{Y}^{\mathbf {u}}\) is the constant function which outputs *y* for every input of values to its endogenous variables.
- The sentence “*Y* has value *y*” is *true* in *M* (relative to \((\mathbf {u},\mathbf {v})\)) if and only if the value of *Y* in \(\mathbf {v}\) is *y*.
- The counterfactual “If the \(\mathbf {X}\) had the corresponding values in \(\mathbf {x}\), then *Y* would have value *y*” is *true* in *M* (relative to \(\mathbf {u}\)) if and only if “*Y* has value *y*” is *true* in the modified explanatory model \(M_{\mathbf {X}\leftarrow \mathbf {x}}\) (relative to \(\mathbf {u}\)).
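These truth conditions can be operationalized: solve the model relative to a context by iterating the structural equations (which reaches a fixed point when the model is recursive), and evaluate a counterfactual by solving the modified model instead. A sketch under those assumptions, with hypothetical names:

```python
# Sketch of truth relative to a context u, and of counterfactual truth via
# the modified model. Assumes a recursive model over binary variables.

def solve(u_vals, v_names, F, rounds=10):
    """Values of all variables given context u_vals (fixed-point iteration)."""
    vals = dict(u_vals)
    for X in v_names:
        vals[X] = 0                      # arbitrary starting values
    for _ in range(rounds):
        vals.update({X: F[X]({k: v for k, v in vals.items() if k != X})
                     for X in v_names})
    return vals

def counterfactual(u_vals, v_names, F, intervention, Y, y):
    """True iff 'if X had values x, Y would have value y' holds, i.e. iff
    'Y has value y' is true in the modified model (relative to u)."""
    F_mod = {X: ((lambda c: (lambda vals: c))(intervention[X])
                 if X in intervention else F[X]) for X in v_names}
    return solve(u_vals, v_names, F_mod)[Y] == y

F = {"Throw": lambda vals: vals["ThrowIntended"],
     "Shatter": lambda vals: vals["Throw"]}
print(solve({"ThrowIntended": 1}, ["Throw", "Shatter"], F)["Shatter"])   # 1
print(counterfactual({"ThrowIntended": 1}, ["Throw", "Shatter"], F,
                     {"Throw": 0}, "Shatter", 0))                        # True
```

Fixed-point iteration is only guaranteed to settle for recursive models; the non-recursive models allowed by footnote 21 would need a more careful solver.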

### Definition

(*Explanatory determination in structural equation models*) Let \(M=(\mathcal {S},\mathcal {F})\) be an explanatory model with signature \((\mathcal {U}, \mathcal {V},\mathcal {R})\), let \(\mathbf {u}\) be a context, let \((\mathbf {u},\mathbf {v})\) be a complete description, let \(\mathbf {X}=(X_{1},\ldots ,X_{n})\) and *Y* be endogenous variables, let \(\mathbf {x}=(x_{1},\ldots ,x_{n})\) be values of the variables in \(\mathbf {X}\), and let *y* be a value of *Y*. Then \(\mathbf {X}=\mathbf {x}\) is an *explanatory determiner* of \(Y=y\) (relative to *M* and \((\mathbf {u},\mathbf {v})\)) just in case the following three conditions hold.

- (1)
The sentences “\(X_{1}\) has value \(x_{1}\)”, ..., “\(X_{n}\) has value \(x_{n}\)”, and “*Y* has value *y*” are true in *M*, relative to the complete description \((\mathbf {u},\mathbf {v})\).

- (2)
There exists a partition \((\mathbf {Z},\mathbf {W})\) of \(\mathcal {V}\) with \(\mathbf {X}\) a subset of \(\mathbf {Z}\), and there exists an assignment \((\mathbf {x}^{\prime },\mathbf {w}^{\prime })\) of values to the variables in \((\mathbf {X},\mathbf {W})\), such that the following two conditions hold.

  - (i)
  The counterfactual “If the \(\mathbf {X}\) variables had the corresponding values in \(\mathbf {x}^{\prime }\), and the \(\mathbf {W}\) variables had the corresponding values in \(\mathbf {w}^{\prime }\), then *Y* would have value *y*” is false in *M* (relative to \(\mathbf {u}\)).

  - (ii)
  For each subset \(\mathbf {Z}^{\prime }\) of \(\mathbf {Z}\), let \(\mathbf {z}^{\prime }\) be the values of the corresponding variables in \(\mathbf {Z}^{\prime }\) as specified by the complete description \((\mathbf {u},\mathbf {v})\).^{Footnote 22} Then the counterfactual “If the \(\mathbf {X}\) variables had the corresponding values in \(\mathbf {x}\), and the \(\mathbf {W}\) variables had the corresponding values in \(\mathbf {w}^{\prime }\), and the \(\mathbf {Z}^{\prime }\) variables had the corresponding values in \(\mathbf {z}^{\prime }\), then *Y* would have value *y*” is true in *M* (relative to \(\mathbf {u}\)).

With all that as background, here is the fully rigorous version of the Structural Equation Account of explanatory determination.

**Structural Equation Account (SEA)** For all *p* and *q*, *p* is an explanatory determiner of *q* just in case there is an apt explanatory model *M* such that \(\mathbf {X}=\mathbf {x}\) is an explanatory determiner of \(Y=y\) (relative to *M* and \((\mathbf {u},\mathbf {v})\)), where \(\mathbf {X}=\mathbf {x}\) represents *p*, \(Y=y\) represents *q*, and \((\mathbf {u},\mathbf {v})\) is a complete description of the actual values of the variables in *M*.
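For concreteness, conditions (1) and (2) can be brute-forced for small models. The following sketch is an illustrative check under simplifying assumptions (recursive model, binary ranges, exhaustive search over partitions and witnessing assignments); the helper names are hypothetical, and this is not the author's implementation.

```python
# Brute-force check of conditions (1) and (2) for explanatory determination,
# for small recursive models with binary variables. Illustrative only.
from itertools import combinations, product

def solve(u_vals, v_names, F, rounds=10):
    """Values of all variables given context u_vals (fixed-point iteration)."""
    vals = dict(u_vals)
    for X in v_names:
        vals[X] = 0
    for _ in range(rounds):
        vals.update({X: F[X]({k: v for k, v in vals.items() if k != X})
                     for X in v_names})
    return vals

def cf(u_vals, v_names, F, setting, Y, y):
    """Counterfactual truth: 'Y has value y' in the modified model."""
    F_mod = {X: ((lambda c: (lambda vals: c))(setting[X])
                 if X in setting else F[X]) for X in v_names}
    return solve(u_vals, v_names, F_mod)[Y] == y

def is_explanatory_determiner(u_vals, v_names, F, X_vars, x_vals, Y, y):
    actual = solve(u_vals, v_names, F)
    # Condition (1): X = x and Y = y are true relative to (u, v).
    if any(actual[X] != x for X, x in zip(X_vars, x_vals)) or actual[Y] != y:
        return False
    rest = [v for v in v_names if v not in X_vars and v != Y]
    # Condition (2): search for a partition (Z, W) with X (and, for
    # simplicity, Y) in Z, plus a witnessing assignment (x', w').
    for k in range(len(rest) + 1):
        for W in combinations(rest, k):
            Z = [v for v in rest if v not in W] + list(X_vars) + [Y]
            for xw in product([0, 1], repeat=len(X_vars) + len(W)):
                xp, wp = xw[:len(X_vars)], xw[len(X_vars):]
                s = {**dict(zip(X_vars, xp)), **dict(zip(W, wp))}
                # (i): under (x', w'), 'Y would have value y' is false.
                if cf(u_vals, v_names, F, s, Y, y):
                    continue
                # (ii): restoring X = x (keeping w') preserves Y = y, even
                # with any subset Z' of Z frozen at its actual values.
                base = {**dict(zip(X_vars, x_vals)), **dict(zip(W, wp))}
                if all(cf(u_vals, v_names, F,
                          {**base, **{Zv: actual[Zv] for Zv in Zp}}, Y, y)
                       for r in range(len(Z) + 1)
                       for Zp in combinations(Z, r)):
                    return True
    return False

F = {"Throw": lambda vals: vals["ThrowIntended"],
     "Shatter": lambda vals: vals["Throw"]}
print(is_explanatory_determiner({"ThrowIntended": 1}, ["Throw", "Shatter"],
                                F, ["Throw"], [1], "Shatter", 1))   # True
```

In the toy model, the witness for condition (2) is the intervention Throw \(\leftarrow 0\): it falsifies “Shatter would have value 1” as (i) requires, while restoring Throw \(\leftarrow 1\) restores the shattering as (ii) requires. Aptness, of course, is left outside the code: SEA quantifies over apt models, and aptness is a modeling judgment, not a computable predicate.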


## About this article

### Cite this article

Wilhelm, I. Explanatory priority monism. *Philos Stud* (2020). https://doi.org/10.1007/s11098-020-01478-z


### Keywords

- Metaphysics
- Explanation
- Causation
- Grounding