Abstract
According to the dual-process account of moral judgment, deontological and utilitarian judgments stem from two different cognitive systems. Deontological judgments are effortless, intuitive and emotion-driven, whereas utilitarian judgments are effortful, reasoned and dispassionate. The most notable evidence for dual-process theory comes from neuroimaging studies by Joshua Greene and colleagues. Greene has suggested that these empirical findings undermine deontology and support utilitarianism. It has been pointed out, however, that the most promising interpretation of his argument does not make use of the empirical findings. In this paper, I engage with recent attempts by Greene to vindicate the moral significance of dual-process theory and the supporting neuroscientific findings. I consider their potential moral significance with regard to three aspects of Greene’s case against deontology: the argument from morally irrelevant factors, the functionalist argument and the argument from confabulation. I conclude that Greene fails to demonstrate how neuroscience and dual-process theory in general can advance moral theorizing.
Notes
I am (like Berker and Greene) specifically considering the significance of the neuroscientific findings that support dual-process theory, rather than that of neuroscience in general.
Greene et al. [8].
For an overview of the evidence, see Greene [3].
Greene [10], p. 60, see also pp. 70–71.
Kahane [13], p. 106.
These are at least the two most natural and charitable interpretations of the argument put forth in Greene [10]. For further possible interpretations, see Berker [14]. For another general discussion of Greene’s argument, see Sauer [15]. A similar argument, based on the same empirical findings, was developed by Singer ([16]).
Berker [14], p. 319; Kahane [13, 17]; Mason [18]; Tersman [19]. Note that Greene acknowledges that utilitarianism depends on intuitions, too, but he takes these intuitions to be about general principles rather than particular actions and to differ psychologically from deontological intuitions ([1], pp. 19–20; [3], p. 724).
At least, he has not attempted to defend the argument from evolutionary history, and he has more recently expressed doubts concerning the evolutionary hypothesis ([4], p. 68).
Greene [20], p. 176.
Greene [6], p. 365. By contrast, a dilemma was originally classified as ‘personal’ if “the action in question (a) could reasonably be expected to lead to serious bodily harm (b) to a particular person or a member or members of a particular group of people (c) where this harm is not the result of deflecting an existing threat onto a different party.” ([5], p. 2107). While our responses are sensitive to the conjunction of two factors (personal force plus intention), the argument from morally irrelevant factors focuses primarily on the irrelevance of the former factor.
Berker [14].
I am here adopting the dialectic of Berker, who contends that Greene’s arguments either “rely on a shoddy inference” or rest on premises “that render the neuroscientific results irrelevant to the overall argument” ([14], p. 294).
It is probably fair to say that Greene’s notes on this issue, which are labelled as ‘work in progress’, are somewhat sketchy. Below, I present what I hope is their most charitable interpretation.
Berker [14], p. 325.
Berker [14], p. 325.
Kumar and Campbell [21], pp. 314–15.
Berker [14], p. 326.
Greene [1], p. 18.
Greene [1], p. 20.
Greene [1], p. 20.
Greene [1], p. 15.
Greene [3], p. 714.
Greene [2], pp. 1–27, pp. 293–295.
Bruni et al. call this the ‘collective usefulness’ view: “According to this view, certain forms of moral thinking are to be recommended because they serve instrumentally to further widely shared goals, such as a reduction in conflict, or an increase in social cohesion.” ([25], p. 160). Note that there is a long tradition in utilitarian thought of embracing at least some common-sense moral rules as useful rules-of-thumb (Sunstein [26], p. 533).
Greene [1], p. 21.
See note 33 above.
Greene seems to be aware of this problem and promises to address it in his book ([1], p. 24). But as I explain below, I find his treatment of these issues in his book unconvincing.
Greene [1], p. 11.
Greene [1], p. 11. Shortly after, he explicitly endorses the ‘argument from morally irrelevant factors’-interpretation suggested by Berker. And he writes that his characterization of the argument from morally irrelevant factors is modelled on the incest argument ([1], p. 15). Elsewhere ([3], p. 712), his presentation of the incest case is also embedded in a discussion of the argument from morally irrelevant factors.
Greene [1], p. 22.
Greene [3], p. 714.
This is also noted by Greene ([3], p. 714).
See Prinz [29], pp. 223–229. Greene could object that this is an inter-tribal conflict and as such not suited for our automatic mode, anyway. Indeed, at one point, Greene writes: “Of course, Us versus Them is a very old problem. But historically it’s been a tactical problem rather than a moral one.” ([2], p. 15) But this statement is puzzling. What does it mean for a problem to be a tactical rather than moral one? And why is the intra-tribal tragedy of the commons (presumably as ‘tactical’ a problem as one can imagine) a moral problem rather than a tactical one? And does this mean that ‘familiarity’ is not the decisive criterion, at least not the only one? In any case, even if we abstract from this specific problem with the cultural trial-and-error process, other problems remain.
Greene [3], p. 714.
Similarly, Bruni, Mameli and Rini conclude that Greene’s suggested heuristic is unconvincing until it has been corroborated by “a very ambitious empirical research program” ([25], p. 171).
Greene [2], p. 291.
Greene [2], p. 189.
Greene [2], pp. 213–217, p. 261.
Greene [2], pp. 224–245. Note that the evolutionary debunking arguments that feature in Greene’s book differ from his earlier evolutionary debunking arguments.
Similarly, Wielenberg [33], p. 914.
Greene [2], pp. 254–285.
Similarly, Tobia [34], p. 749. By contrast, the idea behind seeking shared ground is intelligible. Given that one of the problems to be solved is the conflict resulting from disagreement, identifying shared values may be a way of mitigating such conflicts. Interestingly, however, Greene himself appears to favor an epistemological rationale, which is less intelligible given the functionalist framework ([2], pp. 188–189).
Surprisingly, neither Berker nor Greene considers this option, even though Greene’s presentation of the argument clearly invokes the neuroscientific findings. Berker only remarks that there is no need to discuss this argument, as it presupposes the success of the debunking of deontological intuitions, which Berker disputes ([14], p. 315). But this is rather uncharitable. As just noted, Greene can still use the argument from morally irrelevant factors to debunk some deontological intuitions (while admitting its limitations) and then combine it with the argument from confabulation. If the latter uses the neuroscientific findings, this would suffice to refute Berker’s main criticism.
Greene [10], p. 68.
Greene [10], p. 68.
In particular Greene et al. [5].
Greene [3], p. 718, emphasis added. Note that deontologists are claimed to rationalize not only their deontological intuitions but also their consequentialist intuitions. After all, deontologists provide explanations of why Footbridge is morally different from Switch, rather than just of why Footbridge calls for a deontological response.
Besides, the argument is an ad hominem attack, which, even if sound, has no place in scholarly debate [42].
Greene [1], pp. 8 and 14. His 2014 paper is also obviously intended to demonstrate the moral significance of dual-process theory.
Greene [1], p. 4.
Greene [1], p. 17.
Hence the title of his paper, “The Normative Insignificance of Neuroscience”. See also in particular Berker [14], pp. 294, 325–327.
And Berker can hardly be faulted for not appreciating the normative significance of this sort of experimental moral psychology, given that Greene did not much emphasize it until after Berker had raised his concerns about the normative significance of neuroscience. The focus had no doubt been on the neuroscientific findings and dual-process theory. The question of which factors trigger deontological responses initially played only a subordinate role; it was necessary to make a provisional guess on this question in order to test the dual-process hypothesis ([5], p. 2107; see also [1], p. 27; [6]; [3], p. 701 n17). It was only later that Greene and colleagues set out to develop a more precise account of the principles that govern people’s responses to trolley dilemmas [45]. This study had already been published when Berker wrote his article, and he mentions it in a footnote ([14], p. 323 n73). But Greene’s most complete statement of why the empirical findings matter, his ‘The Secret Joke of Kant’s Soul’ [10], predates this study and does not take it into account.
Greene [46], p. 849. Greene initially took the findings to refute moral realism rather than deontology.
References
Greene, J. 2010. Notes on ‘The Normative Insignificance of Neuroscience’ by Selim Berker. Unpublished manuscript.
Greene, J. 2013. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.
Greene, J. 2014. Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics 124 (4): 695–726.
Greene, J. 2017. The rat-a-gorical imperative: Moral intuition and the limits of affective learning. Cognition 167 (1): 66–77.
Greene, J., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293 (5537): 2105–2108.
Greene, J. 2009. Dual-process morality and the personal/impersonal distinction: A reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology 45 (3): 581–584.
McGuire, J., R. Langdon, M. Coltheart, and C. Mackenzie. 2009. A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology 45 (3): 577–580.
Greene, J., L.E. Nystrom, A.D. Engell, J.M. Darley, and J.D. Cohen. 2004. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44 (2): 389–400.
Klein, C. 2011. The Dual Track Theory of Moral Decision-Making: A Critique of the Neuroimaging Evidence. Neuroethics 4 (2): 143–162.
Greene, J. 2008. The Secret Joke of Kant's Soul. In Moral Psychology: Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, ed. W. Sinnott-Armstrong, 35–80. Cambridge, MA: MIT Press.
Greene, J. 2005. Cognitive Neuroscience and the Structure of the Moral Mind. In Innateness and the Structure of the Mind: Volume 1, ed. P. Carruthers, S. Laurence, and S. Stich, 338–352. New York: Oxford University Press.
Greene, J. 2005. Emotion and Cognition in Moral Judgment: Evidence from Neuroimaging. In Neurobiology of Human Values, ed. J.-P. Changeux, A. Damasio, W. Singer, and Y. Christen, 57–66. Berlin/Heidelberg: Springer.
Kahane, G. 2011. Evolutionary Debunking Arguments. Noûs 45 (1): 103–125.
Berker, S. 2009. The Normative Insignificance of Neuroscience. Philosophy and Public Affairs 37 (4): 293–329.
Sauer, H. 2012. Morally irrelevant factors: What’s left of the dual-process model of moral cognition. Philosophical Psychology 25 (6): 783–811.
Singer, P. 2005. Ethics and Intuitions. The Journal of Ethics 9 (3–4): 331–352.
Kahane, G. 2014. Evolution and Impartiality. Ethics 124 (2): 327–341.
Mason, K. 2011. Moral Psychology and Moral Intuitions: A Pox On All Your Houses. Australasian Journal of Philosophy 89 (3): 441–458.
Tersman, F. 2008. The reliability of moral intuitions: A challenge from neuroscience. Australasian Journal of Philosophy 86 (3): 389–405.
Greene, J. 2016. Solving the Trolley Problem. In A Companion to Experimental Philosophy, ed. J. Sytsma and W. Buckwalter, 175–189. Malden, MA: Wiley Blackwell.
Kumar, V., and R. Campbell. 2012. On the normative significance of experimental moral psychology. Philosophical Psychology 25 (3): 311–330.
Cushman, F., L. Young, and M. Hauser. 2006. The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm. Psychological Science 17 (12): 1082–1089.
Hauser, M., F. Cushman, L. Young, J. Kang-Xing, and J. Mikhail. 2007. A Dissociation Between Moral Judgments and Justifications. Mind & Language 22 (1): 1–21.
Lott, M. 2016. Moral Implications from Cognitive (Neuro)Science? No Clear Route. Ethics 127 (1): 241–256.
Bruni, T., M. Mameli, and R.A. Rini. 2014. The Science of Morality and its Normative Implications. Neuroethics 7 (2): 159–172.
Sunstein, C. 2005. Moral heuristics. Behavioral and Brain Sciences 28 (4): 531–542.
Haidt, J., F. Bjorklund, and S. Murphy. 2000. Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.
Haidt, J. 2001. The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review 108 (4): 814–834.
Prinz, J. 2007. The Emotional Construction of Morals. Oxford: Oxford University Press.
Sperber, D. 1996. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell.
Railton, P. 2014. The Affective Dog and Its Rational Tail: Intuition and Attunement. Ethics 124 (4): 813–859.
Sauer, H. 2012. Educated intuitions. Automaticity and rationality in moral judgment. Philosophical Explorations 15 (3): 255–275.
Wielenberg, E.J. 2014. Joshua Greene: Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Ethics 124 (4): 910–916.
Tobia, K. 2015. Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Philosophical Psychology 28 (5): 746–750.
Persson, I., and J. Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
Estlund, D. 2014. Utopophobia. Philosophy & Public Affairs 42 (2): 113–134.
Uhlmann, E.L., D.A. Pizarro, D. Tannenbaum, and P.H. Ditto. 2009. The motivated use of moral principles. Judgment and Decision Making 4 (6): 476–491.
Wheatley, T., and J. Haidt. 2005. Hypnotic Disgust Makes Moral Judgments More Severe. Psychological Science 16 (10): 780–784.
Wilson, T.D. 2002. Strangers to ourselves: discovering the adaptive unconscious. Cambridge, MA/London: Harvard University Press.
Dean, R. 2010. Does Neuroscience Undermine Deontological Theory? Neuroethics 3 (1): 43–60.
Mihailov, E. 2015. Is Deontology a Moral Confabulation? Neuroethics 9 (1): 1–13.
Königs, P. 2018. Two types of debunking arguments. Philosophical Psychology 31 (3): 383–402.
Kahane, G. 2013. The armchair and the trolley: an argument for experimental ethics. Philosophical Studies 162 (2): 421–445.
Rini, R.A. 2013. Making Psychology Normatively Significant. The Journal of Ethics 17 (3): 257–274.
Greene, J., F.A. Cushman, L.E. Stewart, K. Lowenberg, L.E. Nystrom, and J.D. Cohen. 2009. Pushing Moral Buttons: The interaction between personal force and intention in moral judgment. Cognition 111 (3): 364–371.
Greene, J. 2003. From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience 4 (10): 847–850.
Acknowledgements
I would like to thank Katharina Brecht, Sabine Döring, Malte Hendrickx, Michael Wenzler and the anonymous referees of this article.
Funding
Studienstiftung des deutschen Volkes.
Cite this article
Königs, P. On the normative insignificance of neuroscience and dual-process theory. Neuroethics 11, 195–209 (2018). https://doi.org/10.1007/s12152-018-9362-y