On the normative insignificance of neuroscience and dual-process theory


Abstract

According to the dual-process account of moral judgment, deontological and utilitarian judgments stem from two different cognitive systems. Deontological judgments are effortless, intuitive and emotion-driven, whereas utilitarian judgments are effortful, reasoned and dispassionate. The most notable evidence for dual-process theory comes from neuroimaging studies by Joshua Greene and colleagues. Greene has suggested that these empirical findings undermine deontology and support utilitarianism. It has been pointed out, however, that the most promising interpretation of his argument does not make use of the empirical findings. In this paper, I engage with recent attempts by Greene to vindicate the moral significance of dual-process theory and the supporting neuroscientific findings. I consider their potential moral significance with regard to three aspects of Greene’s case against deontology: the argument from morally irrelevant factors, the functionalist argument and the argument from confabulation. I conclude that Greene fails to demonstrate how neuroscience and dual-process theory in general can advance moral theorizing.


Notes

  1. Especially in [1,2,3,4].

  2. I am (like Berker and Greene) specifically considering the significance of the neuroscientific findings that support dual-process theory, rather than that of neuroscience in general.

  3. Greene et al. [5]. The study also tested reaction times, but the data were misinterpreted [6, 7].

  4. Greene et al. [8].

  5. For an overview of the evidence, see Greene [3].

  6. Greene [1], pp. 8, 20; [3], pp. 705–706. This renders criticism specifically of the neuroimaging studies less damaging (e.g. [9]).

  7. Greene [10], p. 60, see also pp. 70–71.

  8. Greene [11], p. 345; [12], p. 59; [10], p. 43; Greene et al. [8], pp. 389–390. Similar evolutionary debunking arguments are suggested for our moral condemnation of incest and our retributive intuitions [10].

  9. Kahane [13], p. 106.

  10. These are at least the two most natural and charitable interpretations of the argument put forth in Greene [10]. For further possible interpretations, see Berker [14]. For another general discussion of Greene’s argument, see Sauer [15]. A similar argument, based on the same empirical findings, was developed by Singer ([16]).

  11. Berker [14], p. 319; Kahane [13, 17]; Mason [18]; Tersman [19]. Note that Greene acknowledges that utilitarianism depends on intuitions, too, but he takes these intuitions to be about general principles rather than particular actions and to differ psychologically from deontological intuitions ([1], pp. 19–20; [3], p. 724).

  12. At least, he has not attempted to defend the argument from evolutionary history, and he has more recently expressed doubts concerning the evolutionary hypothesis ([4], p. 68).

  13. Greene [20], p. 176.

  14. Greene [6], p. 365. By contrast, a dilemma was originally classified as ‘personal’ if “the action in question (a) could reasonably be expected to lead to serious bodily harm (b) to a particular person or a member or members of a particular group of people (c) where this harm is not the result of deflecting an existing threat onto a different party.” ([5], p. 2107). While our responses are sensitive to the conjunction of two factors (personal force plus intention), the argument from morally irrelevant factors focuses primarily on the irrelevance of the former factor.

  15. Berker [14].

  16. I am here adopting the dialectic of Berker’s argument; he contends that Greene’s arguments either “rely on a shoddy inference” or rest on premises “that render the neuroscientific results irrelevant to the overall argument” ([14], p. 294).

  17. It is probably fair to say that Greene’s notes on this issue, which are labelled as ‘work in progress’, are somewhat sketchy. Below, I present what I hope is their most charitable interpretation.

  18. Berker [14], p. 325.

  19. Berker [14], p. 325.

  20. Kumar and Campbell [21], pp. 314–315.

  21. See Greene [1], p. 14; [3], pp. 711–713.

  22. Berker [14], p. 326.

  23. Greene [1], p. 18.

  24. See in particular Cushman et al. [22]; Hauser et al. [23]. Note though that the evidence provided by these studies is rather mixed and limited.

  25. Greene [1], p. 20.

  26. Greene [1], p. 20.

  27. Greene [1], p. 15.

  28. Greene [3]; see also Greene [1, 2, 4]. For an interesting discussion, see Lott [24].

  29. Greene [3], p. 714.

  30. See e.g. Greene [2], pp. 98–99, 348; [4], p. 73.

  31. Greene [2], pp. 66–67, see also Greene [2], p. 99; [4], pp. 72–73. His characterization of ‘Us vs Them’ problems as ‘unfamiliar’ is problematic, though (see note 45 below).

  32. Greene [2], pp. 1–27, pp. 293–295.

  33. Bruni et al. call this the ‘collective usefulness’ view: “According to this view, certain forms of moral thinking are to be recommended because they serve instrumentally to further widely shared goals, such as a reduction in conflict, or an increase in social cohesion.” ([25], p. 160). Note that there is a long tradition in utilitarian thought of embracing at least some common-sense moral rules as useful rules-of-thumb (Sunstein [26], p. 533).

  34. Greene [1], p. 21.

  35. See note 33 above.

  36. Greene seems to be aware of this problem and promises to address it in his book ([1], p. 24). But as I explain below, I find his treatment of these issues in his book unconvincing.

  37. Greene [3], p. 713. Elsewhere, he writes that “whether a judgment is produced by a process that is emotional, heuristic, or a by-product of our evolutionary history is not unrelated to whether that judgment reflects a sensitivity to factors that are morally irrelevant.” ([1], p. 12).

  38. The example is from Haidt et al. [27] and Haidt [28].

  39. Greene [1], p. 11.

  40. Greene [1], p. 11. Shortly after, he explicitly endorses the ‘argument from morally irrelevant factors’ interpretation suggested by Berker, and he writes that his characterization of the argument from morally irrelevant factors is modelled on the incest argument ([1], p. 15). Elsewhere ([3], p. 712), his presentation of the incest case is also embedded in a discussion of the argument from morally irrelevant factors.

  41. Greene [1], p. 22.

  42. Greene [3], p. 714.

  43. This is also noted by Greene ([3], p. 714).

  44. Prinz [29], p. 220. The epidemiological approach was pioneered by Sperber [30].

  45. See Prinz [29], pp. 223–229. Greene could object that this is an inter-tribal conflict and as such not suited for our automatic mode anyway. Indeed, at one point, Greene writes: “Of course, Us versus Them is a very old problem. But historically it’s been a tactical problem rather than a moral one.” ([2], p. 15) But this statement is puzzling. What does it mean for a problem to be a tactical rather than moral one? And why is the intra-tribal tragedy of the commons (presumably as ‘tactical’ a problem as one can imagine) a moral problem rather than a tactical one? And does this mean that ‘familiarity’ is not the decisive criterion, at least not the only one? In any case, even if we abstract from this specific problem with the cultural trial-and-error process, other problems remain.

  46. Greene [3], p. 714.

  47. See Railton [31] and Sauer [32].

  48. Similarly, Bruni, Mameli and Rini conclude that Greene’s suggested heuristic is unconvincing until it has been corroborated by “a very ambitious empirical research program” ([25], p. 171).

  49. Greene [2], p. 291.

  50. Greene [2], p. 189.

  51. Greene [2], pp. 213–217, p. 261.

  52. Greene [2], pp. 224–245. Note that the evolutionary debunking arguments that feature in Greene’s book differ from his earlier ones.

  53. Similarly, Wielenberg [33], p. 914.

  54. Greene [2], pp. 254–285.

  55. Similarly, Tobia [34], p. 749. By contrast, the idea behind seeking shared ground is intelligible. Given that among the problems to be solved are the conflicts resulting from disagreement, identifying shared values may be a way of mitigating these conflicts. Interestingly, however, Greene himself appears to favor an epistemological rationale, which is less intelligible given the functionalist framework ([2], pp. 188–189).

  56. Persson and Savulescu [35]; Estlund [36].

  57. Surprisingly, neither Berker nor Greene consider this option, even though Greene’s presentation of the argument clearly invokes the neuroscientific findings. Berker only remarks that there is no need to discuss this argument, as it presupposes the success of the debunking of deontological intuitions, which Berker disputes ([14], p. 315). But this is rather uncharitable. As just noted, Greene can still use the argument from morally irrelevant factors to debunk some deontological intuitions (while admitting its limitations) and then combine it with the argument from confabulation. If the latter uses the neuroscientific findings, this would suffice to refute Berker’s main criticism.

  58. On post hoc rationalization, see e.g. Uhlmann et al. [37]; Wheatley and Haidt [38]; Wilson [39].

  59. Greene [10], pp. 60–63, 67–72; [3], p. 718; for instructive discussions, see Dean [40], pp. 47–48; Mihailov [41].

  60. Greene [10], p. 68.

  61. Greene [10], p. 68.

  62. In particular Greene et al. [5].

  63. Greene [3], p. 718, emphasis added. Note that deontologists are claimed to rationalize not only their deontological intuitions but their consequentialist intuitions as well. After all, deontologists provide explanations of why Footbridge is morally different from Switch, rather than just why Footbridge calls for a deontological response.

  64. Besides, the argument is an ad hominem attack, which, even if sound, has no place in scholarly debate [42].

  65. Greene [1], pp. 8 and 14. His 2014 paper is also obviously intended to demonstrate the moral significance of dual-process theory.

  66. Greene [1], p. 4.

  67. Greene [1], p. 17.

  68. For two interesting discussions, see Kahane [43] and Kumar and Campbell [21], pp. 315–319. Rini [44], too, defends the method underlying the argument from morally irrelevant factors.

  69. Hence the title of his paper, “The Normative Insignificance of Neuroscience”. See also in particular Berker [14], pp. 294, 325–327.

  70. And Berker can hardly be faulted for not appreciating the normative significance of this sort of experimental moral psychology, given that Greene did not much emphasize it until after Berker had raised his concerns about the normative significance of neuroscience. The focus had no doubt been on the neuroscientific findings and dual-process theory. The question of which factors trigger deontological responses initially played only a subordinate role; a provisional guess on this question was necessary in order to test the dual-process hypothesis ([5], p. 2107; see also [1], p. 27; [6]; [3], p. 701 n17). It was only later that Greene and colleagues set out to develop a more precise account of the principles that govern people’s responses to trolley dilemmas [45]. This study had already been published when Berker wrote his article, and he mentions it in a footnote ([14], p. 323 n73). But Greene’s most complete statement of why the empirical findings matter, his ‘The Secret Joke of Kant’s Soul’ [10], predates this study and does not take it into account.

  71. Greene [46], p. 849. Greene initially took the findings to refute moral realism rather than deontology.

  72. Greene [1], p. 27; see also Greene [6].

References

  1. Greene, J. 2010. Notes on ‘The Normative Insignificance of Neuroscience’ by Selim Berker. Unpublished manuscript.

  2. Greene, J. 2013. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.

  3. Greene, J. 2014. Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics 124 (4): 695–726.

  4. Greene, J. 2017. The rat-a-gorical imperative: Moral intuition and the limits of affective learning. Cognition 167 (1): 66–77.

  5. Greene, J., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293 (5537): 2105–2108.

  6. Greene, J. 2009. Dual-process morality and the personal/impersonal distinction: A reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology 45 (3): 581–584.

  7. McGuire, J., R. Langdon, M. Coltheart, and C. Mackenzie. 2009. A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology 45 (3): 577–580.

  8. Greene, J., L.E. Nystrom, A.D. Engell, J.M. Darley, and J.D. Cohen. 2004. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44 (2): 389–400.

  9. Klein, C. 2011. The Dual Track Theory of Moral Decision-Making: a Critique of the Neuroimaging Evidence. Neuroethics 4 (2): 143–162.

  10. Greene, J. 2008. The Secret Joke of Kant's Soul. In Moral Psychology: Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, ed. W. Sinnott-Armstrong, 35–80. Cambridge, MA: MIT Press.

  11. Greene, J. 2005. Cognitive Neuroscience and the Structure of the Moral Mind. In Innateness and the Structure of the Mind: Volume 1, ed. P. Carruthers, S. Laurence, and S. Stich, 338–352. New York: Oxford University Press.

  12. Greene, J. 2005. Emotion and Cognition in Moral Judgment: Evidence from Neuroimaging. In Neurobiology of Human Values, ed. J.-P. Changeux, A. Damasio, W. Singer, and Y. Christen, 57–66. Berlin/Heidelberg: Springer.

  13. Kahane, G. 2011. Evolutionary Debunking Arguments. Noûs 45 (1): 103–125.

  14. Berker, S. 2009. The Normative Insignificance of Neuroscience. Philosophy and Public Affairs 37 (4): 293–329.

  15. Sauer, H. 2012. Morally irrelevant factors: What’s left of the dual-process model of moral cognition. Philosophical Psychology 25 (6): 783–811.

  16. Singer, P. 2005. Ethics and Intuitions. The Journal of Ethics 9 (3–4): 331–352.

  17. Kahane, G. 2014. Evolution and Impartiality. Ethics 124 (2): 327–341.

  18. Mason, K. 2011. Moral Psychology and Moral Intuitions: A Pox On All Your Houses. Australasian Journal of Philosophy 89 (3): 441–458.

  19. Tersman, F. 2008. The reliability of moral intuitions: A challenge from neuroscience. Australasian Journal of Philosophy 86 (3): 389–405.

  20. Greene, J. 2016. Solving the Trolley Problem. In A Companion to Experimental Philosophy, ed. J. Sytsma and W. Buckwalter, 175–189. Malden, MA: Wiley Blackwell.

  21. Kumar, V., and R. Campbell. 2012. On the normative significance of experimental moral psychology. Philosophical Psychology 25 (3): 311–330.

  22. Cushman, F., L. Young, and M. Hauser. 2006. The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm. Psychological Science 17 (12): 1082–1089.

  23. Hauser, M., F. Cushman, L. Young, J. Kang-Xing, and J. Mikhail. 2007. A Dissociation Between Moral Judgments and Justifications. Mind & Language 22 (1): 1–21.

  24. Lott, M. 2016. Moral Implications from Cognitive (Neuro)Science? No Clear Route. Ethics 127 (1): 241–256.

  25. Bruni, T., M. Mameli, and R.A. Rini. 2014. The Science of Morality and its Normative Implications. Neuroethics 7 (2): 159–172.

  26. Sunstein, C. 2005. Moral heuristics. Behavioral and Brain Sciences 28 (4): 531–542.

  27. Haidt, J., F. Bjorklund, and S. Murphy. 2000. Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

  28. Haidt, J. 2001. The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review 108 (4): 814–834.

  29. Prinz, J. 2007. The Emotional Construction of Morals. Oxford: Oxford University Press.

  30. Sperber, D. 1996. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell.

  31. Railton, P. 2014. The Affective Dog and Its Rational Tail: Intuition and Attunement. Ethics 124 (4): 813–859.

  32. Sauer, H. 2012. Educated intuitions. Automaticity and rationality in moral judgment. Philosophical Explorations 15 (3): 255–275.

  33. Wielenberg, E.J. 2014. Joshua Greene: Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Ethics 124 (4): 910–916.

  34. Tobia, K. 2015. Moral Tribes: Emotion, Reason, and the Gap between Us and Them. Philosophical Psychology 28 (5): 746–750.

  35. Persson, I., and J. Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

  36. Estlund, D. 2014. Utopophobia. Philosophy & Public Affairs 42 (2): 113–134.

  37. Uhlmann, E.L., D.A. Pizarro, D. Tannenbaum, and P.H. Ditto. 2009. The motivated use of moral principles. Judgment and Decision making 4 (6): 476–491.

  38. Wheatley, T., and J. Haidt. 2005. Hypnotic Disgust Makes Moral Judgments More Severe. Psychological Science 16 (10): 780–784.

  39. Wilson, T.D. 2002. Strangers to ourselves: discovering the adaptive unconscious. Cambridge, MA/London: Harvard University Press.

  40. Dean, R. 2010. Does Neuroscience Undermine Deontological Theory? Neuroethics 3 (1): 43–60.

  41. Mihailov, E. 2015. Is Deontology a Moral Confabulation? Neuroethics 9 (1): 1–13.

  42. Königs, P. 2018. Two types of debunking arguments. Philosophical Psychology 31 (3): 383–402.

  43. Kahane, G. 2013. The armchair and the trolley: an argument for experimental ethics. Philosophical Studies 162 (2): 421–445.

  44. Rini, R.A. 2013. Making Psychology Normatively Significant. The Journal of Ethics 17 (3): 257–274.

  45. Greene, J., F.A. Cushman, L.E. Stewart, K. Lowenberg, L.E. Nystrom, and J.D. Cohen. 2009. Pushing Moral Buttons: The interaction between personal force and intention in moral judgment. Cognition 111 (3): 364–371.

  46. Greene, J. 2003. From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience 4 (10): 847–850.


Acknowledgements

I would like to thank Katharina Brecht, Sabine Döring, Malte Hendrickx, Michael Wenzler and the anonymous referees of this article.

Funding

Studienstiftung des deutschen Volkes.

Author information

Correspondence to Peter Königs.


About this article

Cite this article

Königs, P. On the normative insignificance of neuroscience and dual-process theory. Neuroethics 11, 195–209 (2018). https://doi.org/10.1007/s12152-018-9362-y
