Abstract
This paper presents an empirical solution to the puzzle of weakness of will. Specifically, it presents a theory of action, grounded in contemporary cognitive neuroscientific accounts of decision making, that explains the phenomenon of weakness of will without resulting in a puzzle.
Notes
Following Rorty (1980), the so-called akratic break can occur at various points, including both deciding to eat the cake and eating it.
The opposing feelings of giving in and of acting freely, or ‘uncompelledness,’ are sometimes jointly described as a feeling of inner conflict, e.g., Hare describes weakness of will as a psychological circumstance best expressed by the “curious metaphor of divided personality which, ever since this subject was first discussed, has seemed so natural” (1963, 81).
Folk psychological theories are philosophical theories that aim to explain choice and action using purportedly commonsense assumptions about judgments, beliefs, desires, etc. (Lewis 1972; Stich and Nichols 2003). Folk Psychological Theory is broadly representative of those theories as presented in the context of explaining weakness of will (see Stroud and Tappolet 2003 for a review). Weakness of will is a central test case for such theories, since they have difficulties explaining how the phenomenon could be possible. My goal in this paper is to provide a data-driven rather than a folk psychological theory of weakness of will.
Per the recommendations of two anonymous referees, Folk Psychological Theory and Weakness of Will accommodate the temporal features of choice, the principle that an agent need only believe that she is free to act in a certain way, and unsuccessful attempts at action. Although both Folk Psychological Theory and Weakness of Will are still vulnerable to additional counterexamples, they can, in principle, be revised to accommodate them. My goal is not to establish a theory of weakness of will that precludes all possible counterexamples, however. My goal is to provide a plausible, i.e., empirically informed, theory of weakness of will that nonetheless does not result in a paradox. See also Fn. 6 below.
Weakness of Will does not define weakness of will. It identifies a class of actions characterized by a sufficient condition. I have tried to choose as neutral a sufficient condition as possible, though of course my choice of condition is still controversial. For example, Weakness of Will is often formulated using the notion of “intending to do,” rather than “trying to do.” I prefer the latter formulation, since the notion of intentions as distinct mental states is facing pressure from the cognitive neurosciences (for an excellent discussion of using intentions in neuroscientific explanations, see Uithol et al. 2014). However, my theory could be modified to accommodate intentions as well as alternative sufficient conditions. At present, my theory only addresses those cases of weakness of will captured by Weakness of Will. I thank two anonymous referees for helping me clarify this point.
Conditional evaluative judgments are also called prima facie or all things considered judgments. Unconditional evaluative judgments are also called all-out judgments about what it would be best to do (Davidson 1970).
This type of weakness of will is also known as last-ditch akrasia (Pears 1984), strict akratic action (Mele 1987), and clear-eyed weakness of will (Bobonich and Destrée 2007). In a representative passage, Robert Dunn argues,
Davidson too is revealed as unsympathetic to the possibility of [unconditional weakness of will]. For, as I have just stressed, the concern I have with whether weakness of will is possible is specifically a concern with whether certain cases of acting against one’s unconditional better judgment, or judgment about what is right, or some such, are possible. No doubt other putative phenomena merit being thought of in terms of weakness of will; but none seem more central than the range of cases I have in mind; and moreover, it is surely these which, quite naturally, have provided the standard focus of discussion of whether weakness of will is possible (1987, 12).
This is called Kamin Blocking (Kamin 1969).
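Kamin blocking falls out of error-driven learning models such as Rescorla and Wagner’s (1972): because all stimuli present on a trial share a single prediction error, a stimulus that already predicts the reward leaves nothing for a newly added stimulus to learn. A minimal sketch, with assumed (illustrative) learning rate and reward values:

```python
# A minimal Rescorla-Wagner sketch of Kamin blocking (Kamin 1969).
# The learning rate (alpha) and reward magnitude are assumptions for illustration.

def train(trials, weights, alpha=0.3, reward=1.0):
    """Update associative strengths for each compound of stimuli."""
    for stimuli in trials:
        prediction = sum(weights[s] for s in stimuli)
        error = reward - prediction          # one prediction error, shared by all stimuli
        for s in stimuli:
            weights[s] += alpha * error      # each present stimulus learns from that error
    return weights

w = {"A": 0.0, "B": 0.0}
train([("A",)] * 50, w)         # Phase 1: A alone comes to predict the reward
train([("A", "B")] * 50, w)     # Phase 2: A+B compound; A already predicts the reward
# B's associative strength stays near zero: learning to B is "blocked"
print(round(w["A"], 2), round(w["B"], 2))
```

Because A fully predicts the reward by the end of Phase 1, the shared error is near zero throughout Phase 2, and B acquires almost no associative strength despite being consistently paired with the reward.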
Securing the relevant value alternatives represents an associated challenge. It is of no use to predict that the juiciest and most nutritious leaves are on the highest branches of the tree if one cannot also reach those highest branches.
While most evidence suggests that decisions are controlled by at least three distinct systems, the full range of decision-making behaviors may be underwritten by still more. For example, there may be more than one type of Pavlovian controller (see Daw and O’Doherty 2013 for a recent discussion of this issue). Thus, while the Multi-System Model refers to just three systems in practice, it allows for still more in theory.
In reinforcement learning, this system is formally called the Pavlovian system (e.g., see Dayan et al. 2006; Talmi et al. 2008; Dayan and Berridge 2014). However, the term ‘Pavlovian’ frequently leads to confusion among researchers in other fields (for an interesting discussion of how psychologists and machine learning scientists characterize the Pavlovian system differently, see Rescorla’s “Pavlovian conditioning: It’s not what you think it is,” 1988). In most fields, as well as in everyday usage, the term ‘Pavlovian’ is usually associated with Pavlov’s original experiments with dogs, in which Pavlov repeatedly rang a bell and then consistently fed his dogs afterwards, so that the bell (the conditioned stimulus) came to elicit salivation (the conditioned response). By contrast, in reinforcement learning, it is the relationship between the unconditioned stimulus (i.e., the food) and the unconditioned response (i.e., the salivating) that is of interest.
The deliberative system is formally called the goal-directed or model-based system (e.g. see Dayan 2011).
Example from Dayan (2011).
The habitual system is formally called the habit-based or model-free system (e.g., see Dayan 2011). Experimental psychologists have dissociated deliberative- and habit-based activities in animals. Anthony Dickinson and Bernard Balleine trained rats to press a lever in exchange for a reward, and then devalued the reward by pairing it with a noxious substance. They then examined whether the animals continued to press the lever to receive further rewards. Notably, the duration of the rats’ initial training determined whether they were willing to press the lever or not. If they were trained for a moderate period of time, the rats no longer pressed the lever. If they were trained for a longer period, the rats continued to press the lever. These responses have been interpreted as reflecting the deliberative system in the former case and the habitual system in the latter (Dickinson 1985; Dickinson and Balleine 2002). Modified versions of this methodology have been used to isolate deliberative activity in human participants (Hampton et al. 2006; Valentin et al. 2007; Tricomi et al. 2009).
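The logic of the devaluation paradigm can be sketched in a few lines: a habitual (model-free) controller consults a cached action value learned before devaluation, while a deliberative (model-based) controller replans using the outcome’s current worth. All names and values here are invented for illustration:

```python
# A toy sketch of why devaluation dissociates the two systems (after Dickinson 1985).
# The specific values and the devaluation step are assumptions for illustration.

# Habitual (model-free): acts on a cached action value learned before devaluation.
cached_value = {"press_lever": 1.0}       # learned while the food was still rewarding

# Deliberative (model-based): replans using a model of the outcome's current worth.
model = {"press_lever": "food"}           # action -> outcome
outcome_value = {"food": 1.0}

outcome_value["food"] = -1.0              # devaluation: food paired with a noxious substance

habitual_choice = cached_value["press_lever"] > 0                # still presses
deliberative_choice = outcome_value[model["press_lever"]] > 0    # stops pressing
print(habitual_choice, deliberative_choice)
```

The cached value never registers the devaluation until it is relearned through further experience, which is why extended training (which entrenches the habitual controller) leaves rats pressing a lever for food they no longer want.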
I owe this formulation to (Crockett 2013).
The feedback signal works much like exclamations of ‘Hotter!’ and ‘Colder!’ in the children’s game Hot-or-Cold. The Seeker moves around the room with the general goal of finding a hidden object. The Hider helps the Seeker by telling her whether she is getting closer or farther away. The Hider’s suggestions operate like an error signal by helping the Seeker refine her predictions, albeit without giving her detailed instructions about where to go (analogy from Montague 2006).
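The analogy can be made concrete: a seeker who only ever hears ‘Hotter!’ or ‘Colder!’ (i.e., only the sign of an error signal) can still home in on the target. A toy sketch, with assumed positions and step size:

```python
# An illustrative sketch of learning from a coarse error signal, in the spirit of
# the Hot-or-Cold analogy (Montague 2006). The seeker is never told the target's
# location, only whether each candidate step reduced the distance to it.
# Positions, step size, and trial count are assumptions for illustration.
import random

def seek(target, start=0.0, step=1.0, trials=100):
    position, last_distance = start, abs(start - target)
    for _ in range(trials):
        move = random.choice([-step, step])
        candidate = position + move
        distance = abs(candidate - target)
        if distance < last_distance:      # "Hotter!": keep the move
            position, last_distance = candidate, distance
        # "Colder!": discard the move and try again
    return position

random.seed(0)
print(seek(target=7.0))   # ends at the target without ever being told where it is
```

The one-bit feedback carries no instructions, yet it is enough to refine the seeker’s behavior, just as a scalar prediction error refines a learner’s value estimates.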
Thanks to an anonymous referee for raising the issue of hierarchical models, and for encouraging me to clarify my discussion of PA below.
Partial evaluation further optimizes choice, “trading off the likely costs (for example, time or calories) of additional search against its expected benefits (more accurate valuations allowing better reward harvesting)” (Daw et al. 2005, p. 1708).
That said, if HA does turn out to be correct, and, further, provides the relevant conditions for the deliberative system enacting control, the model of weakness of will presented here should apply, mutatis mutandis, to HA.
As it is the agent’s own habitual system that computes and assigns the relevant values to B, there is no reason to expect that she will feel compelled or otherwise ‘un-free’ in making her decision.
Since the habitual system continues to re-evaluate its choices as it acquires new experiences, habitual weakness of will promises to be remediable. That is, although Davidson might get up to brush his teeth once or twice, he would experience the action’s negative consequences and refrain from repeating it in the future. Interactions between the deliberative and habitual systems thus provide one explanation of the everyday experience of weakness of will.
In a recent experiment, Huys and colleagues showed that when one of the branches of a decision tree is associated with a large loss early in the tree, participants rely on the hardwired system to prune out that entire series of actions (Huys et al. 2012).
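A toy sketch of this pruning effect, loosely modeled on Huys et al.’s (2012) task (the tree, rewards, and pruning threshold are all invented for illustration): a large early loss guards the best long-run path, and a searcher that prunes subtrees below a threshold settles for a worse but ‘safer’ outcome.

```python
# A toy sketch of Pavlovian pruning in decision-tree search (after Huys et al. 2012).
# The tree structure, rewards, and threshold are assumptions for illustration.

def best_value(tree, prune_below=None):
    """Return the best total reward reachable from this node.
    tree: (reward, [children]); a leaf has an empty child list."""
    reward, children = tree
    if not children:
        return reward
    values = []
    for child in children:
        if prune_below is not None and child[0] < prune_below:
            continue                      # prune the entire subtree after a large loss
        values.append(best_value(child, prune_below))
    if not values:                        # everything pruned: take this node alone
        return reward
    return reward + max(values)

# A large early loss (-70) guards the best long-run path (+140).
tree = (0, [(-70, [(140, [])]),     # best path overall: 0 - 70 + 140 = 70
            (20, [(10, [])])])      # safe path: 0 + 20 + 10 = 30

print(best_value(tree))                   # exhaustive search finds 70
print(best_value(tree, prune_below=-50))  # pruning below -50 settles for 30
```

Pruning saves the cost of evaluating the subtree, but at the price of never discovering that the early loss is more than repaid further down the branch, which is just the structure of Gene’s case.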
One could object at this point that, per Weakness of Will, this is not really a case of weakness of will. But Gene may still be aware of his pruned options. What the agent consciously perceives as his options may not match what happens at the level of his decision systems. Thanks to an anonymous referee for pressing this point.
As noted by an anonymous referee, it may be objected that Gene’s case and Austin’s example amount to instances of inhibitory rather than pruning-based weakness of will. But inhibitory weakness of will is characterized by either physical immobility or nearly compulsive activity, and neither Gene nor Austin experiences either of these (e.g., Austin does not ‘raven’). Their cases are thus best understood as instances of pruning-based weakness of will.
The participants’ behaviors in the Milgram studies suggest that they correspond to instances of weakness of will. Even those participants who continued to administer the shocks did so under extreme stress. For example, many participants perspired heavily, laughed at inappropriate times, and even experienced seizures (Milgram 1974). In their analysis, Merritt et al. (2011) interpret the participants’ acute symptoms of distress as indicating that the participants did not endorse the violent punishment of the victim, but continued to press the button anyway.
Along slightly different lines, in Experiment 15, the baseline condition remained the same, but there were two Experimenters in the room with the participant instead of one. When the Learner protested at the shock, the two Experimenters verbally disagreed with one another as to whether they should go on. In this version, 19 out of 20 participants did not continue administering the shocks past this point (Milgram 1974, p. 106).
The hardwired system can also support or enhance the deliberative system. For example, if an athlete deliberately runs up a challenging hill, hearing her favorite song can trigger her hardwired system to subconsciously pick up the pace.
Davidson writes: “If we are going to explain irrationality at all, it seems we must assume that the mind can be partitioned into quasi-independent structures that interact ... Recall the analysis of akrasia. There I mentioned no partitioning of the mind because the analysis was at that point more descriptive than explanatory. But the way could be cleared for explanation if we were to suppose two semi-autonomous departments of the mind, one that finds a certain course of action to be, all things considered, best, and another that prompts another course of action. On each side, the side of sober judgment and the side of incontinent intent and action, there is a supporting structure of reasons, of interlocking beliefs, expectations, assumptions, attitudes and desires” (1982, p. 300).
System 1 corresponds to unconscious, intuitive reasoning. System 2 corresponds to deliberate reasoning abilities (Kahneman 2011).
One question that follows from this, suggested by an anonymous referee, is what MSM entails regarding weakness of will and accountability. One implication of MSM, perhaps contrary to previous opinion, is that weakness of will is a relatively common phenomenon, so that the issue of how we should hold people accountable takes on greater importance under MSM. It is plausible that, just as MSM fractionates different types of weakness of will and different degrees of compulsion, so it is consistent with a step-wise attribution of accountability. However, MSM does not commit us to any one view.
Fault line approaches to the decision systems are gaining substantial ground in psychology and neuroscience. For example, neuroscientist A. David Redish uses ‘vulnerabilities’ to highlight those aspects of our decision-making architecture that are susceptible to illnesses such as addiction and depression. One key vulnerability consists in the mammalian opioid system and its susceptibility to external chemicals such as opium and heroin (Redish 2013). Quentin Huys and colleagues have similarly argued that psychiatric illnesses such as depression and impulsivity might be byproducts of our decision-making systems (2012, 2013). A similar approach should be taken up in the philosophy of action.
References
Ainslie, G. (2001). Breakdown of will. Cambridge: Cambridge University Press.
Aquinas, T. (1952). The Summa theologica of Saint Thomas Aquinas, Fathers of the English Dominican Province (Trans.). Chicago: Encyclopædia Britannica Press.
Aristotle. (1984). Nicomachean Ethics, Bk. VII, Chs. 1-10. In J. Barnes (Ed.), The complete works of Aristotle (pp. 1808–1821). Princeton: Princeton University Press.
Arpaly, N. (2000). On acting rationally against one’s better judgment. Ethics, 110, 488–513.
Audi, R. (1979). Weakness of will and practical Judgment. Noûs, 13, 173–196.
Augustine. (1960). Confessions (John K. Ryan Trans.). New York: Doubleday.
Austin, J. L. (1956/57). A plea for excuses. In Austin (1979) (pp. 175–204).
Badre, D. (2008). Cognitive control, hierarchy, and the rostro-caudal organization of the frontal lobes. Trends in Cognitive Sciences, 12(5), 193–200.
Balleine, B. W. (2007). Reward and decision making in corticobasal ganglia networks. Boston: Blackwell Publishers.
Balleine, B. W., & Dickinson, A. (1998). Goal-directed instrumental action: contingency and incentive learning and their cortical substrates. Neuropharmacology, 37, 407–419.
Balleine, B. W., & Dickinson, A. (2000). The effect of lesions of the insular cortex on instrumental conditioning: Evidence for a role in incentive memory. Journal of Neuroscience, 20, 8954–8964.
Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1(1), 28–58.
Barto, A. G. (1995). Adaptive critics and the basal ganglia. In J. C. Houk, J. Davis, & D. Beiser (Eds.), Models of information processing in the basal ganglia (pp. 215–232). Cambridge, MA: MIT Press.
Baumeister, R. F. (2002). Ego depletion and self-control failure: An energy model of the self’s executive function. Self and Identity, 1, 129–136.
Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego-depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74, 1252–1265.
Bobonich, C., & Destrée, P. (Eds.). (2007). Akrasia in Greek Philosophy: From Socrates to Plotinus. Leiden: Brill.
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological Review, 108(3), 624.
Botvinick, M., Niv, Y., & Barto, A. C. (2009). Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition, 113(3), 262–280.
Bouton, M. E. (2006). Learning and behavior: A contemporary synthesis. Sunderland, MA: Sinauer.
Bratman, M. (1979). Practical reasoning and weakness of the will. Noûs, 13, 153–171.
Brown, P. L., & Jenkins, H. M. (1968). Auto-shaping of the pigeon’s key-peck. Journal of the Experimental Analysis of Behavior, 11(1), 1–8.
Buss, S. (1997). Weakness of will. Pacific Philosophical Quarterly, 78, 13–44.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Cohen, J. D., Braver, T. S., & O’Reilly, R. C. (1996). A computational approach to prefrontal cortex, cognitive control and schizophrenia: Recent developments and current challenges. Philosophical Transactions: Biological Sciences, 351, 1515–1527.
Cisek, P., & Kalaska, J. F. (2010). Neural mechanisms for interacting with a world full of action choices. Annual Review of Neuroscience, 33, 269–298.
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366.
Davidson, D. (1970). How is weakness of the will possible?. In Davidson (1980) (pp. 21–42).
Davidson, D. (1980). Essays on actions and events. Oxford: Clarendon Press.
Davidson, D. (1982). Paradoxes of irrationality. In Davidson (2004) (pp. 169–187).
Davidson, D. (2004). Problems of rationality. Oxford: Clarendon Press.
Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12), 1704–1711.
Daw, N. D., & O’Doherty, J. P. (2013). Multiple systems for value learning. In Neuroeconomics: Decision making and the brain (pp. 393–410).
Dayan, P. (2011). Interactions between model-free and model-based reinforcement learning. Seminar Series from the Machine Learning Research Group, University of Sheffield, Sheffield. Lecture recording. http://ml.dcs.shef.ac.uk/. Accessed May 2013.
Dayan, P., & Berridge, K. C. (2014). Model-based and model-free Pavlovian reward learning: Revaluation, revision, and revelation. Cognitive, Affective, and Behavioral Neuroscience, 14(2), 473–492.
Dayan, P., & Huys, Q. J. (2008). Serotonin, inhibition, and negative mood. PLoS Computational Biology, 4(2), e4.
Dayan, P., & Niv, Y. (2008). Reinforcement learning: The good, the bad and the ugly. Current Opinion in Neurobiology, 18(2), 185–196.
Dayan, P., Niv, Y., Seymour, B., & Daw, N. D. (2006). The misbehavior of value and the discipline of the will. Neural Networks, 19(8), 1153–1160.
Deneve, S., & Pouget, A. (2004). Bayesian multisensory integration and cross-modal spatial links. Journal of Physiology Paris, 98(1–3), 249–58.
Dickinson, A. (1985). Actions and habits: The development of a behavioural autonomy. Philosophical Transactions of the Royal Society B: Biological Sciences, 308, 67–78.
Dickinson, A., & Balleine, B. (2002). The role of learning in motivation. In C. R. Gallistel (Ed.), Learning, motivation and emotion (pp. 497–533). New York: Wiley.
Dunn, R. (1987). The possibility of weakness of will. Indianapolis: Hackett.
Glimcher, P. W. (2010). Foundations of neuroeconomic analysis. Oxford University Press.
Glimcher, P. W., & Fehr, E. (Eds.). (2013). Neuroeconomics: Decision making and the brain. Academic Press.
Hampton, A. N., Bossaerts, P., & O’Doherty, J. P. (2006). The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans. The Journal of Neuroscience, 26(32), 8360–8367.
Hare, R. M. (1952). The language of morals. Oxford: Clarendon Press.
Hare, R. M. (1963). Freedom and reason. Oxford: Clarendon Press.
Hare, T. A., Camerer, C. F., & Rangel, A. (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science, 324(5927), 646–648.
Heil, J. (1989). Minds divided. Mind, 98(392), 571–583.
Hershberger, W. A. (1986). An approach through the looking-glass. Animal Learning and Behavior, 14(4), 443–451.
Hohwy, J. (2013). The predictive mind. Oxford: Oxford University Press.
Holton, R. (1999). Intention and weakness of will. Journal of Philosophy, 96, 241–262.
Holton, R. (2009). Willing, wanting, waiting. Oxford: Oxford University Press.
Huys, Q. J., Cools, R., Gölzer, M., Friedel, E., Heinz, A., Dolan, R. J., et al. (2011). Disentangling the roles of approach, activation and valence in instrumental and Pavlovian responding. PLoS Computational Biology, 7(4), e1002028.
Huys, Q. J., Eshel, N., O’Nions, E., Sheridan, L., Dayan, P., & Roiser, J. P. (2012). Bonsai trees in your head: How the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Computational Biology, 8(3), e1002410.
Huys, Q. J., Pizzagalli, D. A., Bogdan, R., & Dayan, P. (2013). Mapping anhedonia onto reinforcement learning: A behavioural meta-analysis. Biology of Mood and Anxiety Disorders, 3(1), 1.
Izquierdo, A., Suda, R. K., & Murray, E. A. (2004). Bilateral orbital prefrontal cortex lesions in rhesus monkeys disrupt choices guided by both reward value and reward contingency. Journal of Neuroscience, 24, 7540–7548.
Kable, J. W., & Glimcher, P. W. (2009). The neurobiology of decision: Consensus and controversy. Neuron, 63(6), 733–745.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kamin, L. J. (1969). Predictability, surprise, attention, and conditioning. In B. A. Campbell & R. M. Church (Eds.), Punishment and aversive behavior (pp. 279–296). New York: Appleton-Century-Crofts.
Kiani, R., & Shadlen, M. N. (2009). Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928), 759–764.
Killcross, S., & Coutureau, E. (2003). Coordination of actions and habits in the medial prefrontal cortex of rats. Cerebral Cortex, 13, 400–408.
Lee, S. W., Shimojo, S., & O’Doherty, J. P. (2014). Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3), 687–699.
Leibniz, G. W. F. (1965). Nouveaux Essais Sur L’Entendement Humain, (Hans Heinz Holz, Trans. German). Darmstadt: Wissenschaftliche Buchgesellschaft.
Levy, N. (2011). Resisting weakness of the will. Philosophy and Phenomenological Research, 82(1), 134–155.
Lewis, D. (1972). Psychophysical and Theoretical Identifications. Australasian Journal of Philosophy, 50, 249–258 (reprinted in Rosenthal 1994, pp. 204–10).
Mackintosh, N. J. (1983). Conditioning and associative learning. Oxford: Oxford University Press.
Mele, A. (1987). Irrationality. New York, NY: Oxford University Press.
Mele, A. (2010). Weakness of will and akrasia. Philosophical Studies, 150(3), 391–404.
Mele, A. R. (1992). Springs of action: Understanding intentional behavior. New York: Oxford University Press.
Mele, A. R. (2002). Akratics and addicts. American Philosophical Quarterly, 39(2), 153–167.
Merritt, M. W., Doris, J. M., & Harman, G. (2011). Character. In John M. Doris (Ed.), The moral psychology handbook. Oxford: Oxford University Press.
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371.
Milgram, S. (1974). Obedience to authority: An experimental view. London: Tavistock.
Montague, R. (2006). Why choose this book? How we make decisions. New York: Dutton.
Montague, R. (2007). Your brain is (almost) perfect: How we make decisions. New York: Penguin.
O’Doherty, J. P. (2014). The problem with value. Neuroscience and Biobehavioral Reviews, 43, 259–268.
Padoa-Schioppa, C., & Assad, J. A. (2006). Neurons in the orbitofrontal cortex encode economic value. Nature, 441(7090), 223–226.
Pears, D. (1984). Motivated irrationality. Oxford: Clarendon Press.
Peijnenburg, J. (2000). Akrasia, dispositions and degrees. Erkenntnis, 53(3), 285–308.
Penner, T. (1997). Socrates on the strength of knowledge: Protagoras 351B–357E. Archiv für Geschichte der Philosophie, 79(2), 117–149.
Plato. (1997). Protagoras. In J. M. Cooper & D. S. Hutchinson (Eds.), Complete works (pp. 746–791). Indianapolis: Hackett.
Rangel, A. (2013). Regulation of dietary choice by the decision-making circuitry. Nature Neuroscience, 16(12), 1717–1724.
Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9(7), 545–556.
Redish, D. A. (2013). The mind within the brain. Oxford: Oxford University Press.
Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43(3), 151.
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Classical conditioning II: Current research and theory, 2, 64–99.
Rorty, A. (1980). Where does the akratic break take place? Australasian Journal of Philosophy, 58, 333–347.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593–1599.
Sheffield, F. D. (1965). Relation between classical and instrumental conditioning. In W. F. Prokasy (Ed.), Classical conditioning (pp. 302–322). New York, NY: Appleton Century Crofts.
Shields, C. (2007). Unified agency and akrasia in Plato’s Republic. In Bobonich and Destrée (2007) (pp. 61–86).
Sripada, C. (2010). Philosophical questions about the nature of willpower. Philosophy Compass, 5(9), 793–805.
Spinoza, B. (2002). Complete Works, (Samuel Shirley, Trans.). Indianapolis: Hackett.
Stich, S., & Nichols, S. (2003). Folk psychology. In S. Stich & T. Warfield (Eds.), The Blackwell Guide to Philosophy of Mind (pp. 235–255). Oxford: Blackwell.
Stocker, M. (1979). Desiring the bad: An essay in moral psychology. Journal of Philosophy, 76, 738–753.
Stroud, S. (2003). Weakness of will and practical judgment. In Stroud and Tappolet (2003) (pp. 121–146).
Stroud, S., & Tappolet, C. (Eds.). (2003). Weakness of will and practical irrationality. Oxford: Clarendon Press.
Sutton, R. S., & Barto, A. G. (1998). Introduction to reinforcement learning. Cambridge: MIT Press.
Talmi, D., Seymour, B., Dayan, P., & Dolan, R. J. (2008). Human Pavlovian-instrumental transfer. The Journal of Neuroscience, 28(2), 360–368.
Tappolet, C. (2003). Emotions and the intelligibility of akratic action, In Stroud and Tappolet (2003) (pp. 97–120).
Tricomi, E., Balleine, B. W., & O’Doherty, J. P. (2009). A specific role for posterior dorsolateral striatum in human habit learning. European Journal of Neuroscience, 29(11), 2225–2232.
Uithol, S., Burnston, D. C., & Haselager, P. (2014). Why we may not find intentions in the brain. Neuropsychologia, 56, 129–139.
Uithol, S., van Rooij, I., Bekkering, H., & Haselager, P. (2012). Hierarchies in action and motor control. Journal of Cognitive Neuroscience, 24(5), 1077–1086.
Valentin, V. V., Dickinson, A., & O’Doherty, J. P. (2007). Determining the neural substrates of goal-directed learning in the human brain. The Journal of Neuroscience, 27, 4019–4026.
Walsh, J. J. (1963). Aristotle’s conception of moral weakness. New York: Columbia University Press.
Watson, G. (1977). Skepticism about weakness of will. Philosophical Review, 86, 316–339.
Williams, D. R., & Williams, H. (1969). Auto-maintenance in the pigeon: Sustained pecking despite contingent non-reinforcement. Journal of the Experimental Analysis of Behavior, 12(4), 511–520.
Acknowledgements
I would like to thank Carl Craver, Peter Dayan, John Doris, Ursula Goldenbaum, Bryce Huebner, Colin Klein, Kathryn Lindeman, Robert McCauley, Shaun Nichols, Casey O’Callaghan, Richard Patterson, Elizabeth Schechter, and two anonymous referees for helpful discussions and comments. I’m also grateful to audiences at the 2014 Pacific American Philosophical Association poster session, and at the 2015 Self-prediction in Decision Theory and Artificial Intelligence Conference, for discussions of the material presented in this paper. Special thanks go to Benjamin Henke and Julia Staffel.
Haas, J. An empirical solution to the puzzle of weakness of will. Synthese 195, 5175–5195 (2018). https://doi.org/10.1007/s11229-018-1712-0