Paradox Lost, pp. 91–106

The Self-Torturer

  • Michael Huemer


A person is repeatedly given the option to increase his torture level by an undetectable increment, in exchange for $10,000. Each time, it seems rational to accept, but the end result is a life of agony that seems not worth the financial reward. The solution is to recognize that there can be an introspectively undetectable increment in pain, and that an undetectable harm can outweigh a detectable benefit.

4.1 The Paradox

You have been fitted with an unobtrusive torture device, which will be attached to you for the rest of your life.1 It has a thousand and one settings, labeled from 0 up to 1,000. At setting 0, the device is off, so you feel no suffering caused by the device. At setting 1, the device applies a very slight electric current to the pain center in your brain. It is so slight that you wouldn’t even notice it. At setting 2, it applies a very slightly higher electric current. And so on. For any setting n, the setting n+1 applies a very slightly higher current than n, with the increase being so small that you cannot introspectively tell the difference between being at setting n and being at n+1. However, by the time you get up to setting 1,000, you are in severe pain.

Imagine that you are offered a series of choices over the next thousand days: at the beginning of each day, you may either turn the dial up by one, or leave the device alone. If you turn it up by one, you will be given $10,000. You can never turn the dial down (not even if you give back the money!). Should you turn the dial up on day one? You might reason as follows: “If I turn the dial up just one setting, I won’t even notice the difference. However, I will have an additional $10,000, which is a significant and very noticeable benefit. A significant benefit outweighs an unnoticeable cost (if indeed an unnoticeable ‘cost’ counts as a cost at all). So I should turn the dial.”

This seems reasonable. And the same reasoning applies every day. So it looks as if it is rational to turn the dial up each time. The result: at the end of 1,000 days, you have made $10 million, and you are condemned to spend the rest of your life in agony. It seems, however, that the $10 million, as nice as it may be, would not adequately compensate for the harm of spending the rest of your days in severe pain.

Something must have gone wrong. But it seems that you made the correct (rational, self-interested) choice at each stage. How is it possible that by making the correct choice at every stage, you predictably wind up much worse off than when you started (when you had the option of keeping your starting situation)? This case might also be taken to illustrate how the “better than” relation can be intransitive. Transitivity is the principle that if x is better than y, and y is better than z, then x is better than z. The self-torturer case seemingly violates this principle: for each n, it is better to go to setting n+1 than to remain at n, yet it is not better to go to setting 1,000 than to stay at 0 – or so one might argue.2

This scenario resembles some real-life situations.3 Suppose you have a large supply of potato chips on hand. You like potato chips, but you do not want to become overweight. You pick up chip #1 and consider whether to eat it. If you eat it, you will experience a noticeable pleasure attributable to that particular chip. And surely that one chip will not affect your waistline. So it seems that it makes sense to eat it. You do so. Next you consider chip #2, which will also cause a noticeable pleasure without itself causing any noticeable change in your waistline. You eat that one too. Next, chip #3. And so on. Before long, you’ve eaten the whole bag. As this happens to you every day for several months, you eventually come to regret your chip-eating habit.

What could we expect of a solution to this paradox? First, many find the intransitivity of betterness or of rational preference paradoxical – that is, it seems paradoxical that one’s situation could be repeatedly getting better at each of a series of stages (or that one’s rational preferences could be repeatedly getting satisfied) and yet that the end result be decidedly worse for one than the starting situation (or rationally dispreferred to the starting situation). So it would be nice to have a solution that explains why this scenario does not truly exhibit intransitivity of betterness or rational preferences. Second, it would be nice to have a theory of rational choice that can explain why a rational chooser would not in fact wind up turning the dial all the way up to 1,000, and what the rational chooser would do instead.

4.2 Quinn’s Solution

In the original discussion of the paradox, Warren Quinn proposes that the self-torturer should try focusing his attention on some proper subset of all the settings of the device – for example, instead of thinking about all the settings 0, 1, 2, . . ., 1000, the self-torturer might just think about the six settings numbered 0, 200, 400, 600, 800, and 1000.4 He should take a subset such that (i) his preferences over this subset are transitive, and (ii) there is at least one setting in the series that he prefers over his original state (setting 0). He should employ the most refined set for which conditions (i) and (ii) hold, within some systematic scheme for generating such subsets. The self-torturer should form a plan to proceed to whatever is his most preferred setting from that subset. He should then in fact proceed to that setting and stop. For instance, perhaps from the set {0, 200, 400, 600, 800, 1000}, his favorite setting would be 200, since going to 200 would earn him two million dollars while only subjecting him to mild pain that would be compensated for by the money; perhaps the additional pain resulting from proceeding up to 400 would not be worth the additional two million dollars. In that case, the self-torturer should turn the dial up every day for 200 days, and thereafter leave the dial alone.

What happens when the self-torturer reaches day 201, and he is offered the chance to turn the dial up just one more time, in exchange for another $10,000? In Quinn’s view, the self-torturer should rationally decline the offer, on the grounds that accepting the offer would involve departing from his earlier-formed rational plan, and no new information has appeared to justify changing the plan. Quoth Quinn: “He should be stopped by the principle that a reasonable strategy that correctly anticipated all later facts . . . still binds.”5

There are two plausible principles of rational choice in play here, which I will call “Preference Consistency” and “Strategy Consistency”; intuitively, the idea is that one’s choices should be consistent with one’s rational preferences and with one’s rationally chosen strategies, respectively:
  • Preference Consistency: If one has a choice between A and B, and one rationally prefers A to B, it is rational to choose A.

  • Strategy Consistency: If an agent has adopted a rational strategy that correctly anticipated all later facts, then the agent is rationally required to follow through on that strategy.

Strategy Consistency is Quinn’s principle of rational choice. Quinn holds that we should reject Preference Consistency because, in the case of the self-torturer on day 201, Preference Consistency conflicts with Strategy Consistency.6 This conflict exists, allegedly, because the self-torturer rationally prefers being at setting 201 with an extra $10,000 to being at setting 200; yet his rationally chosen strategy requires him to stop at setting 200.

I find this reasoning unpersuasive, for two reasons. First, I think Preference Consistency is a self-evident principle about rational choice – indeed, it is perhaps the most straightforward, obvious principle of rational choice. I am tempted to call Preference Consistency “true by definition”. So if Strategy Consistency conflicts with Preference Consistency, I think one should reject the former.

Second, it is possible to develop a plausible view on which Preference Consistency and Strategy Consistency are both true. Since both principles are plausible, such a view is to be preferred over one that renounces Preference Consistency. This view will be described presently.

4.3 An Orthodox Solution

4.3.1 In Defense of Undetectable Changes

It is plausible that there could be a device that works like the above-described torture device – in particular, that there could be no introspectively detectable difference between (the experiences caused by) adjacent settings, but a large and very detectable difference between the first and the last setting. This shows that it is possible for there to be changes in a person’s conscious experience too small to be introspectively detected. For instance, there could be two pains so similar to each other that one could not tell by introspection which of the two, if either, was more intense; nevertheless, one might in fact be more intense than the other.

Something analogous is certainly true of observable physical properties. There could be two objects so similar to each other that one could not tell which was larger, which was warmer, or which was heavier; nevertheless, one might in fact be larger, warmer, or heavier than the other. But some philosophers would say that what is true of physical properties is not true of pains: unlike physical properties, pains are conscious experiences whose entire nature is exhausted by how they feel to us. Therefore, one might claim, if two pain sensations feel to us the same, then ipso facto they are exactly the same. Therefore, if one cannot tell the difference between the pains caused by settings n and n+1 on the torture device, then those pains are the same.

This argument cannot be correct; it seeks to defend a logically incoherent account of the case. Let “Qn” denote the qualitative character of the conscious experience one has when one has the torture device at setting n. By stipulation, any other experience has Qn if and only if it is precisely, qualitatively identical to the experience one has at setting n (including having the identical pain intensity). Now suppose we accept the reasoning of the preceding paragraph, that is, we accept that for each n, setting n on the torture device feels exactly the same as setting n+1. For instance, setting 0 feels the same as setting 1. When at setting 0, one experiences qualitative feel Q0. When at setting 1, one experiences Q1. On the present view, these experiences are qualitatively identical, so Q0=Q1. The “=” sign there is the identity symbol: “Q0” and “Q1” are two names for one and the same qualitative character. By the same reasoning, Q1=Q2, and then Q2=Q3, and so on. By transitivity of identity, Q0=Q1000. But that is absurd; Q0 definitely is not Q1000. Therefore, we must reject the assumption that for each n, Qn=Qn+1; therefore, we must also reject the assumption that if two experiences cannot be introspectively distinguished, then they are qualitatively identical.

Notice what is not an appropriate reply here: it would not be appropriate to object that I have falsely assumed that introspective indistinguishability is transitive. I have not assumed that; I assumed precisely the opposite. Here is another way of putting my argument:
  1. Introspective indistinguishability is not transitive.

  2. Qualitative identity is transitive.

  3. Therefore, introspective indistinguishability does not entail qualitative identity.

Premise 1 is uncontroversial. Settings 0 and 1 are indistinguishable, as are settings 1 and 2, settings 2 and 3, and so on. But settings 0 and 1000 are not indistinguishable. So indistinguishability is not transitive.

Premise 2 is a truth of logic. Two things are qualitatively identical just in case they share all their qualitative properties.7 If x and y possess the same set of qualitative properties, and y and z possess the same set of qualitative properties, then x and z must possess that same set of qualitative properties. It does not matter whether we are talking about mental or physical phenomena. No insight about the nature of the mind can enable minds to defy the laws of logic. Whatever else is true of mental states, they cannot both have and lack a property, or neither have nor fail to have a property, or do anything else that is self-contradictory.

Finally, conclusion 3 logically follows from premises 1 and 2. Premise 1 entails that there can be an x, y, and z such that x and y are indistinguishable, y and z are indistinguishable, and yet x and z are distinguishable. If indistinguishability entailed qualitative identity, then x and y and y and z would have to be qualitatively identical. Given premise 2, x and z would then have to be qualitatively identical. But since x and z are distinguishable, they cannot be qualitatively identical.
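The structure of this argument can be illustrated with a toy numeric model (the intensities and the detection threshold below are my own illustrative choices, not values from the text): model intensities as numbers, "indistinguishable" as differing by less than a threshold, and qualitative identity as plain equality.

```python
THRESHOLD = 1.0  # assumed smallest introspectively detectable difference

def indistinguishable(x, y):
    """Two intensities cannot be told apart if they differ by less than THRESHOLD."""
    return abs(x - y) < THRESHOLD

a, b, c = 0.0, 0.6, 1.2  # three toy pain intensities

# Premise 1: indistinguishability is not transitive.
assert indistinguishable(a, b) and indistinguishable(b, c)
assert not indistinguishable(a, c)

# Premise 2: identity (here, numeric equality) is transitive; and, as the
# conclusion requires, indistinguishable pairs need not be identical.
assert a != b
```

Any "difference smaller than a threshold" relation behaves this way, which is why premise 1 is uncontroversial while premise 2 is a truth of logic.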

So the original description of the self-torturer scenario just entails that there can be undetectable but real differences in one’s experiences.

Note that we are not here positing unconscious pains. I am not saying that turning up the dial on the device by one setting causes one to have an unconscious pain added onto the conscious pain one already had. There is only ever one pain, which (at least past a certain point) is conscious, and that pain becomes slightly more intense when one turns up the dial – but so slightly that the subject cannot know, by introspection alone, that the pain has intensified.

4.3.2 Indeterminacy

How would Quinn respond to my reasoning? He would object to my assumption that there is such a thing as the precise qualitative character of one’s experience: “But the measure of the self-torturer’s discomfort,” says Quinn, “is indeterminate. There is no fact of the matter about exactly how bad he feels at any setting.”8 (Quinn does not elaborate; that quotation is the entire discussion of indeterminacy.)

I think this is an incoherent view. If you think Quinn has a coherent view here, it may be because you think he is saying one of the following things that he is not, in fact, saying:

  • He is not positing mere semantic indeterminacy. He is not saying merely that it is indeterminate whether some word applies to some object. He is proposing that there are pains that fail to have any specific intensity. (Perhaps he would say this is true of all pains, since every pain could be part of a series analogous to the pains in the self-torturer case.) In this way, the case is not like cases of vagueness.

  • He is not saying that the notion of intensity simply fails to apply to pains. The pain the self-torturer experiences at setting 1,000 is definitely more intense than the pain he experiences at setting 100, and Quinn would accept this. His view would have to be that pains have intensity, but they do not have any specific intensity.

  • He is not merely saying that pain intensities only come in a limited number of possible values rather than infinitely many values. For that would not avoid the transitivity argument. For instance, suppose that there are only three intensities of pain: 1, 2, and 3. Still, the relation “has the same intensity as” would be transitive.

  • He could not be proposing that pain intensity should be modeled by a range of numbers, rather than a specific number, for that view, again, fails to avoid the transitivity argument. If x and y have the same range of numbers associated with them, and y and z likewise have the same range, then x and z must have the same range. Thus, “has the same intensity as” would still be transitive.9

  • He is not merely saying that pain intensities are qualitative properties rather than quantities. Suppose there is a series of qualitative properties, {q1, q2, . . .}, which are the possible intensities of a pain. Still, “same intensity as” would be transitive. If x and y have the same one of those properties, and y and z also have the same one, then x and z must have the same one. It doesn’t matter that they aren’t quantitative. What exactly does matter? Nothing except that there are intensities, and pains have them.

So what could Quinn be saying? I think he is saying that for any given pain, there are certain intensities such that it is neither true nor false that the pain has them. Of the possible intensities, there is no specific one that the pain has, because there is a range for which the pain neither has them nor fails to have them.

This is a contradiction. To say that x neither has nor fails to have q, where q is some property, is to say: it’s not the case that x has q, and it is also not the case that x doesn’t have q. This is an explicit contradiction; it is of the form “~A & ~~A”. If one grants that a pain has intensity, then to say “of the possible intensities, there is no specific one that the pain has” is to say that none of the possibilities is realized. This is, by definition, impossible.

Similarly, to say that it is neither true nor false that x has q is to say that “x has q” isn’t true and “x has q” isn’t false. But “x has q” is true if and only if x has q, and “x has q” is false if and only if x doesn’t have q. So, for it to be neither true nor false requires that x neither has nor doesn’t have q. Which, we have already remarked, is a contradiction.

4.3.3 In Defense of an Optimal Setting

How bad is the undetectable increment in pain that results from turning the dial on the torture device up by one setting?

One might be tempted to say that, since one cannot detect the change, it is not bad at all. But this would be wrong, for reasons analogous to the error just diagnosed above. Let “Bn” denote the degree of badness of the experience one has when the device is at setting n. If in general, increasing from setting n to setting n+1 does not make things worse, then Bn=Bn+1. In that case, B0 = B1 = B2 = . . . = B1000. But obviously B0 does not equal B1000. So it cannot be that turning up the dial never makes things worse. (This argument assumes that “is worse” means “has a higher degree of badness”.)

It is logically coherent to hold that turning up the dial only sometimes makes things worse. But there is no reason to believe this. The self-torturer has no reason to think that any particular turning of the dial is worse than any other. Therefore, he should assign the same expected disvalue to each turning of the dial. Thus, for each n, the self-torturer should assume that turning up the dial from n to n+1 is one thousandth as bad (in terms of pain) as turning the dial from 0 to 1000. That is, the expected value of turning the dial up one setting is (0.001)(B1000). (Note: if you have some reason for thinking that some increments are worse than others – e.g., perhaps the early increments are worse than the later ones – this would complicate the reasoning to follow, but the important conclusion will remain, namely, that there is an optimal stopping point for the self-torturer.)

What about the value of the $10,000 that the self-torturer gets paid each time he turns the dial up? Unlike pleasure and pain, money has diminishing marginal value. This is a fancy way of saying: the more money you already have, the less an additional $10,000 is worth to you. If you have no money, then getting $10,000 is terrific. If you are already a millionaire, then getting $10,000 makes much less difference to your well-being than it would for a person who starts with no money.

On any given day, the self-torturer should decide what to do based on his own self-interest: he should turn the dial up if and only if the marginal value of $10,000 (the increase to his well-being that ten thousand additional dollars would bring about) is greater than the disvalue of the increment in pain, which, as noted above, is (0.001)(B1000). The marginal value of $10,000 decreases each day as the self-torturer grows richer, until eventually it is less than (0.001)(B1000). At that point, the self-torturer should stop turning the dial. That is the optimal setting (see figure 4.1).
Fig. 4.1

Optimal stopping point for the self-torturer

If the self-torturer follows this approach, how much benefit will he derive? The benefit he gets from the money he makes is represented in figure 4.1 by the area under the “Money” curve, between the y-axis and the stopping point.10 The harm he suffers due to pain equals the area under the “Pain” curve between the y-axis and the stopping point. His net benefit is the area between the two curves, that is, the shaded region in figure 4.1. This is the maximum obtainable benefit.
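The stopping rule just described can be put into a numeric sketch. The logarithmic utility-of-wealth function and all specific numbers below are my own illustrative assumptions, not values from the text; the sketch simply scans the settings for the first point at which the marginal value of another $10,000 falls to or below the fixed expected disvalue of one pain increment, (0.001)(B1000).

```python
import math

# Illustrative model: all numbers below are assumed for the sketch.
B1000 = 1000.0                  # total disvalue of the pain at setting 1000
PAIN_PER_STEP = 0.001 * B1000   # expected disvalue of one dial increment

def money_utility(n):
    """Assumed wellbeing from n payments of $10,000 (diminishing returns)."""
    return 40.0 * math.log1p(n)

def optimal_setting():
    """First setting where another $10,000 no longer outweighs the pain."""
    for n in range(1000):
        marginal_money = money_utility(n + 1) - money_utility(n)
        if marginal_money <= PAIN_PER_STEP:
            return n            # stop here; further turns make things worse
    return 1000

print(optimal_setting())        # 39 under these assumed numbers
```

A steeper or flatter utility function moves the stopping point; the point of the sketch is only that, given diminishing marginal value of money and a constant expected disvalue per increment, the crossing point arrives well before setting 1,000.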

How do I know that the correct graph looks like figure 4.1, rather than, say, figure 4.2 or 4.3? In figure 4.2, pain has constant marginal disvalue, and money has diminishing marginal value, but the value of an extra $10,000 always remains greater than the disvalue of an extra increment of pain. In figure 4.3, money has diminishing marginal value and pain also has diminishing marginal disvalue (perhaps as you get used to it, further increments of pain cease to be as bad as they were at the beginning?), though the average disvalue of an increment of pain is still (0.001)(B1000). Again, the marginal value of the money always remains above that of the pain. If the correct graph is like figure 4.2 or 4.3, then the self-torturer would have to keep turning up the dial, all the way to the end. In this case, my proposed solution fails.
Fig. 4.2

A case with constant disvalue of pain and no optimal stopping point

Fig. 4.3

A case with diminishing disvalue of pain and no optimal stopping point

Here is how I know that neither figure 4.2 nor figure 4.3 is correct: because it follows from the initial description of the scenario that those graphs are not correct. By stipulation, the harm of going all the way to setting 1000 is greater than the benefit of $10 million. So the area under the Money curve, between 0 and 1000 on the x-axis, has to be less than the area under the Pain curve between those same limits. Furthermore, assuming that it is at least beneficial to turn the dial the first time, the marginal value of money has to start out greater than the marginal disvalue of pain; that is, the Money curve must start out higher than the Pain curve. The only way to draw the graph so that these things are true is to have the Money curve decline until it crosses the Pain curve, and continue going down after that, as in figure 4.1. That gives us an optimal stopping point, namely, the point where those two curves intersect. Figures 4.2 and 4.3 are both wrong because they both portray the total value of the money, as one goes all the way to setting 1000, as being greater than the total disvalue of the pain.

Admittedly, it may be difficult to know precisely where the optimal stopping point is, since the self-torturer may have trouble quantifying either the badness of pain or the value of money. What he should do, therefore, is simply to make his best estimate of where the optimal stopping point is. However rough that estimate may be, it nevertheless provides sufficient means for avoiding disaster (namely, the result where he winds up at setting 1,000), for certainly his best guess as to the optimal stopping point will be something much less than setting 1,000.

Let’s suppose that his best guess as to the optimal stopping point is setting 200. (He need not believe outright that 200 is the optimum; he need only be equally uncertain as to whether the optimum is above 200 or below 200.) He turns the dial up every day for 200 days. What then happens on day 201, when he is offered the chance to turn the dial up one more time, in exchange for another $10,000?

He should decline. This would not be, as Quinn maintains, because of some sort of prudential duty to be faithful to his past intentions. It would be because at this point he has rational grounds for thinking that turning the dial up again would render him worse off, or at least no better off. Since 200 was his best estimate of the optimal stopping point, and he has obtained no new information, he (still) rationally believes that the disvalue of the additional pain he would receive from turning the dial up again would be at least as great as the value of the additional money he would receive. Or, more precisely, he regards it as at least as likely that the pain disvalue would be greater than the monetary value as that it would be less. Or, even more precisely, the expected disvalue of the pain, based on his subjective probability distribution, is at least as great as the expected value of the money based on that same distribution.

4.3.4 Detectable and Undetectable Values

Now it may seem that this neat solution fails to address a major part of the reasoning that got the initial paradox going: $10,000 is a significant and very noticeable benefit, and it remains such even as one’s wealth expands, all the way up to $10 million. Even if we allow that there can be an undetectable increase in pain, and even if we allow that an undetectable pain increment has some nonzero disvalue, surely that disvalue must be very small. How could this undetectable increase in pain outweigh a very significant and noticeable benefit, such as $10,000?

I have two replies: first, I am not at all convinced that the benefit of $10,000 remains significant or even noticeable as one’s wealth expands up to $10 million. Of course, one can easily notice such an increase in one’s wealth, since one can look, say, at one’s bank account balance. But noticing the benefit produced by that wealth is another matter. It is not obvious that a person with $5,010,000 would be happier than a person with only $5,000,000 (what would you buy if you had $5,010,000 that you would not buy if you only had $5,000,000?) – or that one could introspectively notice the difference in happiness if indeed there would be one.11

Second, there are in fact perfectly understandable reasons why a larger quantity might be less easily detectable than a smaller quantity. The human ability to detect a quantity is not always proportioned to the magnitude of the quantity. Imagine a very thin but very long string, one micron thick but a thousand light years long. And compare this object to a rubber ball one centimeter in radius. The string would be invisible to the human eye, whereas the ball would be very easily visible. Nevertheless, the string would be trillions of times greater in volume than the ball.

Or imagine a swimming pool in which one teaspoon of salt is dissolved. And compare this to a cup of water in which just a tenth of a teaspoon of salt is dissolved. The salt in the swimming pool would be undetectable to the human tongue, while the salt in the cup of water would be very easily detectable, even though there would be ten times as much total salt in the former as in the latter.

These examples illustrate that large quantities can be rendered undetectable by being spread very thinly, where a smaller but more concentrated quantity would be detectable. In the case of the self-torturer, the harm resulting from turning up the dial one notch will be spread out over the rest of the individual’s life, whereas the benefit of $10,000 can be temporally concentrated: one can buy some particular good, which one can enjoy in a period much shorter than a lifetime. Thus, it is understandable that the harm might be undetectable but the benefit detectable, even if the harm is the greater quantity.
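The arithmetic behind both comparisons is easy to verify. In this quick check, I read the string’s thickness as a 1-micron diameter, and I supply an assumed pool volume, since the text does not specify one.

```python
import math

# String vs. ball: a 1-micron-diameter cylinder, 1,000 light years long,
# against a sphere of 1 cm radius (all lengths in meters).
LIGHT_YEAR = 9.461e15
string_volume = math.pi * (0.5e-6) ** 2 * (1000 * LIGHT_YEAR)
ball_volume = (4 / 3) * math.pi * 0.01 ** 3
assert 1e12 < string_volume / ball_volume < 1e13   # trillions of times larger

# Salt: ten times as much salt in the pool, but a far lower concentration.
POOL_LITERS, CUP_LITERS = 2.5e5, 0.25              # assumed volumes
pool_salt, cup_salt = 1.0, 0.1                     # teaspoons of salt
assert pool_salt / cup_salt == 10.0
assert pool_salt / POOL_LITERS < (cup_salt / CUP_LITERS) / 1000
```

In both cases the larger quantity is spread over so much more volume that its local magnitude, which is what the senses respond to, is smaller.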

4.3.5 Advantages of This Solution

This treatment of the self-torturer puzzle is better than the treatment proposed by Quinn, because my solution lets us keep three seemingly self-evident principles that Quinn gives up.

  • My treatment, though faithful to the factual (non-evaluative) stipulations of the scenario, does not involve any intransitivity, either of the better-than relation, or of rational preferences. My view explains why it might be understandable for the self-torturer to have intransitive preferences, due to the undetectability of certain harms, though this would be a mistake on his part. This is a theoretical advantage, since the transitivity of “better than” and “rationally preferred to” is among the most intuitive and widely accepted normative principles philosophers have ever articulated.

  • My treatment of the case does not violate classical logic. In particular, it does not require anything to be metaphysically indeterminate.

  • My treatment of the case maintains standard principles of decision theory, including the axiom that, when given a choice between two options, one of which is rationally preferred to the other, one ought to take the preferred option. At the same time, there is no need to deny the principle that one ought to follow through on any rational plan that correctly anticipated all relevant facts. The rational plan for the self-torturer to adopt at the start is to proceed to the device setting that he estimates to be the optimal point, and then stop. When he reaches that point, it will be rational for him to follow through on his plan by declining all further offers to raise the setting on the device, since he will rationally expect any further increases to make him worse off.

The only objection to this approach appears to be that it implies that there can be unnoticeable facts about the qualitative character of conscious mental states.12 But this does not strike me as a particularly problematic implication. It would indeed be odd, even contradictory, to maintain that there could be a conscious mental state that was introspectively unnoticeable. But the solution to the self-torturer puzzle posits no such thing. It posits only that there could be an unnoticeable fact about the relationship between two conscious mental states, namely, that the one was very slightly more intense than the other. This is not contradictory or even especially implausible. We should not abandon classical logic or standard decision theory to avoid this.


  1. The paradox discussed in this chapter derives from Warren Quinn (1990). I have altered the scenario in minor ways here.

  2. Quinn (1990, pp. 79–80) assumes that “better-than” is transitive, so he says only that the self-torturer has intransitive preferences. Andreou (2006, 2016) also accepts the rationality of holding intransitive preferences in this case.

  3. Quinn 1990, p. 79.

  4. Andreou (2016) takes a similar strategy, which depends upon dividing up options into evaluatively described, qualitative categories, such as “terrible”, “poor”, “acceptable”, and the like.

  5. Quinn 1990, p. 87.

  6. More precisely, Quinn (1990, p. 85) says that we should reject the “Principle of Strategic Readjustment”, which holds that “Strategies continue to have authority only if they continue to offer him what he prefers overall. Otherwise, they should be changed.” The Principle of Strategic Readjustment, so described, is a special case of Preference Consistency.

  7. I say “qualitative properties” to exclude such things as haecceities or “the property of being this particular individual”.

  8. Quinn 1990, p. 81.

  9. What if we say that x and y “have the same intensity” provided that they have overlapping ranges? Then “having the same intensity” will not entail being equally bad: one pain might be worse than another in virtue of having a higher upper boundary and/or a higher lower boundary, even though the two pains have overlapping ranges. I leave aside the question of what these ranges might mean, since the proposal in any case fails to avoid my argument for the existence of undetectable changes in the badness of a pain.

  10. Why is this true? The marginal value of money, by definition, is the rate at which wellbeing increases with increases in one’s wealth – in technical terms, the derivative of wellbeing with respect to wealth. The integral of this, say, from 0 to n, is the total increase in wellbeing obtained as one goes from 0 to n, which is the area under the marginal value curve. The same applies to the marginal disvalue of pain. Here, as an approximation, I treat the marginal value curve as continuous.

  11. According to Kahneman and Deaton (2010), money income increases one’s happiness, up to about $75,000 per year, after which it makes no discernible difference.

  12. For a general argument that almost any psychological state can be undetectable (that is, there can be a case in which one cannot know whether one is in the state), see Williamson 2000, ch. 4.


  1. Andreou, Chrisoula. 2006. “Environmental Damage and the Puzzle of the Self-Torturer”, Philosophy and Public Affairs 34: 95–108.
  2. Andreou, Chrisoula. 2016. “The Real Puzzle of the Self-Torturer: Uncovering a New Dimension of Instrumental Rationality”, Canadian Journal of Philosophy 45: 562–75.
  3. Kahneman, Daniel and Angus Deaton. 2010. “High Income Improves Evaluation of Life but Not Emotional Well-being”, Proceedings of the National Academy of Sciences 107: 16489–16493.
  4. Quinn, Warren S. 1990. “The Puzzle of the Self-Torturer”, Philosophical Studies 59: 79–90.
  5. Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.

Copyright information

© The Author(s) 2018

Michael Huemer, Philosophy Department, University of Colorado Boulder, Boulder, USA