Decision Making, Bias
Keywords: Choice Behavior; Loss Aversion; Framing Effect; Reward Magnitude; Loss Frame
Biases in decision making are patterns of choice behavior that violate the prescriptions of normative choice theory.
According to the axioms frequently used in economic theory, a rational decision maker is an agent who makes consistent choices over time, and therefore exhibits stable preferences (Samuelson 1938; Friedman and Savage 1948, 1952). Importantly, the rationality assumption does not prescribe that the decision maker should always optimize her objective outcome, e.g., her monetary payoff, but it does prescribe that an individual should behave consistently, given the assumption of stable preferences. This rationality assumption has been applied to both human and animal behavior (Kacelnik 2006). However, both in the human and animal literature, numerous examples of violations of economic rationality, i.e., biases in decision making, can be found (Kalenscher and Van Wingerden 2011).
Present Bias and Time-Inconsistent Preferences
A ubiquitous finding in decision making studies is that decision makers prefer immediate or briefly delayed outcomes over longer delayed ones (e.g., Ainslie 1974; Green et al. 1994; see Kalenscher and Pennartz 2008 for a review), even if the delayed outcomes clearly constitute the better long-term choice (e.g., saving for a pension plan vs. drinking away your money today). This myopic choice behavior is usually termed ‘impulsivity’. Impulsive, shortsighted intertemporal decision making violates several assumptions made in rational choice theories, such as Discounted Utility Theory (DUT). DUT is a normative account of decision making over time, which predicts a rational decision maker’s preference for immediate over delayed reward (see “Choice Behavior”). DUT states that agents should maximize discounted utility when making intertemporal decisions, where discounted utility reflects the reduced current value of options that are deferred into the future. For example, when choosing between a smaller, sooner (SS) and a larger, later (LL) outcome, an agent’s discount function, combined with the delays and magnitudes involved, determines whether she prefers SS over LL or vice versa. Importantly, for a rational decision maker, such preferences are assumed to be stable. Specifically, one of the predictions of DUT is that decision makers discount future rewards at a constant rate. This means that, if a decision maker reveals a preference for, say, an SS outcome delivered after delay t1 over an LL outcome delivered after delay t2, with t1 < t2, she should still prefer the SS outcome if both outcomes are deferred into the future by the same time interval Δt, because both outcomes would be discounted at the same rate.
However, in contrast to the assumption of stable intertemporal preferences, in practice, most human as well as animal decision makers exhibit preference reversals, typically changing their choice behavior from preferring the SS to preferring the LL outcome when a common front-end delay (Δt) is added (Rachlin and Green 1972; Ainslie 1974; Bateson and Kacelnik 1996; Isles et al. 2003; Kalenscher et al. 2005; Kalenscher and Pennartz 2008; Louie and Glimcher 2010; Kalenscher and van Wingerden 2011). Assuming that the overt choice behavior reflects an internal ranking of discounted utility, this implies that the time-discounted utility attached to the SS option has sunk below that of the LL, invalidating constant discounting.
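The contrast between constant-rate discounting and the observed preference reversal can be sketched numerically. The snippet below compares exponential (DUT-style) discounting with a hyperbolic discount function of the kind often used to describe behavior; the reward magnitudes, delays, and discount parameters are illustrative assumptions, not fitted values.

```python
import math

def exponential(value, delay, rate=0.1):
    """Constant-rate (DUT-style) discounting: the SS/LL ranking never reverses."""
    return value * math.exp(-rate * delay)

def hyperbolic(value, delay, k=0.5):
    """Hyperbolic discounting: V / (1 + k*t), disproportionately steep at short delays."""
    return value / (1 + k * delay)

ss = (4.0, 1.0)   # smaller-sooner reward: 4 units after 1 s
ll = (8.0, 5.0)   # larger-later reward: 8 units after 5 s

for dt in (0.0, 20.0):                      # common front-end delay Δt
    for disc in (exponential, hyperbolic):
        v_ss = disc(ss[0], ss[1] + dt)
        v_ll = disc(ll[0], ll[1] + dt)
        print(f"Δt={dt:4.1f} {disc.__name__:11s} prefers",
              "SS" if v_ss > v_ll else "LL")
```

With these illustrative parameters, the hyperbolic chooser prefers SS when both rewards are imminent but switches to LL once the common front-end delay of 20 s is added, whereas the exponential chooser's ranking is unaffected by Δt.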
How can such time-inconsistent behavior be explained? Two-system models of decision making posit that the utility premium put on immediate over delayed outcomes can be traced to different (competing) neural systems that process immediate vs. delayed outcomes. McClure et al. (2004, 2007) and others have shown that ‘limbic’ regions such as the ventromedial prefrontal cortex and ventral striatum (Hariri et al. 2006) are preferentially activated by immediate outcomes, while other prefrontal structures such as lateral prefrontal cortex are engaged by both immediate and delayed outcomes. The inverse of impulsivity, self-control, can be invoked to combat present-bias in decision making. Hare and colleagues (2009, 2011) have shown that activity in dorsolateral prefrontal cortex is associated with self control, and that this activity can modulate value signals in ventromedial prefrontal cortex, downregulating the value of liked, but unwanted outcomes when self control succeeds.
Certainty Effect in Decision Making
In decision making under risk, the normatively flavored Expected Utility Theory framework (EUT, see “Choice Behavior”) posits that monetary outcomes are nonlinearly transformed to subjective value or utility. This transformation can explain the commonly found risk aversion when choosing between a certain and a probabilistic outcome. EUT assumes that, while gains are non-linearly transformed, probabilities are not, so that each option’s expected utility is the probability-weighted sum of the utilities of its possible outcomes. However, behavioral data show that decision makers overvalue certain outcomes. The Allais paradox supports this notion by showing that adding a probabilistic outcome to two gambles in a choice set, so that one of the options now becomes certain instead of probabilistic, induces decision makers to reverse their preferences (Allais 1953; Kahneman and Tversky 1979; Hsu et al. 2009). So, suppose that the choices are a large gain G1 with a low probability P1 or a small gain G2 with high probability P2, and the subject prefers the first, riskier option. According to the independence axiom of EUT, the preference should be the same when the same probabilistic outcome is added to both options. So, if a gain G2 with probability (1 − P2) is now added to both choices, both expected outcomes increase by the same amount, but the second one becomes certain, i.e., G2 with a probability of 1, and in this case decision makers typically prefer the second option, reversing their original preference. In other words, decision makers have a strong preference for outcomes that are certain over outcomes that are probabilistic, even if the expected utility ranking is maintained. A recent brain imaging study (Hsu et al. 2009) building on prospect theory (Kahneman and Tversky 1979; Tversky and Kahneman 1992) indicates that neural activity in the striatum when evaluating probabilistic outcomes is better explained by a non-linear (reverse-S shaped) function than a strictly linear function. In animal studies, Allais-like certainty effects have been observed in rats (MacDonald et al. 1991) and honeybees (Shafir et al. 2008).
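The Allais construction can be made concrete with a small sketch. Below, linear probability weighting (EUT) ranks both choice pairs consistently, whereas a reverse-S-shaped weighting function of the form proposed by Tversky and Kahneman (1992), applied rank-dependently, reproduces the observed reversal. The utility curvature, weighting parameter, and outcome values (in millions) are illustrative assumptions, not fitted estimates.

```python
def utility(x, alpha=0.15):
    """Concave utility over large (million-scale) outcomes; alpha is illustrative."""
    return x ** alpha

def w(p, gamma=0.61):
    """Reverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

linear = lambda p: p            # EUT case: probabilities enter linearly

def value(gamble, weight):
    """Rank-dependent value: outcomes sorted best-first, cumulative decision weights."""
    ranked = sorted(gamble.items(), reverse=True)   # (outcome, prob), best first
    total, cum = 0.0, 0.0
    for outcome, prob in ranked:
        total += (weight(cum + prob) - weight(cum)) * utility(outcome)
        cum += prob
    return total

# Pair 1: certain 1  vs. {5 @ .10, 1 @ .89, 0 @ .01}
# Pair 2: 1 @ .11    vs. 5 @ .10  (same gambles minus a common "1 @ .89" component)
pairs = [({1: 1.0}, {5: 0.10, 1: 0.89, 0: 0.01}),
         ({1: 0.11, 0: 0.89}, {5: 0.10, 0: 0.90})]

for safe, risky in pairs:
    print("EUT prefers risky:", value(risky, linear) > value(safe, linear),
          "| weighted prefers risky:", value(risky, w) > value(safe, w))
```

With these parameters, the linear weighting prefers the risky option in both pairs (a consistent ranking), while the non-linear weighting switches to the certain option exactly when one option becomes certain, mirroring the paradox.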
Sign/Reflection Effects in Decision Making
Economic theory usually assumes a monotonically increasing utility function, linking objective quantities of a good to subjective value, as the basis for preference judgments (Samuelson 1937, 1938; von Neumann and Morgenstern 1944). When outcomes are not certain, EUT prescribes that the expected utility of a choice be calculated simply by weighting the utilities of the possible outcomes of a decision by their probabilities and summing across these outcomes (see “Choice Behavior”). According to EUT, a decision maker should base her choice on the final state of wealth, taking into account all (discounted) utilities of all outcomes, and it should not matter whether changes in good quantities are accounted as gains or losses. However, humans seem to treat gains differently from losses. Because of the concave, decelerating nature of the utility function, humans tend to be risk-averse when choosing between gambles over gains. However, when the gambles are presented as potential losses rather than gains, risk attitudes often change from risk-averse to risk-seeking (Kahneman and Tversky 1979, 1984). This is called the reflection effect: risk attitudes reverse when the prospects are mirrored around zero.
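The reflection effect follows directly from a value function that is concave over gains and convex over losses, as in prospect theory. The sketch below uses a power value function with an illustrative exponent; the specific amounts are arbitrary examples with equal expected value.

```python
def v(x, alpha=0.88):
    """Prospect-theory-style value: concave for gains, convex (mirrored) for losses."""
    return x ** alpha if x >= 0 else -((-x) ** alpha)

# Gains: a certain 50 vs. a 50/50 gamble on 100 (equal expected value)
sure_gain  = v(50)
risky_gain = 0.5 * v(100) + 0.5 * v(0)

# Losses: the same prospects reflected around zero
sure_loss  = v(-50)
risky_loss = 0.5 * v(-100) + 0.5 * v(0)

print("gains:  prefer", "sure option" if sure_gain > risky_gain else "gamble")
print("losses: prefer", "sure option" if sure_loss > risky_loss else "gamble")
```

Because v is concave over gains, the certain gain outvalues the gamble (risk aversion); reflecting the same prospects into the loss domain makes the gamble the lesser evil (risk seeking), with no change to the objective amounts.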
Further indications that losses are treated differently from gains come from research on loss aversion (Kahneman and Tversky 1984; Tversky and Kahneman 1992). The idea that losses loom larger than gains is illustrated by the finding that most participants reject a 50/50 gamble with equal-sized gain and loss, such as a fair coin toss; in fact, the gain needs to be about twice as large as the potential loss to make the gamble acceptable. In the brain, several areas including the ventral striatum (VStr) and ventromedial prefrontal cortex (vmPFC) show a pattern of ‘neural’ loss aversion: when mixed gambles of potential losses and gains are evaluated, the slope of BOLD deactivations to losses is steeper than that of activations to gains (Tom et al. 2007). Loss aversion has also been observed in animals (Chen et al. 2006; but see Silberberg et al. 2008).
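The "gains must roughly double the loss" regularity can be expressed with a single loss-aversion coefficient. The value 2.25 below is in the range reported by Tversky and Kahneman (1992) but serves here only as an illustrative assumption, with linear utility for simplicity.

```python
LAMBDA = 2.25   # illustrative loss-aversion coefficient: losses weigh ~2x gains

def gamble_value(gain, loss, lam=LAMBDA):
    """Subjective value of a 50/50 gamble: win `gain` or lose `loss` (linear utility)."""
    return 0.5 * gain - 0.5 * lam * loss

print(gamble_value(100, 100))   # symmetric coin toss: negative value, rejected
print(gamble_value(250, 100))   # gain well over twice the loss: positive, accepted
```

A symmetric gamble has negative subjective value despite its zero expected monetary value; only when the gain exceeds roughly λ times the loss does the gamble become attractive.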
Framing Effect in Decision Making
The differential treatment of losses and gains is even more strikingly evident in the framing effect. Kahneman and Tversky (1984) showed that when a decision has to be made between a certain and a probabilistic outcome on human lives saved (gain frame) versus lost (loss frame), with the exact same numerical outcomes, the gain frame induces risk aversion whereas the loss frame induces risk seeking in participants. This shows that the wording used to present a decision problem is sufficient to make subjects treat identical outcomes as either gains or losses, shifting their risk attitudes accordingly. There is some evidence that decisions in line with the framing effect, as opposed to decisions that oppose the choice bias, elicit more activity in the amygdala (De Martino et al. 2006; but see Talmi et al. 2010), and that resilience against the framing bias is associated with activity in orbitofrontal cortex (OFC). In animals, starlings show behavior in line with a framing effect: when the baseline expectation of a food outcome is low (gain frame), the birds tend towards risk-averse behavior (selecting the fixed outcome), whereas when it is high (loss frame), they tend towards risk-seeking behavior (selecting the probabilistic outcome) (Marsh and Kacelnik 2002).
Non-stationary Preferences: Intransitivity
A rational decision maker should exhibit stable preferences. Formally, if a decision maker prefers A over B and B over C, she should also prefer A over C (transitivity of preferences). However, human subjects often show context-dependent or intransitive choices (Tversky 1969; Kalenscher and Pennartz 2010; Kalenscher et al. 2010). Intransitive preferences seem to occur readily when choice options differ on several value-related attributes, such as probability and reward magnitude, and could stem from an attribute-wise comparison instead of an alternative-wise integration of choice attributes. For instance, consistent with the additive difference model put forward by Tversky (1969), Kalenscher et al. (2010) showed that participants choosing between two gambles tend to weigh differences in probabilities and reward magnitudes differentially. So, a large difference in probabilities (with a moderate difference in reward magnitude) resulted in a strong weighting of this attribute, and correspondingly risk-averse choices, whereas a large difference in reward magnitude (with a moderate difference in probability) resulted in a strong weighting of magnitude, together with risk-seeking choices. This context-dependent shift from risk aversion to risk seeking, depending on the difference between the gambles’ attributes, created the potential for circular, intransitive preference structures. The neural data of Kalenscher et al. support separate tracking of the context-dependent weight of outcome probabilities (posterior cingulate cortex) and magnitudes (posterior insula).
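A minimal sketch of how attribute-wise comparison can produce a cycle, in the spirit of Tversky's (1969) additive difference model: probability differences below a discrimination threshold are ignored, so choices between neighboring gambles track reward magnitude, while a large probability difference dominates. The gamble values, threshold, and attribute weights are illustrative assumptions.

```python
THRESH = 0.05   # probability differences below this are treated as negligible
W_PROB = 10.0   # weight on the (suprathreshold) probability difference
W_MAG  = 1.0    # weight on the reward-magnitude difference

def prefers(a, b):
    """True if gamble a = (probability, magnitude) beats b under attribute-wise comparison."""
    dp, dm = a[0] - b[0], a[1] - b[1]
    score = W_PROB * (dp if abs(dp) >= THRESH else 0.0) + W_MAG * dm
    return score > 0

g1, g2, g3 = (0.42, 4.0), (0.38, 4.3), (0.34, 4.6)

# Neighbors differ by 0.04 in probability (ignored), so magnitude decides;
# g1 vs. g3 differ by 0.08, so probability dominates: a circular ranking.
print(prefers(g2, g1), prefers(g3, g2), prefers(g1, g3))
```

All three comparisons come out True: g2 ≻ g1 and g3 ≻ g2 (magnitude decides between neighbors), yet g1 ≻ g3 (the now-discriminable probability difference decides), so no consistent utility ranking of the three gambles exists.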
Sunk Cost Effect in Decision Making
The sunk cost effect is the tendency to persist in an endeavor once an investment of effort, time, or money has been made (Thaler 1980). A famous example entails the continuation of the Concorde supersonic airplane project, even when profitability analyses indicated that the project would never make money (Arkes and Ayton 1999). Besides policy makers, participants in the lab also exhibit the sunk cost effect. Navarro and Fantino (2005) and others have shown that human participants persist with a chosen option, waiting for a payout beyond the point where switching, abandoning, or resetting would be optimal (Kogut 1990). In animals, sunk costs have been studied in pigeons. Like humans, pigeons generally persisted longer on a probabilistic choice than prescribed by optimal choice, and this persistence was modulated by the size of the sunk costs (Navarro and Fantino 2005; Pattison et al. 2012).
Prosocial Biases in Decision Making
One of the tenets of microeconomic theory is that decisions should be made from a purely self-interested perspective, disregarding other people’s outcomes. Obviously, this notion does not conform to reality. Human decision makers exhibit consistent prosocial biases that become manifest in costly helping behavior, e.g., when an agent reduces her own outcome in favor of another’s. The most straightforward example is the Dictator Game (DG; Fehr and Fischbacher 2003; Engel 2011), in which a participant can split a sum of money between herself and a powerless partner. The partner has to accept and cannot retaliate. From a game-theoretic perspective, the dictator should keep all the money for herself. This is not the case: people across all cultures and socio-economic groups consistently give sums larger than zero, even in anonymous, one-shot games where reputation building or other strategic motives can be ruled out (Fehr and Fischbacher 2003; Engel 2011). This suggests that decision makers are not the selfish own-payoff maximizers postulated by economic theory, but take the well-being of others into consideration.
- Green L, Fisher EB, Perlow S, Sherman L (1981) Preference reversal and self-control: choice as a function of reward amount and delay. Behav Anal Lett 1:43–51
- Kalenscher T, Pennartz C (2010) Do intransitive choices reflect genuinely context-dependent preferences? In: Delgado MR, Phelps E, Robbins T (eds) Attention and performance XIII: decision making. Oxford University Press, Oxford
- Kalenscher T, Tobler PN, Huijbers W, Daselaar SM, Pennartz CMA (2010) Neural signatures of intransitive preferences. Front Hum Neurosci 4:49. doi:10.3389/fnhum.2010.00049
- Stephens DW, Krebs JR (1986) Foraging theory. Monographs in behavior and ecology. Princeton University Press, Princeton
- von Neumann J, Morgenstern O (1944) Theory of games and economic behavior. Princeton University Press, Princeton