Encyclopedia of Computational Neuroscience

Living Edition
| Editors: Dieter Jaeger, Ranu Jung

Decision-Making Tasks

  • Angela J. Yu
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4614-7320-6_314-1




A diverse repertoire of behavioral tasks has been employed to examine the cognitive processes and neural basis underlying decision-making in humans and animals. Some of these have their origins in psychology, others in cognitive neuroscience, yet others in economics. There is also a continual invention of novel or hybrid paradigms, sometimes motivated by deep conceptual questions stimulated by computational modeling of decision-making and sometimes motivated by specific hypotheses related to functions of neuronal systems and brain regions that are suspected of playing an important role in decision-making.

Detailed Description

The area of decision-making is a dynamically evolving, multifaceted area of active research that sits at the interfaces of many areas, among them psychology, neuroscience, economics, finance, political science, engineering, and mathematics. The range of decision-making tasks seen in the literature is extremely diverse, reflecting the interdisciplinary nature of the area and the diverse background of the researchers engaged in it. This entry attempts to summarize and categorize the major classes of decision-making tasks with respect to the formal, mathematically quantifiable properties of the tasks and the cognitive processes that may consequently be entailed. Broadly, decision-making tasks can be grouped into the following categories and subcategories:
  • Decision-making under uncertainty
    • Sensory uncertainty

    • Outcome uncertainty

  • Multidimensional decision-making
    • Target-distractor differentiation

    • Multisensory integration

  • Preference-based decision-making

Decision-Making Under Uncertainty

Many decision-making tasks require the subjects to make choices among alternatives under conditions of uncertainty. Uncertainty can arise from noise related to sensory inputs, ignorance about stimulus or action outcomes, stochasticity or imprecise representation related to (relative) temporal onset of events, or some hybrid combinations of two or more of these factors. The difficulty of the tasks can often be controlled parametrically via the level of uncertainty induced experimentally, and the tasks can therefore be used to probe fundamental limitations and properties of human and animal cognition, as well as characterize individual and group differences.

Sensory Uncertainty

Decision-making under sensory uncertainty, or perceptual decision-making, involves tasks in which the subject views a noisy stimulus and chooses a response consistent with the perceived stimulus. One such widely used task is the random-dot coherent motion task (Newsome and Paré 1988), in which a field of flickering dots is displayed to the subject, a fraction (usually a minority) of which moves coherently in a common direction, while the remainder either move in random directions (Newsome and Paré 1988) or have no discernible direction of motion (Shadlen and Newsome 2001). Based on the stimulus, the subject then chooses the appropriate response to indicate the perceived direction of coherent motion, usually by eye movements in monkeys and by either eye movements or key presses in humans. Typically, there are two possible directions of motion from which the subject must choose, making this a 2-alternative forced choice (2AFC) task; occasionally, the task has been administered with more than two alternatives, in a multiple-alternative forced choice (mAFC) format (Churchland et al. 2008). When the 2AFC version is done in conjunction with recordings of neurons that are sensitive to the direction of motion of the stimulus, the two possible directions of motion are chosen to be aligned with the preferred and anti-preferred directions of the neuron being recorded (Shadlen and Newsome 2001); in the mAFC version, one of the possible motion directions is aligned with the preferred direction of the neuron (Churchland et al. 2008).

There are two major variants of the random-dot coherent motion task: one in which the stimulus is presented for a fixed duration (Newsome and Paré 1988) and one in which the stimulus remains present until the subject makes a perceptual response (Roitman and Shadlen 2002). The latter, known as the reaction time (RT) variant, is the behaviorally more interesting case, as it allows the subject to decide not only which stimulus was seen but also how much sensory data to accumulate before responding. This makes the task a simple yet effective means for probing the speed-accuracy trade-off in perception (Gold and Shadlen 2002). Another property that makes the task well suited for examining the speed-accuracy trade-off is that the sensory stimulus is corrupted by a significant amount of noise that is independent and identically distributed over time, making the informational value of the sensory input constant over time (Bogacz et al. 2006). This property also gives the experimenter an accessible parameter for controlling the information rate (signal-to-noise ratio) of the stimulus: the coherence, or percentage of coherently moving dots.

The mathematically optimal decision policy for choosing between two hypotheses based on independent, identically distributed data, where the observer can choose how much data to observe, is known as the sequential probability ratio test (SPRT). It is the optimal policy for minimizing any cost function that is monotonically related to average reaction time and probability of error (Wald 1947; Wald and Wolfowitz 1948). It states that the observer should accumulate evidence, in the form of Bayesian posterior probability computed via Bayes’ rule or, equivalently, by summing log likelihood ratios, until the total evidence exceeds a confidence bound favoring one or the other stimulus; at that point, the observer should terminate the observation process and choose the more probable alternative. The height of the decision boundary is determined by the relative importance of speed and accuracy in the objective function: greater emphasis on speed leads to lower decision boundaries, and greater emphasis on accuracy to higher ones. In the limit in which many data samples are observed before the decision is made, SPRT converges to the drift-diffusion process first studied in connection with statistical mechanics; it is essentially a bounded linear dynamical system with additive Wiener noise and absorbing boundaries (Laming 1968; Ratcliff and Rouder 1998; Bogacz et al. 2006). Many mathematical properties of the drift-diffusion process are well known, such as the average sample size before termination and the probability of crossing the correct boundary, for different problem parameters and task conditions. It therefore provides a convenient mathematical framework for analyzing SPRT and thus experimental behavior in different task settings.
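As a concrete illustration, the evidence-accumulation process described above can be simulated in a few lines. The drift, noise level, and thresholds below are arbitrary illustrative values, not parameters from any particular study; the simulation merely shows the qualitative speed-accuracy trade-off controlled by the bound height.

```python
import math
import random

def sprt_trial(drift=0.1, threshold=1.0, sigma=1.0, dt=0.01, rng=random):
    """Accumulate the log likelihood ratio x as a drift-diffusion
    process, dx = drift*dt + sigma*sqrt(dt)*noise, until it crosses
    +threshold (choose alternative 1) or -threshold (alternative 2)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 2), t

# Raising the decision bound trades speed for accuracy: higher bounds
# yield slower but more accurate choices.
random.seed(0)
for b in (0.5, 1.0, 2.0):
    trials = [sprt_trial(threshold=b) for _ in range(2000)]
    accuracy = sum(1 for choice, _ in trials if choice == 1) / len(trials)
    mean_rt = sum(rt for _, rt in trials) / len(trials)
    print(f"threshold={b}: accuracy={accuracy:.2f}, mean RT={mean_rt:.2f}")
```

Because the per-step noise is i.i.d., this discrete simulation converges to the continuous drift-diffusion process as `dt` shrinks, mirroring the correspondence between SPRT and the DDM noted above.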

A popular alternative to the 2AFC paradigm is the Go/NoGo (GNG) task, in which one stimulus requires a response (Go) and the other requires the response to be withheld (NoGo). One apparent advantage of the GNG paradigm is that it obviates the need for response selection (Donders 1969), thus helping to minimize any confounding influences of motor planning and execution when the experimental focus is on perceptual and cognitive processing. However, within-subject comparison of reaction time (RT) and error rates (ER) in 2AFC and Go/NoGo variants of the same perceptual decision-making task has found systematic biases between the two (Gomez et al. 2007). Specifically, the go stimulus in the Go/NoGo task elicits shorter RT and more false-alarm responses than in the 2AFC task, when paired with the same opposing stimulus in both paradigms, resulting in a Go bias. This raises the question of whether the two paradigms really probe the same underlying processes.

Existing mechanistic models of these choice tasks, mostly variants of the drift-diffusion model (DDM; Ratcliff and Smith 2004; Gomez et al. 2007) and the related leaky competing accumulator models (Usher and McClelland 2001; Bogacz et al. 2006), capture various aspects of behavioral performance but do not clarify the provenance of the Go bias in GNG. Recently, it has been shown that the Go bias may arise as a strategic adjustment to the implicit asymmetry in the cost structure of the 2AFC and GNG tasks and need not imply any fundamental differences in the sensory and cognitive processes engaged by the two tasks. Specifically, the NoGo response requires waiting until the response deadline, while a Go response immediately terminates the current trial (Shenoy and Yu 2012). A Bayes-risk-minimizing decision policy, which minimizes not only the error rate but also the average decision delay, naturally exhibits the experimentally observed Go bias. The optimal decision policy is formally equivalent to a DDM with a time-varying threshold that initially rises after stimulus onset and collapses again just before the response deadline. The initial rise in the threshold is due to the diminishing temporal advantage of the fast Go response relative to the fixed-delay NoGo response.
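The role of the response deadline can be illustrated with a toy diffusion simulation in which a Go response is emitted as soon as evidence crosses a bound, while NoGo is simply the default response if the deadline arrives first. Note that the fixed bound used here is a simplification of the optimal time-varying threshold described above, and all parameter values are invented for illustration.

```python
import math
import random

def gng_trial(go_stimulus, bound=0.8, deadline=1.5, drift=1.0,
              sigma=1.0, dt=0.01, rng=random):
    """Evidence drifts toward +drift on Go trials and -drift on NoGo
    trials; a Go response fires as soon as evidence crosses +bound,
    and the default NoGo response is emitted if the deadline passes."""
    mu = drift if go_stimulus else -drift
    x, t = 0.0, 0.0
    while t < deadline:
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= bound:
            return "go", t
    return "nogo", deadline

random.seed(0)
go_trials = [gng_trial(True) for _ in range(2000)]
hit_rts = [t for resp, t in go_trials if resp == "go"]
fa_rate = sum(1 for _ in range(2000) if gng_trial(False)[0] == "go") / 2000
print(f"hit rate: {len(hit_rts) / 2000:.2f}, "
      f"mean hit RT: {sum(hit_rts) / len(hit_rts):.2f} s")
print(f"false-alarm rate on NoGo trials: {fa_rate:.2f}")
```

Because Go is the only response that can terminate a trial early, noise alone produces some false-alarm Go responses on NoGo trials — a crude analogue of the Go bias discussed above.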

Outcome Uncertainty

This class of decision-making tasks is designed such that subjects have no uncertainty about the stimulus identity, but rather about the consequences of choosing one alternative over another. To induce such uncertainty, the subjects are not explicitly told about the reinforcement consequences of the different alternatives; instead, they have to learn them over time. The reinforcements are typically in the form of a reward, such as money for human subjects, juice for monkeys, or seeds for birds; occasionally they can also be in the form of a penalty, such as money taken away for human subjects, a foot shock for rats, or an air puff for rabbits. This class of task originated in the study of associative learning, which showed that subjects’ asymptotic choice performance after substantial learning exhibited interesting features. For example, Herrnstein (1961) found that pigeons, when faced with a choice between pecking two different buttons in a Skinner box, did not always choose the one with the higher reward rate, which is mathematically optimal, but rather alternated among the choices stochastically so as to “match” their underlying reward rates. Based on these results, Herrnstein formulated the “matching law” and the related “melioration theory” (Herrnstein 1970), which gave a procedural account of how matching-like behavior can arise from limited working memory. A more recent account, using Bayesian learning theory, shows that matching-like behavior is, at a finer timescale, the consequence of a maximizing choice strategy (always choosing the best option), coupled with a learning procedure that assumes the world to be nonstationary and thus results in internal beliefs that fluctuate with empirical experiences, even those driven by noise in a truly stationary environment (Yu and Cohen 2009; Yu and Huang 2014).
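A caricature of this account is easy to simulate: a strictly greedy (maximizing) chooser whose reward estimates “forget” exponentially, as if the world were nonstationary, ends up distributing its choices stochastically across both options rather than fixating on the better one. The forgetting rule and all parameter values below are invented for illustration and are not the model of Yu and Cohen (2009).

```python
import random

def forgetful_greedy_choices(p=(0.7, 0.3), gamma=0.9, n=20000, rng=random):
    """A strictly greedy chooser with decaying reward estimates: the
    chosen arm's estimate is an exponentially weighted average of its
    rewards, while the unchosen arm's estimate relaxes back toward the
    prior mean of 0.5, as if the world could have changed."""
    est = [0.5, 0.5]
    counts = [0, 0]
    for _ in range(n):
        arm = 0 if est[0] >= est[1] else 1          # maximize, never explore
        reward = 1.0 if rng.random() < p[arm] else 0.0
        est[arm] = gamma * est[arm] + (1 - gamma) * reward
        est[1 - arm] = gamma * est[1 - arm] + (1 - gamma) * 0.5
        counts[arm] += 1
    return [c / n for c in counts]

random.seed(0)
fractions = forgetful_greedy_choices()
# Both arms are sampled, with the richer arm chosen most of the time,
# even though the choice rule itself is purely maximizing.
print(f"choice fractions: {fractions}")
```

The stochastic alternation arises entirely from the fluctuating beliefs, not from any randomness in the choice rule — the qualitative point made by the Bayesian account above.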

In order to induce continual outcome uncertainty, many implementations of this task incorporate unpredictable, unannounced changes in the stimulus-outcome contingencies during the experimental session. A classical example of such a manipulation is reversal learning, in which the choice with the better outcome suddenly becomes the worse choice, while the previously inferior choice becomes the better one. This type of task has been used to study, among other things, how perseverative tendencies change as a function of experimental condition or of manipulations of the underlying neural circuitry. Such tasks are rich targets for theoretical modeling of the neural representation, computation, and utilization of the different forms of uncertainty that arise during learning and decision-making (Yu and Dayan 2005; Nassar et al. 2010). For example, at least two kinds of uncertainty are thought to be differentiated: “expected uncertainty,” reflecting known variability in the environment, and “unexpected uncertainty,” arising from dramatic, unforeseen changes in the environment’s statistical contingencies. Experimentally, human subjects have been shown to be more ready to alter choice strategy when choice-outcome contingencies change frequently than when such changes are rare (Behrens et al. 2007); upregulation of the neuromodulator norepinephrine has been shown to increase the rate of learning about such changes (Devauges and Sara 1990); and pupil diameter in human subjects, under the control of noradrenergic and cholinergic neuromodulatory systems, has been shown to correspond to specific model-derived uncertainty signals (Nassar et al. 2012).
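The idea that surprising observations should transiently speed up learning can be sketched with a toy delta-rule learner whose learning rate grows with the size of the prediction error. This is only loosely inspired by the change-point models cited above; the specific functional form and parameter values are invented for illustration.

```python
import random

def adaptive_delta_rule(outcomes, base_lr=0.1, surprise_gain=0.2):
    """Delta-rule estimate of reward probability whose learning rate
    is transiently boosted when the prediction error is large, so that
    surprising outcomes ('unexpected uncertainty') drive faster
    relearning after a contingency change."""
    est, trace = 0.5, []
    for outcome in outcomes:
        err = outcome - est
        lr = base_lr + surprise_gain * abs(err)   # invented functional form
        est += lr * err
        trace.append(est)
    return trace

# Reversal learning: reward probability flips from 0.8 to 0.2 mid-session.
random.seed(0)
outcomes = [1.0 if random.random() < 0.8 else 0.0 for _ in range(100)] \
         + [1.0 if random.random() < 0.2 else 0.0 for _ in range(100)]
trace = adaptive_delta_rule(outcomes)
print(f"mean estimate before reversal: {sum(trace[80:100]) / 20:.2f}")
print(f"mean estimate after reversal:  {sum(trace[180:200]) / 20:.2f}")
```

After the reversal, the run of large prediction errors temporarily raises the learning rate, so the estimate re-converges faster than a fixed-rate delta rule would.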

Multidimensional Decision-Making

There are two main types of multidimensional decision-making: (1) target-distractor differentiation, in which subjects must exclude the interfering influence of a distractor stimulus or attribute, in order to focus on the target stimulus or attribute, and (2) multisensory integration, in which subjects must integrate multiple stimuli, often differing in sensory modalities, to reach a combined choice response.

Target-Distractor Differentiation

One classical task of this type is the Stroop task (Stroop 1935), in which subjects must ignore the meaning of a color word and report the physical color in which the text is written. Subjects consistently respond faster and more accurately when the meaning and physical attribute of the color word are congruent than when they are not. A related task is the Simon task, in which a stimulus present on one side (e.g., visual stimulus on the left side of the screen or auditory stimulus to the left ear) may instruct the subjects to respond with the other, less direct response (e.g., right button press), and again, performance is better when the two dimensions are congruent than when they are not (Simon 1967). Yet another similar task is the Eriksen task (Eriksen and Eriksen 1974), which requires the subject to identify a central stimulus (e.g., the letter “H” or “S”) when flanked on both sides by letters that are either congruent or incongruent with the central stimulus; again, subjects are significantly better in the congruent condition. These tasks have been used to discern how the brain identifies and filters out (or fails to do so) irrelevant distractors. Careful behavioral analysis of choice accuracy as a function of reaction time (Cho et al. 2002) has given clues as to the role of cognitive processes such as attentional control and stimulated theoretical modeling of the neural computations giving rise to specific features of the behavioral dynamics in such tasks (Yu et al. 2009).

A related but distinct class of tasks involves a prepotent “go” signal present on each trial and an occasional “stop” signal on a small fraction of trials, whereby the subject must execute the “go” response only on “go” trials and not on “stop” trials. In the stop-signal task (Logan and Cowan 1984), the “stop” signal appears, if at all, at an unpredictable time after the “go” stimulus, and the longer this onset asynchrony, the less likely the subject is to successfully withhold the “go” response. A related task is the “compelled-response task,” in which the subject is instructed to go before being shown the cue that indicates which of the two responses is correct (Salinas et al. 2010). These tasks have given rise to competing theoretical models that postulate either the existence (Logan and Cowan 1984) or absence (Shenoy and Yu 2011; Salinas and Stanford 2013) of a specific stopping process or pathway; in the latter class, the capacity to stop depends either on a normative, sequential consideration of the pros and cons of responding (Shenoy and Yu 2011) or on the efficiency of the perceptual process itself (Salinas and Stanford 2013). However, recent experimental results indicate systematic and rational changes in subjects’ stopping capacity as a function of the reward structure of the task (Leotti and Wager 2009), suggesting not only that the “go” response is context sensitive and strategically malleable but also that it takes into account perceptual uncertainty and decision-theoretic factors such as reward contingencies (Shenoy and Yu 2011).
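The independent race model of Logan and Cowan (1984) is simple to sketch: a go process and a stop process race, and the overt response escapes inhibition only if the go process finishes first. The Gaussian finishing-time distributions and all parameter values below are arbitrary illustrative choices.

```python
import random

def stop_trial(ssd, go_mu=0.45, go_sigma=0.10,
               ssrt_mu=0.20, ssrt_sigma=0.03, rng=random):
    """Independent race: the go process finishes at a random time,
    while the stop process, launched after the stop-signal delay
    (ssd), finishes at ssd + stop-signal reaction time.  The response
    escapes inhibition iff the go process wins the race."""
    go_finish = rng.gauss(go_mu, go_sigma)
    stop_finish = ssd + rng.gauss(ssrt_mu, ssrt_sigma)
    return go_finish < stop_finish   # True = subject failed to stop

# The later the stop signal, the harder it is to withhold the response.
random.seed(0)
for ssd in (0.05, 0.15, 0.25):
    p_respond = sum(stop_trial(ssd) for _ in range(5000)) / 5000
    print(f"SSD={ssd:.2f}s: P(respond | stop signal)={p_respond:.2f}")
```

This reproduces the signature inhibition function of the task: the probability of failing to stop rises monotonically with stop-signal delay.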

Multisensory Integration

Rather than asking how the brain selectively processes certain aspects of the environment while excluding others, another line of work uses simultaneously presented stimuli to examine how the brain combines multiple sources of sensory information, in particular across sensory modalities. For instance, several studies in recent years have shown that human subjects combine differentially reliable sensory inputs from different modalities in a computationally optimal (Bayesian) way, such that greater weight is assigned to sensory inputs with less noise (and greater reliability) in the combined percept (Jacobs 1999; Ernst and Banks 2002; Battaglia et al. 2003; Dayan et al. 2000; Shams et al. 2005). This explains, for instance, why vision typically dominates over audition in spatial localization, a phenomenon known as “visual capture” — this follows directly from the Bayesian formulation, since vision has greater spatial acuity and reliability than audition (Battaglia et al. 2003). Conversely, when the localization task is in the temporal rather than spatial domain, where audition has greater acuity and reliability, auditory stimuli can induce an illusory visual percept. For example, when a single visual flash is accompanied by several auditory beeps, the visual percept is that of several flashes (Shams et al. 2000). Again this phenomenon can be explained by a statistically optimal ideal observer model (Shams et al. 2005; Kording et al. 2007). Recent neurophysiological studies have also begun to elucidate the neural basis of multisensory integration [see, e.g., Driver and Noesselt (2008) for a review].
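The precision-weighted (Bayesian) cue-combination rule underlying these findings can be stated compactly: for two Gaussian cues, the fused estimate weights each cue by its inverse variance. A minimal sketch, with made-up cue positions and noise levels:

```python
def fuse(x_v, sigma_v, x_a, sigma_a):
    """Precision-weighted fusion of two Gaussian cues: each cue is
    weighted by its inverse variance (its reliability), and the fused
    variance is smaller than that of either cue alone."""
    w_v, w_a = 1.0 / sigma_v ** 2, 1.0 / sigma_a ** 2
    mean = (w_v * x_v + w_a * x_a) / (w_v + w_a)
    var = 1.0 / (w_v + w_a)
    return mean, var

# "Visual capture" in spatial localization: vision (small sigma) pulls
# the fused location estimate strongly toward the visual cue.
mean, var = fuse(x_v=0.0, sigma_v=1.0, x_a=10.0, sigma_a=4.0)
print(f"fused estimate: {mean:.2f}")  # prints: fused estimate: 0.59
```

Swapping the reliabilities (e.g., in a temporal judgment where audition is more precise) pulls the fused estimate toward the auditory cue instead, mirroring the sound-induced flash illusion discussed above.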

Preference-Based Decision-Making

One area of decision-making is devoted to the study of how humans make choices based on their own internal preferences, such as in consumer decision-making, rather than on sensory features or behavioral outcomes associated with the options. When choosing among options that differ along multiple attribute dimensions, humans consistently exhibit certain puzzling preference shifts, or even reversals, depending on the context. Three broad categories of contextual effects are studied in the psychology literature. In the attraction effect, given two similarly preferred options, A and B, the introduction of a third option Z that is similar to B, but also clearly less attractive than B, results in an increase in preference for B over A (Huber and Payne 1982; Heath and Chatterjee 1995). In the compromise effect, when B > A in one attribute and B < A in another, and Z has the same trade-off but is even more extreme than B, then B becomes the “compromise” option and is preferred relative to A (Simonson 1989). In the similarity effect, the introduction of a third option Z that is very similar to B in both attribute dimensions shifts the relative preference away from B to A (Tversky 1972).

Traditionally, there have been two lines of explanations for such effects, the first attributing them to biases or suboptimalities in human decision-making (Kahneman and Tversky 1979; Kahneman et al. 1982) and the other suggesting that they are by-products of specific architectural or dynamical constraints on neural processing (Busemeyer and Townsend 1993; Usher and McClelland 2004; Trueblood 2012). A more recent, normative account (Shenoy and Yu 2013) uses a Bayesian model to demonstrate that these contextual effects can arise as rational consequences of three basic assumptions: (1) humans make preferential choices based on relative values anchored with respect to “fair market value,” which is inferred from both prior experience and the current set of available options; (2) different attributes are imperfect substitutes for one another, so that one unit of a scarce attribute is more valuable than one unit of an abundant one; and (3) uncertainty in beliefs about “market conditions” induces stochasticity in relative preference on repeated encounters with the same set of options. This model not only provides a principled explanation for why specific types of contextual modulation of preference choice exist but also offers a means to infer individual and group preferences in novel contexts given observed choices.


References

  1. Battaglia PW, Jacobs RA, Aslin RN (2003) Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A Opt Image Sci Vis 20(7):1391–1397
  2. Behrens TEJ, Woolrich MW, Walton ME, Rushworth MFS (2007) Learning the value of information in an uncertain world. Nat Neurosci 10(9):1214–1221
  3. Bogacz R, Brown E, Moehlis J, Hu P, Holmes P, Cohen JD (2006) The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced choice tasks. Psychol Rev 113(4):700–765
  4. Busemeyer JR, Townsend JT (1993) Decision field theory. Psychol Rev 100:432–459
  5. Cho RY, Nystrom LE, Brown ET, Jones AD, Braver TS, Holmes PJ, Cohen JD (2002) Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced choice task. Cogn Affect Behav Neurosci 2:283–299
  6. Churchland AK, Kiani R, Shadlen MN (2008) Decision-making with multiple alternatives. Nat Neurosci 11(6):693–702
  7. Dayan P, Kakade S, Montague PR (2000) Learning and selective attention. Nat Neurosci 3:1218–1223
  8. Devauges V, Sara SJ (1990) Activation of the noradrenergic system facilitates an attentional shift in the rat. Behav Brain Res 39(1):19–28
  9. Donders FC (1969) On the speed of mental processes. Acta Psychol (Amst) 30:412–431
  10. Driver J, Noesselt T (2008) Multisensory interplay reveals crossmodal influences on “sensory-specific” brain regions, neural responses, and judgments. Neuron 57:11–23
  11. Eriksen BA, Eriksen CW (1974) Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys 16:143–149
  12. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870):429–433
  13. Gold JI, Shadlen MN (2002) Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron 36:299–308
  14. Gomez P, Ratcliff R, Perea M (2007) A model of the go/no-go task. J Exp Psychol Gen 136(3):389–413. doi:10.1037/0096-3445.136.3.389
  15. Heath T, Chatterjee S (1995) Asymmetric decoy effects on lower-quality versus higher-quality brands: meta-analytic and experimental evidence. J Consum Res 22:268–284
  16. Herrnstein RJ (1961) Relative and absolute strength of responses as a function of frequency of reinforcement. J Exp Anal Behav 4:267–272
  17. Herrnstein RJ (1970) On the law of effect. J Exp Anal Behav 13:243–266
  18. Huber J, Payne J (1982) Adding asymmetrically dominated alternatives: violations of regularity and the similarity hypothesis. J Consum Res 9(1):90–98
  19. Jacobs RA (1999) Optimal integration of texture and motion cues in depth. Vis Res 39:3621–3629
  20. Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econometrica 47:263–291
  21. Kahneman D, Slovic P, Tversky A (eds) (1982) Judgment under uncertainty: heuristics and biases. Cambridge University Press, Cambridge, UK
  22. Kording KP, Beierholm U, Ma W, Quartz S, Tenenbaum J, Shams L (2007) Causal inference in cue combination. PLoS One 2(9):e943
  23. Laming DRJ (1968) Information theory of choice-reaction times. Academic, London
  24. Leotti LA, Wager TD (2009) Motivational influences on response inhibition measures. J Exp Psychol Hum Percept Perform 36(2):430–447
  25. Logan G, Cowan W (1984) On the ability to inhibit thought and action: a theory of an act of control. Psychol Rev 91:295–327
  26. Nassar MR, Wilson RC, Heasly B, Gold JI (2010) An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment. J Neurosci 30(37):12366–12378
  27. Nassar MR, Rumsey KM, Wilson RC, Parikh K, Heasly B, Gold JI (2012) Rational regulation of learning dynamics by pupil-linked arousal systems. Nat Neurosci 15(7):1040–1046
  28. Newsome WT, Paré EB (1988) A selective impairment of motion perception following lesions of the middle temporal visual area (MT). J Neurosci 8(6):2201–2211
  29. Ratcliff R, Rouder JN (1998) Modeling response times for two-choice decisions. Psychol Sci 9:347–356
  30. Ratcliff R, Smith PL (2004) A comparison of sequential sampling models for two-choice reaction time. Psychol Rev 111:333–367
  31. Roitman JD, Shadlen MN (2002) Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci 22(21):9475–9489
  32. Salinas E, Stanford TR (2013) The countermanding task revisited: fast stimulus detection is a key determinant of psychophysical performance. J Neurosci 33(13):5668–5685
  33. Salinas E, Shankar S, Gabriela Costello M, Zhu D, Stanford TR (2010) Waiting is the hardest part: comparison of two computational strategies for performing a compelled-response task. Front Comput Neurosci. doi:10.3389/fncom.2010.00153
  34. Shadlen MN, Newsome WT (2001) Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol 86:1916–1935
  35. Shams L, Kamitani Y, Shimojo S (2000) What you see is what you hear. Nature 408:788
  36. Shams L, Ma WJ, Beierholm U (2005) Sound-induced flash illusion as an optimal percept. Neuroreport 16(17):1923–1927
  37. Shenoy P, Yu AJ (2011) Rational decision-making in inhibitory control. Front Hum Neurosci. doi:10.3389/fnhum.2011.00048
  38. Shenoy P, Yu AJ (2012) Strategic impatience in Go/NoGo versus forced-choice decision-making. Adv Neural Inf Process Syst 25
  39. Shenoy P, Yu AJ (2013) A rational account of contextual effects in preference choice: what makes for a bargain? In: Proceedings of the thirty-fifth annual conference of the cognitive science society, Berlin, Germany
  40. Simon JR (1967) Ear preference in a simple reaction-time task. J Exp Psychol 75(1):49–55
  41. Simonson I (1989) Choice based on reasons: the case of attraction and compromise effects. J Consum Res 16:158–174
  42. Stroop JR (1935) Studies of interference in serial verbal reactions. J Exp Psychol 18:643–662
  43. Trueblood JS (2012) Multialternative context effects obtained using an inference task. Psychon Bull Rev 19(5):962–968. doi:10.3758/s13423-012-0288-9
  44. Tversky A (1972) Elimination by aspects: a theory of choice. Psychol Rev 79:281–299
  45. Usher M, McClelland JL (2001) The time course of perceptual choice: the leaky, competing accumulator model. Psychol Rev 108(3):550–592
  46. Usher M, McClelland JL (2004) Loss aversion and inhibition in dynamical models of multialternative choice. Psychol Rev 111(3):757–769
  47. Wald A (1947) Sequential analysis. Wiley, New York
  48. Wald A, Wolfowitz J (1948) Optimal character of the sequential probability ratio test. Ann Math Stat 19:326–339
  49. Yu AJ, Cohen JD (2009) Sequential effects: superstition or rational behavior? Adv Neural Inf Process Syst 21:1873–1880
  50. Yu AJ, Dayan P (2005) Uncertainty, neuromodulation, and attention. Neuron 46:681–692
  51. Yu AJ, Huang H (2014) Maximizing masquerading as matching in human visual search choice behavior. Decision (to appear)
  52. Yu AJ, Dayan P, Cohen JD (2009) Dynamics of attentional selection under conflict: toward a rational Bayesian account. J Exp Psychol Hum Percept Perform 35:700–717

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Cognitive Science, University of California, San Diego, La Jolla, USA