Research Article

# A Range-Normalization Model of Context-Dependent Choice: A New Model and Evidence

• Contributed equally to this work: Alireza Soltani, Benedetto De Martino
• Corresponding author: asoltani@stanford.edu
• Affiliations: Howard Hughes Medical Institute and Department of Neurobiology, Stanford University School of Medicine, Stanford, California, United States of America; Division of Biology and Computation and Neural Systems, California Institute of Technology, Pasadena, California, United States of America; Division of Psychology and Language Sciences, University College London, London, United Kingdom; Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, California, United States of America
• Published: July 19, 2012
• DOI: 10.1371/journal.pcbi.1002607

## Abstract

Most utility theories of choice assume that the introduction of an irrelevant option (called the decoy) to a choice set does not change the preference between existing options. On the contrary, a wealth of behavioral data demonstrates the dependence of preference on the decoy and on the context in which the options are presented. Nevertheless, neural mechanisms underlying context-dependent preference are poorly understood. In order to shed light on these mechanisms, we design and perform a novel experiment to measure within-subject decoy effects. We find within-subject decoy effects similar to those shown previously with between-subject designs. More importantly, we find that not only are the decoy effects correlated, pointing to similar underlying mechanisms, but also these effects increase with the distance of the decoy from the original options. To explain these observations, we construct a plausible neuronal model that can account for decoy effects based on the trial-by-trial adjustment of neural representations to the set of available options. This adjustment mechanism, which we call range normalization, occurs when the nervous system is required to represent different stimuli distinguishably, while being limited to using bounded neural activity. The proposed model captures our experimental observations and makes new predictions about the influence of the choice set size on the decoy effects, predictions that contrast with those of previous models of context-dependent choice preference. Critically, unlike previous psychological models, the computational resource required by our range-normalization model does not increase exponentially as the set size increases. Our results show that context-dependent choice behavior, which is commonly perceived as an irrational response to the presence of irrelevant options, could be a natural consequence of the biophysical limits of neural representation in the brain.

## Author Summary

While faced with a decision between two options for which you have no clear preference (say, a small cheap TV and a large expensive TV), you are presented with a new but inferior option (say, a medium expensive TV). The mere presence of the new option, which you would not select anyway, shifts your preference toward the expensive large TV. This simple example shows how the introduction of an irrelevant option, called the “decoy,” to the choice set can change preference between existing options, a phenomenon often called the context-dependent preference reversal. A number of models have been proposed to explain context effects. Despite their success, they are either uninformative about the underlying neural mechanisms or they require comparison of every possible pair of option attributes, a computation that is unlikely to be implemented by the nervous system due to its high computational demand and undesirable outcomes when the choice set size increases. Here we present a novel account of the context-dependent preference based on the adjustment of neural response to the set of available options. Moreover, we show results from a novel behavioral task designed to test contrasting predictions of our model and a classic model of context effects.

## Introduction

At the core of many utility theories used in social and biological sciences lies a central axiom, called independence from irrelevant alternatives (IIA). The IIA axiom states that the relative preference between any pair of options does not depend on what other options might be present [1]–[3]. In decision neuroscience, IIA holds in the appealing model in which separate values are computed for each different option, and values are then compared to make a choice [4], [5]. Nevertheless, a wealth of data has clearly shown that the IIA axiom is often violated behaviorally [6], [7]. For example, it has been shown that adding a third “decoy” option into a choice set often results in a predictable shift in the relative preference between the other two options of an initial pair. A striking example is when the decoy option is dominated by one initial option – i.e., all of the new option's attributes are worse than the existing option attributes – but is not dominated by the other initial option. The decoy is an “irrelevant alternative” because it would never be chosen if it is dominated by another option. Introducing such a decoy results in an increased preference for the initial option that dominates the decoy [6], [8]–[10], a phenomenon called the attraction effect or the asymmetric dominance effect.

Decoy effects can be considered an error in logical reasoning and there is some evidence that they can be exploited by consumer marketing and political strategies [11]–[13]. Interestingly, these effects are not limited to humans [14]–[16], they increase after lesion of the medial orbitofrontal cortex in macaques [17], and they can be mitigated by improving self-control or increasing blood glucose [18]. Considering that under realistic scenarios, choices are usually made in particular contexts [19], exploring the neural mechanisms underlying context-dependent preference is crucial for better understanding of choice behavior in general [20].

Several explanations have been proposed to account for the preference reversal induced by the type of decoy in a choice set. Most of these models are based on verbally-described heuristics and are not mathematically formalized, which makes them difficult to test or generalize to new experimental paradigms [21], [22]. An exception is the context-dependent “advantage” (CDA) model of Tversky and Simonson that coherently accounts for attraction and other context effects [7]. The CDA relies on the comparison between different attributes of the available options to account for context effects [23]. The CDA model is the precursor of more elaborate connectionist models such as the leaky competing accumulator (LCA) model [24], [25] or the decision field theory (DFT) [26], [27]. All these models aim to account for many types of context effects such as attraction, similarity, and compromise effects within a single framework [28]. The two popular connectionist models, the LCA and DFT, differ in a number of key features, such as the requirement of loss aversion, but like the CDA model, their core mechanism is comparison between each pair of option attributes. In most cases, psychological models such as CDA, LCA, and DFT successfully reproduce the behavioral observations that they aim to explain. However, comparing all attributes between all pairs of options in the choice set is computationally demanding, especially as the number of options and attributes grows. Other models of choice avoid these demands by assuming limited sequential attribute comparison (e.g., elimination-by-aspects [29], for which there is evidence [30]), but those models cannot explain the attraction effect.

We propose a new model to explain context effects, based on known biophysical limits of neural representation. The guiding presumption in our range-normalization (RN) model is that subjective values of option attributes are encoded in the firing rate of neural populations, rather than other aspects of neural firing [31]. If so, mental representations of subjective values will be bound by the same biophysical limits that govern neural representations. Namely, neural responses are bound from below by zero and from above by a few hundred spikes per second and, therefore, neurons can only represent a set of stimuli using a limited range of firing rates. Faced with a new set of stimuli to encode, however, neurons can adjust their dynamic range (i.e. interval between threshold and saturation points) to represent these stimuli distinguishably. We propose that this adjustment mechanism, which we call range normalization, is the principal neural mechanism underlying context-dependent effects.

Normalization of the neural response is common in vision and other sensory modalities, and could be a more widespread property of neural representations [32]. To account for context effects, the range-normalization mechanism we propose here is computationally easier than comparison of all pairs of option attributes, since only the two most extreme attribute values are needed to compute the range. We implement a specific functional form of range normalization and test predictions of the outcome model using a novel within-subject design.

We first describe experimental results that demonstrate within-subject decoy effects and reveal some new properties of these effects (correlation between effects across types of decoys and decoy distance). Second, we describe the CDA model, how attribute comparison gives rise to context effects in this model, and its predictions in our experimental paradigm. Third, we present our RN model and its predictions for context effects. Finally, we describe new, contrasting predictions of the CDA and RN models about the influence of choice set size on context effects and the neural plausibility of these models.

## Results

#### The experimental paradigm to test within-subject decoy effects

Our experimental paradigm consisted of two tasks: an initial estimation task and the decoy task. We used the subject's choice from the estimation task to calculate the subject's attitude toward risk in order to tailor subject-specific target (T) and competitor (C) gambles that are equally preferred (see below). This step is necessary because context effects are most strongly demonstrated when T and C are equally valuable. In the second part of the experiment (decoy task), we assessed the preference between jittered versions of the T and C gambles in the presence of a third decoy gamble (see Methods for more details).

#### Behavioral results from the estimation task

During the estimation task, the subject was presented with two options. These options were risky monetary gambles, described by a probability p of winning a monetary reward of magnitude M, denoted (p, $M). On each trial, the subject chose between a pair of gambles, always consisting of one fixed low-risk gamble, (0.7, $20), and one high-risk gamble, (0.3, $M), for many different values of M (see Methods for more details).

The data analysis of the estimation task confirmed that all subjects appeared to understand the task and respond to changes in magnitude, preferring the high-risk gamble when its reward magnitude was large, but not when its reward magnitude was small (Figure S1 in Text S1). Logistic fitting of these choices yielded a subject-specific value of the high-risk gamble magnitude M for which the low- and high-risk gambles are equally subjectively valuable (Figures S2A and S2B in Text S1). Across subjects, we found a wide range of values for the indifference high-risk magnitude and for the sensitivity to reward magnitude, but these two quantities were not significantly correlated (p = 0.33) (Figure S2C in Text S1).
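As an illustration of this fitting step, the sketch below recovers an indifference magnitude from a logistic choice curve. The choice data are hypothetical, and a simple grid search stands in for whatever fitting procedure the Methods actually specify:

```python
import numpy as np

def p_choose_high_risk(M, M_indiff, sensitivity):
    """Logistic probability of choosing the high-risk gamble (0.3, $M)
    over the fixed low-risk gamble (0.7, $20)."""
    return 1.0 / (1.0 + np.exp(-sensitivity * (M - M_indiff)))

# Hypothetical data: tested magnitudes and fraction of high-risk choices
M_vals = np.array([30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
choice_freq = np.array([0.05, 0.2, 0.45, 0.6, 0.8, 0.95])

# Grid search for the logistic parameters minimizing squared error
M_indiff, sens = min(
    ((mi, s) for mi in np.arange(40, 70, 0.5) for s in np.arange(0.02, 0.3, 0.01)),
    key=lambda ps: np.sum((p_choose_high_risk(M_vals, *ps) - choice_freq) ** 2),
)
# M_indiff: subject-specific magnitude at which the two gambles are equally
# preferred; sens: the subject's sensitivity to reward magnitude
```

The fitted M_indiff is then used to construct the subject-tailored competitor gamble for the decoy task.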

As a validity check, we computed the relative expected utility of each pair of gambles and divided the pairs into sets in which this quantity was above a threshold (easy choice pairs) or below it (hard choice pairs). If value is being inferred accurately, response times (RTs) should be slower for hard choice pairs, which are close in subjective value. As predicted, the average RT was about 110 msec longer on trials with hard choice pairs, and this relation also held for all but one subject (Figure S3 in Text S1).

#### Modulation of preference by the decoy

On each trial of the decoy task, three monetary gambles were displayed on the screen for an 8 sec evaluation period. At the end of this period, one of the three gambles was removed from the screen and subjects had only 2 sec to choose one of the two remaining gambles in a selection period (Figure 1A). Two of the three initial gambles were the low-risk gamble (target T) and the subject-tailored high-risk gamble (competitor C). The third gamble was the decoy gamble (D), which was randomly chosen from a set of gambles with a wide range of attribute values (see Figure 1B and Methods for more details).

On two thirds of the trials (regular trials), the decoy gamble was removed after the evaluation period and the subject had to choose between T and C gambles. On the remaining one third of the trials (catch trials), either the T or C gamble disappeared. The catch trials were included to conceal the underlying structure of the task and were subsequently discarded from the analysis (since they do not provide choices between T and C). Therefore, we only analyze the regular trials to investigate how the preference between T and C gambles changed as a function of a decoy that was present at the evaluation period, but not available in the selection period.

Having a long evaluation period (8 sec) and a short selection period (2 sec) forces subjects to evaluate and “pre-choose” options by ranking them during the evaluation period; therefore, they would be prepared to make a rapid choice in the 2-sec selection period. This ensures that presentation of the decoy during the evaluation period can influence context-dependent processes of assigning values enough to have a behavioral impact during rapid selection. This “phantom decoy” design allowed us to study the effect of dominant decoys (decoys that are better than either T or C gambles) as well as dominated decoys (see below).

We found that subjects' preference between T and C was systematically influenced by the attributes of the decoys. The first indication of the decoy influence on the subsequent choice was that the majority of our subjects did not select T and C gambles equally (Figure S4 in Text S1), though they were constructed (from the estimation task data) to be equally preferable.

As in previous studies, we divided trials into 6 groups (D1 to D6) based on the position of the decoy (Figure 1B). Decoys in positions D1 and D4 are called the asymmetrically dominant decoys because they dominate either T or C (they are less risky and also have larger reward magnitudes), but do not dominate both. Decoys in positions D3 and D6 are asymmetrically dominated decoys since they are either worse than the target (D6) or the competitor (D3) on both dimensions (i.e. they are more risky and also have smaller reward magnitudes), but are only dominated by one of T and C [6], [10]. Finally, decoys in positions D2 and D5 are similar to the target and the competitor and are better on one dimension but worse on another. They are called similar decoys [28], [33].

We quantified decoy effects by computing the difference between the probability of selecting the target for a given decoy location and the overall probability of choosing the target across all trials (Figure 1C). We found that the decoys influenced subjects' preference between T and C gambles (one-way ANOVA, p<0.0001), and the average values of this difference over all subjects were significantly different from zero (Wilcoxon signed rank test, p<0.05), except for decoys in position D2.

For statistical purposes, it was useful to scale decoy effects to account for the fact that some subjects had an overall target choice frequency that was very different from 0.5 (despite the attempt to control this frequency using the estimation task). A scaled measure of decoy efficacy (see Methods) that adjusts for the target choice frequency still showed strong within-subject decoy effects (one-way ANOVA, p<0.0005), similar to the changes in preference presented earlier (Figure 1D).
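The unscaled measure can be sketched as follows. The trial data below are hypothetical, and the exact scaling that adjusts for target choice frequency is left out (it is defined in the Methods):

```python
import numpy as np

# Hypothetical regular-trial data: decoy position (1..6) on each trial,
# and whether the target T (1) or the competitor C (0) was chosen
decoy_pos = np.array([1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 2])
chose_T   = np.array([0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1])

p_overall = chose_T.mean()  # overall probability of choosing T
# Decoy effect at each location: P(choose T | decoy at d) - P(choose T)
delta_P = {d: chose_T[decoy_pos == d].mean() - p_overall
           for d in range(1, 7)}
# delta_P[d] > 0 means the decoy at position d shifted preference toward T
```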

In addition, we replicated three main findings regarding decoy effects. Firstly, we observed a robust attraction effect similar to what has been shown in previous between-subject studies [6], [10]. That is, the asymmetrically dominated decoys D3 and D6 increased the selection of the option that dominated them: competitor C and target T, respectively (Wilcoxon signed rank test, p<0.05). Secondly, the asymmetrically dominant decoys D1 and D4 decreased the selection of the option that was dominated by them: competitor C and target T, respectively (Wilcoxon signed rank test, p<0.05). We were able to study this effect thanks to our task design, in which the dominant decoy disappeared during the selection period. Thirdly, decoys in positions D2 and D5 decreased the selection of the option close to them (C and T, respectively); however, only the effect of decoys in position D5 was statistically significant (Wilcoxon signed rank test, p<0.05). These effects have been previously described as similarity effects [29]: decoys take more share from the option in the choice set with which they are most similar, thereby decreasing the preference for that option.

Thus, our results confirm previous between-subject findings and extend them to a within-subject design. Most preference reversals due to differences in descriptions, procedures or context are established by between-subject designs. Preference for between-subjects designs is guided by the intuition that two conditions that change a normatively irrelevant detail will be transparently equivalent if both conditions are presented in a within-subjects design; however, the normative irrelevance is cognitively inaccessible if only one condition is presented, in a between-subjects design. Establishing context-dependence in a within-subject design therefore shows its robustness. The within-subject design also adds substantial statistical power, and allows us to compute the within-subject correlation between effects for different decoys (which a between-subject design cannot do).

We also examined relationships between the overall decoy effects, as shown by a given subject and his/her risk aversion parameters from the estimation task. We found no relationship between the overall susceptibility of individual subjects to decoys (defined as the average of absolute values of decoy efficacies for each subject) and their indifference values (r = −0.2, p = 0.38), or between the overall susceptibility and the sensitivity to the reward magnitude (r = −0.21, p = 0.37).

#### Dependence of decoy effects on distance and correlation between decoy effects

Next, we divided all regular trials into close and far trials, depending on the distance between the decoy and the gamble closest to it. Then we computed the decoy efficacy for each decoy location (Figure S5 in Text S1). For this analysis, decoy efficacies for close and far decoys were defined relative to the overall probability of selecting T only for the corresponding set of close or far decoys; therefore, this definition controlled for possible differences between the close and far sets of gambles. Close decoys had no significant effect (one-way ANOVA, p = 0.69), while far decoys had a very strong effect (one-way ANOVA, p<10⁻¹¹) (Figure 2A). Moreover, for all decoys with significant effects over all trials (except D4), the far decoy effect was larger than the close decoy effect (two-sample t-test, p<0.01).

We then examined the correlation between different decoy effects within-subjects. This correlation analysis provided a tool for testing whether different types of decoy effects were generated by the same mechanisms or not. We grouped decoys at different locations into three decoy types—asymmetrically dominant decoys (D1 and D4), similar decoys (D2 and D5), and asymmetrically dominated decoys (D3 and D6). We then computed the average decoy efficacy for each of these three decoy types in terms of their effects on the preference for the gamble close to or far from them. A positive (or negative) decoy efficacy means an increase (or decrease, respectively) in the preference for the gamble close to the decoy with respect to the gamble far from it.

The different decoy types did influence the choice preference differently (one-way ANOVA, p<0.0001). Specifically, asymmetrically dominant decoys decreased preference for the gamble close to them (Wilcoxon signed rank test, p<0.05), while asymmetrically dominated decoys increased preference for the gamble close to them (Wilcoxon signed rank test, p<0.05) (Figure 2B). There were no significant effects for similar decoys (Wilcoxon signed rank test, p = 0.07). Interestingly, we found a significant negative correlation between asymmetrically dominant and asymmetrically dominated decoy efficacies (r = −0.57, p = 0.008) (Figure 2C).

#### Behavior and predictions of the CDA model

Next we tested whether the CDA model could reproduce the decoy effects observed in our experiment. First, we briefly describe the CDA model of Tversky and Simonson [7] and we present some results and predictions of this model that are relevant to our experimental paradigm. For simplicity, we assumed options have only two attributes and that the overall subjective value of an option is a weighted sum of its values on these attributes. The latter was assumed to avoid altering the original CDA model for the case where the overall value of an option is the product of its attribute values (as for risky gambles).

In the CDA model, the context effects arise from pairwise comparison of all options in the choice set. This pairwise comparison is performed through computing quantities termed the advantage and the disadvantage. More specifically, the advantage of option T with respect to option C, $A(T,C)$, is defined as

$$A(T,C) = \sum_i \max(T_i - C_i, 0)$$

where $T_i$ and $C_i$ are the values of options T and C on attribute $i$. Similarly, the disadvantage of option T with respect to option C, $D(T,C)$, is defined as

$$D(T,C) = f(A(C,T))$$

where $f$ is an increasing monotonic function of $A(C,T)$ (note the change in the order of T and C in the argument of the advantage function). Tversky and Simonson included loss aversion in their model by assuming that the disadvantage looms larger than the advantage, that is, $f(x) \ge x$ [7]. For simplicity, we assume a linear relationship, $f(x) = \lambda x$, where $\lambda \ge 1$.

The advantage and disadvantage are used to define the relative advantage of option T with respect to option C,

$$R(T,C) = \frac{A(T,C)}{A(T,C) + D(T,C)} \quad (1)$$

Finally, the value of an option in the choice set increases proportionally to the sum of the relative advantages between that option and each other option in the choice set. With three options T, C, and D, the overall values of options including context effects are

$$V'(X) = V(X) + \theta \sum_{Y \neq X} R(X,Y), \qquad X, Y \in \{T, C, D\} \quad (2)$$

where $\theta$ determines the strength of the context effects, and $V(X)$ and $V'(X)$ are the subjective values of option X before and after including the context effects. We can apply a sigmoid function to the difference in option values of T and C to obtain the choice preference between these options, before and after the decoy introduction.
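A minimal sketch of these computations is given below, assuming equal attribute weights for the base values, the linear loss-aversion function f(x) = λx, and hypothetical option attributes (the parameter values θ and λ are illustrative, not fitted):

```python
import numpy as np

def advantage(x, y):
    """Sum of the attribute-wise advantages of option x over option y."""
    return np.sum(np.maximum(x - y, 0.0))

def relative_advantage(x, y, lam):
    """Relative advantage R(x, y) with loss-aversion coefficient lam >= 1."""
    a = advantage(x, y)
    d = lam * advantage(y, x)  # disadvantage of x is scaled advantage of y
    return a / (a + d) if a + d > 0 else 0.0

def cda_values(options, theta, lam):
    """Context-dependent value of each option: its base value (here an
    equal-weight sum of attributes) plus theta times the sum of its
    relative advantages over every other option in the set."""
    return {k: v.sum() + theta * sum(relative_advantage(v, options[j], lam)
                                     for j in options if j != k)
            for k, v in options.items()}

# T and C trade off the two attributes; D is dominated by T (attraction decoy)
opts = {"T": np.array([0.7, 0.3]),
        "C": np.array([0.3, 0.7]),
        "D": np.array([0.6, 0.2])}
vals = cda_values(opts, theta=0.5, lam=2.0)
# Sigmoid choice rule on the value difference between T and C
p_T = 1.0 / (1.0 + np.exp(-(vals["T"] - vals["C"])))
```

With a decoy dominated by T, the model raises the value of T relative to C, reproducing the attraction effect.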

In order to illustrate the behavior of the CDA model over a wide range of decoy attributes, we calculated the change in the value of the original options (i.e., the options of the choice set before the decoy was introduced) as a function of the decoy's attributes (Figure 3A). This analysis showed that the maximal change in the value of a given option occurs when the decoy is dominated by it (both decoy attributes are smaller than the attributes of that option). Likewise, when the decoy is dominant (both decoy attributes are larger than the attributes of a given option), the change in that option's value is zero, independent of the exact location of the decoy. These option value changes happen because the relative advantage is one for dominated decoys and zero for dominant decoys. Overall, decoy introduction can only add a non-negative amount to the value of the original options in the choice set. This property has undesirable consequences, which we discuss later.

Next, we computed the change in the difference between the values of the original options (and the resulting change in preference between them) as a function of the decoy attributes (Figure 3B). This analysis revealed some important aspects of the CDA model. Firstly, no change in preference occurs when both decoy attributes are smaller or larger than the attributes of both of the original options. This means that in the CDA model, such decoys are irrelevant for the choice preference. Secondly, the change in preference is larger when the decoy is dominated by the close option rather than when the decoy is dominant (Figure 3B), because of loss aversion (). Finally, preference reversal is stronger for decoys close to the original options than for far decoys (Figure 3B).

For better comparison of the results of the CDA model with our experimental data, we calculated the model's average choice behavior for decoys at locations in the attribute space that qualitatively match our experimental design (see Methods for more details). The CDA model exhibits attraction and asymmetrically dominant decoy effects, but not similarity effects (as has been previously pointed out [26]; Figure 3C). However, because both attraction and asymmetrically dominant decoy effects are driven by the same mechanism (but in opposite directions), the decoy efficacies for these decoys are anti-correlated (data not shown). Moreover, as mentioned above, the decoy effects are stronger for attraction than for asymmetrically dominant decoys due to the inclusion of loss aversion in the CDA model (Figure 3C). There is some evidence for this prediction when we group the experimental data based on the decoy type (Figure 2B). However, fitting of our data using the CDA model yielded a loss-aversion parameter close to loss-neutrality (Figure S6 in Text S1). Finally, the CDA model predicts that close decoys have stronger effects than far decoys (Figure 3D). This prediction is not supported by our experimental data (Figure 2A).

#### The Range-Normalization (RN) model

Here we propose a model for context effects that can account for our experimental observations and is based on plausible limits of neuronal elements in representing sensory and cognitive stimuli. Specifically, for a neural representation to be useful, it should be able to distinguish between any two unequal stimuli in the set of represented stimuli. However, neural firing rates are bounded between zero and a few hundred spikes per second. That is, the neural response can vary only in the interval between a threshold and a saturation point (the dynamic range); outside this interval, stimuli are represented with the same response. Nevertheless, the response of a neuron (or a population of neurons) to a set of stimuli can still vary, depending on the relationship between the locations of the threshold and saturation points and the values of all stimuli that have to be represented in the firing activity. Considering these constraints, it is plausible that the response of a neuron or a population of neurons is adjusted to each new set of stimuli that it needs to represent (widespread evidence of neural adaptation is reviewed in the Discussion). We show that this neural adjustment could explain the context-dependent preference reversal.

In this model we assumed that the overall value of a given option is represented by a neural population that receives inputs from different neural populations, each selective to an individual option attribute (see Methods for more details). Assuming a linear response function, the overall value of an option, which is reflected in the firing activity of an option-selective population, is equal to a weighted sum of the neural responses to its attribute values

$$R_A = \sum_i w_{A,i} \, r_i(A_i) \quad (3)$$

where $R_A$ is the response of the population selective to option A, $r_i(A_i)$ is the neural response of attribute-selective population $i$ to the value of option A on attribute $i$, and $w_{A,i}$ is the weight of the connections from attribute-selective population $i$ to option-selective population A.

For simplicity, we considered the case in which the neural response of attribute-selective populations is a linear function of the stimulus value, $s$, when $s$ is above a threshold $c_{t,i}$ and below a saturation point $c_{s,i}$. In addition, we normalized the response to the maximum response level so that the maximum response is equal to 1. Note that any difference in the maximum response of neurons encoding different attributes can be absorbed into the connection weights $w_i$. Therefore, the neural representation of attribute $i$ can be written as

$$r_i(s) = \begin{cases} 0 & s \le c_{t,i} \\ \dfrac{s - c_{t,i}}{c_{s,i} - c_{t,i}} & c_{t,i} < s < c_{s,i} \\ 1 & s \ge c_{s,i} \end{cases} \quad (4)$$

and so is determined by the two parameters $c_{t,i}$ and $c_{s,i}$. In order to simplify the notation, we drop the subscript $i$ in the rest of the manuscript, but it should be understood that the neural representation could be different for each attribute.
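The thresholded-linear response described above (zero below threshold, one above saturation, linear in between) can be sketched directly, dropping the attribute subscript:

```python
def attribute_response(s, c_t, c_s):
    """Normalized piecewise-linear neural response to stimulus value s:
    0 below the threshold c_t, 1 above the saturation point c_s,
    and linear in the dynamic range between them."""
    if s <= c_t:
        return 0.0
    if s >= c_s:
        return 1.0
    return (s - c_t) / (c_s - c_t)
```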

In order to express the neural response in terms of the range and configuration of represented stimuli, we define two new parameters, $f_t$ and $f_s$, which we call the representation factors

$$f_t = \frac{s_{min} - c_t}{s_{nmin} - s_{min}}, \qquad f_s = \frac{c_s - s_{max}}{s_{max} - s_{nmax}} \quad (5)$$

where $s_{min}$ and $s_{nmin}$ are the minimum and next-to-minimum values of $s$, and $s_{max}$ and $s_{nmax}$ are the maximum and next-to-maximum values of $s$, respectively. The representation factors, $f_t$ and $f_s$, determine the fraction of the value space around the minimum and maximum stimuli that lies below the threshold or above the saturation point, respectively. This can be seen more clearly by expressing the threshold and saturation points, $c_t$ and $c_s$, in terms of the representation factors

$$c_t = s_{min} - f_t \,(s_{nmin} - s_{min}), \qquad c_s = s_{max} + f_s \,(s_{max} - s_{nmax}) \quad (6)$$

Note that a positive $f_s$ implies that the neuron never reaches its maximum possible firing rate. Therefore, the representation factors determine the efficiency of a neuron (or a neural population) in representing a set of stimuli in its firing activity (see below), and so they are inherent properties of the neuron. By imposing $f_t > -1$ and $f_s > -1$, it is guaranteed that neural responses to different stimuli are distinct (except when there are only two presented stimuli, for which the additional constraint $f_t + f_s > -1$ needs to be imposed).
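The mapping from representation factors to the dynamic range can be sketched as follows, under our reading of Eq. 6: the threshold sits f_t fractions of the gap to the next-to-minimum below the minimum stimulus, and the saturation point sits f_s fractions of the gap to the next-to-maximum above the maximum stimulus:

```python
def dynamic_range(stimuli, f_t, f_s):
    """Threshold and saturation points (Eq. 6) for a stimulus set,
    given the representation factors f_t and f_s."""
    s = sorted(stimuli)
    s_min, s_nmin = s[0], s[1]    # minimum and next-to-minimum
    s_max, s_nmax = s[-1], s[-2]  # maximum and next-to-maximum
    c_t = s_min - f_t * (s_nmin - s_min)
    c_s = s_max + f_s * (s_max - s_nmax)
    return c_t, c_s
```

With only two stimuli the next-to-minimum coincides with the maximum (and vice versa), which is why that case needs the extra constraint on f_t + f_s noted above.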

In order to show how the neural representation depends on the representation factors defined above, we plotted the neural responses for different values of the representation factors in the case in which there are only two options (C and T) in the stimulus set (Figure 4A). For positive values of the representation factors, the threshold and saturation points are below the minimum and above the maximum stimulus, respectively. On the other hand, for negative values of the representation factors, the threshold and saturation points are above the minimum and below the maximum stimulus, respectively (which means extreme stimuli can be represented with the same response because they lie outside the dynamic range).

Therefore, the representation factors determine the relative position of the dynamic range of the neural response with respect to a set of represented stimuli. However, the above equations show that when a new stimulus is introduced to the stimulus set, the threshold and saturation points need to be adjusted in order for the representation factors to stay the same or adapt to the new set.

Using Eq.6 and assuming that the representation factors stay the same before and after decoy introduction (a condition which can be relaxed as shown below), we computed the adjustment of neural response and changes in the response to the original options due to decoy introduction (Figure 4B). The decoy may introduce a new minimum or maximum (or a next-to-minimum or next-to-maximum) to the stimulus set, and in all of these cases it changes the configuration of stimuli.

If there are originally two options in the set, introducing the decoy always changes the neural representation and therefore the values of the original options. More interestingly, the change in the values of the original stimuli due to decoy introduction depends on the relative decoy value (Figure 4B, rightmost panel): this change is positive if the decoy lies between the two original options or close to them, and negative if the decoy introduces a new minimum or maximum. Overall, the change in the differential response depends on the representation factors and decreases as the decoy moves farther from the original options. Interestingly, we found that the ratio of the differential response after decoy introduction to that before is inversely proportional to the corresponding ratio of the range of stimulus values (see Text S1). For this reason, we call our proposed mechanism of neural adjustment range normalization.
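The inverse-range property described above can be illustrated with a small numerical sketch. This is not the model code: it assumes a piecewise-linear response whose threshold and saturation points are placed below and above the stimulus range by the representation factors (here named `ft` and `fs`), which is one simple reading of Eqs. 4–6.

```python
import numpy as np

def response(x, stimuli, ft=0.1, fs=0.1):
    """Illustrative piecewise-linear response: 0 at the threshold,
    1 at the saturation point. The representation factors ft and fs
    place the threshold below and the saturation above the stimulus
    range (an assumption of this sketch, not the paper's Eq.4)."""
    lo, hi = min(stimuli), max(stimuli)
    rng = hi - lo
    theta = lo - ft * rng          # threshold
    sat = hi + fs * rng            # saturation point
    return float(np.clip((x - theta) / (sat - theta), 0.0, 1.0))

# Two original options on one attribute dimension.
C, T = 30.0, 50.0
before = response(T, [C, T]) - response(C, [C, T])

# A decoy that introduces a new maximum widens the represented range.
D = 80.0
after = response(T, [C, T, D]) - response(C, [C, T, D])

# With unchanged representation factors (full normalization), the
# differential-response ratio is the inverse of the range ratio.
range_ratio = (D - C) / (T - C)
print(after / before, 1.0 / range_ratio)   # approximately 0.4 for both
```

With full normalization, the differential response to T and C shrinks by exactly the factor by which the decoy expands the stimulus range.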

For the above simulations, we assumed that the adjustment to a new set of stimuli is perfect, such that the neural response, in terms of the representation factors, stays the same. However, it is possible that due to biophysical constraints this adjustment is not fully realized (i.e. partial normalization), while neurons still represent each stimulus with a distinct response. To incorporate partial range normalization, we set the threshold and saturation points after the introduction of the new stimulus to (7)
where the fully adjusted threshold and saturation points are those described by Eq.6, and the degree of range normalization is a quantity between 0 and 1. The extra conditions ensure that all stimuli are represented with distinct responses. If this quantity is 0, the neural response is not range normalized to the presentation of the new stimulus, and if it is 1, the range normalization is complete. Examples of partial range normalization and the resulting change in the values of the two original options are shown in Figure 4C. These results show how the degree of range normalization can control the decoy effects.
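Eq.7 amounts to a linear interpolation between the old dynamic range and the fully range-normalized one. A minimal sketch, with `delta` as our placeholder name for the degree of normalization:

```python
def partial_adjust(theta_old, sat_old, theta_full, sat_full, delta):
    """Linear interpolation between the pre-decoy dynamic range and
    the fully range-normalized one (our reading of Eq.7). `delta` is
    a placeholder name for the degree of range normalization:
    delta = 0 leaves the response unadjusted, delta = 1 gives
    complete range normalization."""
    theta = (1.0 - delta) * theta_old + delta * theta_full
    sat = (1.0 - delta) * sat_old + delta * sat_full
    return theta, sat

# Halfway adjustment toward a wider fully-normalized range.
print(partial_adjust(30.0, 50.0, 25.0, 80.0, 0.5))   # (27.5, 65.0)
```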

A factor limiting the ability of neural responses to distinguish between different stimuli is the ubiquitous noise in the nervous system [34]. The effects of noise on range normalization are beyond the scope of this work; however, we considered one basic consequence of including noise in our range-normalization model. We assumed that, for neural responses to remain distinguishable in the presence of noise, the slope of the neural response (k) cannot be arbitrarily small. Therefore, we imposed an extra constraint on the neural representation that prevents the slope from becoming smaller than a minimum value. By adding this constraint to the RN model (see Methods for details), we found that the change in the differential response to the original options reaches a plateau when the decoy is very far from the original options (Figure 4D). This property is psychologically plausible; however, it cannot be tested with our data because our experiment did not include very distant decoys.

#### Behavior and predictions of the RN model

So far, we have shown how decoy introduction changes the neural response to original options based on how neurons represent a given attribute. Here we demonstrate how decoy introduction changes the preference between the original (T and C) options as observed in our experiment.

We first show how range normalization results in the attraction effect when a decoy that asymmetrically dominates T (but not C) is introduced. The difference between option values before and after decoy introduction is equal to (using Eq.3)(8)
(9)
where is the neural response to option X after the decoy introduction. By dividing the last equation by and using Eq.8 we obtain
The first term in the last expression is less than one because the decoy introduces a new maximum in dimension 1, and the second term is larger than one because the decoy introduces neither a new minimum nor a new maximum in dimension 2 (see Figure 4). Therefore, the sum of the parenthetical terms is negative, which shows that decoy introduction makes C preferred to T.
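A toy two-dimensional example, under a simplified linear-response reading (representation factors set to zero, and only the set minimum and maximum used, rather than the full Eq.6), reproduces the sign of this effect: a decoy that dominates T but not C stretches the range on T's strong dimension and shifts the preference toward C. All numbers are illustrative.

```python
def dim_response(x, values):
    """Linear response normalized to the set range on one dimension
    (representation factors set to zero; only min and max are used,
    a simplification of the model's Eq.6)."""
    lo, hi = min(values), max(values)
    return (x - lo) / (hi - lo)

def overall_value(option, options, w=(0.5, 0.5)):
    """Attribute-weighted sum of normalized responses (Eq.3)."""
    return sum(wi * dim_response(option[d], [o[d] for o in options])
               for d, wi in enumerate(w))

C = (0.3, 0.8)            # better on dimension 2
T = (0.7, 0.4)            # better on dimension 1
D = (0.9, 0.5)            # dominates T on both dimensions, but not C

before = overall_value(T, [C, T]) - overall_value(C, [C, T])
after = overall_value(T, [C, T, D]) - overall_value(C, [C, T, D])
print(before, after)      # 0.0, then a negative value: C is preferred
```

The decoy enlarges the represented range only on dimension 1, compressing T's advantage there while leaving C's advantage on dimension 2 intact, so the value difference turns negative.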

We then simulated the change in preference due to decoy introduction at different locations (see Methods for details). We assumed that option attributes on a given dimension (e.g. monetary value) are represented by a neural population selective to that attribute (an attribute-selective population). The attribute-selective populations in turn project to neural populations each representing the overall value of an individual option (option-selective populations). The strength of these projections determines the weight of each attribute dimension in the overall value (Eq.3). Subsequently, the outputs of the option-selective populations project to a decision-making circuit, allowing the model to choose between the available options.

We found that the values of the existing options are decreased or increased depending on the location of the decoy, and these changes reach maximal values when the decoy is at a certain distance from the existing options (Figures 5A and 5B). The fact that decoy effects do not increase indefinitely as the decoy moves farther from the original options is due to the inclusion of noise in the model.

To better compare the behavior of the RN model with the CDA model and the experimental data, we calculated the models' average choice behavior for decoys at locations in the attribute space that qualitatively match the experimental design (the same as in Figure 5A). We found that, similar to the CDA model, the RN model captures the attraction and asymmetrically dominant decoy effects, but it does not capture similarity effects without including an asymmetry in the representation factors of the two attributes (Figure 5C, and Figure S6 in Text S1). Interestingly, the behavior of the RN model with representation factors equal to zero is qualitatively similar to that of the CDA model with loss-neutrality (Figure 3). To address between-subject variability, we simulated the model over a wide range of representation factors and found that, overall, the average behavior of many simulated subjects follows the same trend as the model with zero representation factors (Figure 5D). However, in contrast to the CDA model, the decoy effects were stronger for far decoys than for close decoys. In addition, we found a significant anticorrelation between the decoy effects for the attraction and asymmetrically dominant decoys (Figure 5E).

The CDA and RN models presented above account for context effects based on very different assumptions and premises, and they predict different patterns of decoy effects for far and close decoys. More importantly, the different mechanisms underlying context effects in the two models result in very different predictions regarding the influence of choice set size on these effects, as described below.

#### Biophysical plausibility and set size

Although the CDA model captures most context effects, it is unclear how the computations required by this model could be implemented biologically, due to two main issues. First, in order to compute the advantage and disadvantage, every pair of options in the choice set must be compared. This causes a combinatorial problem: as the choice set becomes larger, the number of required comparisons grows as n(n−1)/2, where n is the number of options in the choice set. Second, the CDA model asserts that the introduction of each new option results in the addition of a non-negative value to every available option in the choice set; therefore, as the number of options in a given choice set increases, the value of every option in that set increases. This implies that the value of an option depends not only on the other options in a given choice set but also on the size of that set.
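The contrast in scaling can be made concrete with a hypothetical resource count: one unit per pairwise comparison for a comparison-based model versus one option-selective population per option for the RN model (the exact accounting in the paper's simulations may differ).

```python
from math import comb

def cda_comparisons(n):
    """Pairwise comparisons needed when every pair of options must be
    compared, as in the CDA model: n * (n - 1) / 2."""
    return comb(n, 2)

def rn_populations(n):
    """In the RN model, each added option engages one additional
    option-selective population, so the count grows linearly."""
    return n

for n in (2, 4, 8, 16):
    print(n, cda_comparisons(n), rn_populations(n))
# e.g. 16 options: 120 pairwise comparisons vs 16 populations
```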

In order to illustrate the effect of set size on valuation in the CDA model, we computed the value of an option at different locations of the attribute space as a function of the number of equally preferable options in the choice set. We found that option value increases linearly with the number of options in the choice set, at every location of the attribute space (Figure 6A). This is a direct consequence of the fact that in the CDA model, the relative advantage always adds a non-negative value to the overall value of a given option. Therefore, the same option has a larger value when it is part of a larger choice set (Figure 6B); in addition, the overall value of the options in the choice set increases supra-linearly with the choice set size (Figure 6C). The former suggests that the difference between the values of two options in a given choice set should grow as the set size increases, resulting in better value discrimination in a larger choice set.

The mechanisms underlying context effects, which rely on pairwise comparisons between all options in the choice set, imply that the resources required to compute context effects should increase supra-linearly with the choice set size. To demonstrate this point, we used the network structure of the LCA model [24] to calculate the required computational resources in the CDA model or any of its equivalent neural models (see Methods for more details). We found that these computational resources indeed increase supra-linearly with the choice set size (Figure 6D).

Finally, we explored the influence of set size on valuation in the RN model by computing changes in valuation due to decoy introduction for different numbers of options in the choice set (Figure 6E). We found that the choice set size does not have a significant effect on valuation, and the overall value of the decoy does not change with the choice set size (Figure 6F). Moreover, the overall value of the options in the choice set, as well as the required computational resources, increases only linearly with the choice set size (Figures 6G and 6H). This is because in our model the computations required for context effects do not involve pairwise comparisons and depend only on the configuration of option values in individual dimensions. Therefore, in contrast to the CDA model, the RN model does not predict an increase in option values as the choice set size increases. These contrasting predictions of the two models can be tested in future experiments.

Table 1 summarizes the overall decoy effects predicted by the CDA and RN models, and the actual effect sizes for different decoy types. Most effects are in the predicted direction and are significant. Note that the RN model correctly predicts both the influence of distance on the decoy effects and the anti-correlation between the effects for attraction and asymmetrically dominant decoys.

### Discussion

The prevalent influence of context on decision-making has long been considered an “anomaly” against the normative account of human choice behavior [35], [36]. The reason is that normative theories of choice typically assume that values are computed independently for each stimulus, rather than comparatively. The guiding metaphor for these normative theories of valuation and choice is a naïve theory of perception in which separate valued objects are perceived as encapsulated units and then integrated by a decision architecture. Of course, this view tends to disregard decades of evidence about how the visual system uses top-down encoding, neural adaptation and normalization, and gestalt principles in integrating multiple percepts.

In this spirit, we propose that context effects are a natural consequence of the biophysical limits of neural processing in the brain, as shown for other aspects of perception and choice [37]–[39]. We construct a model for context effects based on plausible biophysical mechanisms that enable neurons to efficiently adjust their responses to the set of available stimuli. Both the effects of context on neural representation and the normalization to the set of stimuli have been extensively documented in the auditory [40], [41] and visual [42]–[44] domains, where neurons are required to represent and encode external stimuli presented against very different backgrounds. Moreover, adaptation is an efficient way for the nervous system to adjust to the variable statistics of the environment in order to improve its local information capacity or discriminability power [45]–[50].

In our model, we explored one possible class of neural adjustments (range normalization) during valuation and choice using two main assumptions. First, neurons utilize their entire biophysical dynamic range to represent a set of stimuli. It is possible, however, that neurons never reach their maximum biophysical firing rates and instead fire at intermediate rates under many conditions (i.e., stimulus sets). This only implies that the upper representation factor, fs, should be positive (see Eq.5) and does not qualitatively change the behavior of our model. Similarly, if neurons never represent any stimulus with a zero firing rate, this only implies positive values for the lower representation factor, ft. Second, we assume that range normalization depends only on the configuration of the stimulus set and not on the number of stimuli. Incorporating other parameters into response-normalization mechanisms does not contradict our proposal, but it may change the resulting context effects. Here we consider only one form of range normalization to explain some of the basic effects of context on choice preference. Future work will explore the consequences of other types of neural adjustments on context-dependent choice behavior (see below).

Another recent study has shown that neurons in the lateral intraparietal cortex (LIP) exhibit context-dependent effects by encoding the value of the saccade in the response field relative to the values of all alternative saccade movements [52]. The authors used a divisive normalization model to account for their experimental findings. More specifically, the response to the value of the saccade in the receptive field is divided by the weighted responses to the saccadic values of all options presented in the choice set, similar to what has been proposed for sensory neurons [55], [56]. Therefore, due to divisive normalization, the value of each given option is globally scaled by the value of all the alternative options. In contrast, in our range-normalization model, the representation of each attribute dimension depends on the set of presented values, not their sum (Figure 5B). Divisive normalization can account for relative value coding but does not predict any type of attraction effect, because decoy introduction always suppresses the responses to the target and the competitor without any change in the ranking of the options. However, it is possible that our proposed range normalization and divisive normalization play roles during different stages of the decision process: range normalization operates at the early stage, when cortical neurons have to represent individual features of each option, while divisive normalization operates at the final stages (e.g. in LIP), when the overall values associated with different actions need to be represented to control selection processes (e.g. saccades).

A number of psychological models have used attribute comparison as the basic mechanism to account for the attraction and other decoy effects. The CDA model presented here was chosen as an example of such models because it accounts for the attraction and asymmetrically dominant decoy effects and provides testable predictions thanks to its simple yet clear mathematical formulation. However, the CDA model, or any other model that relies on attribute comparison, suffers from a few important issues. First, such models predict that the values of all options (or at least those of the best and worst options) increase as the choice set grows, which implies that options presented as part of a larger choice set can be differentiated more easily than when they are presented in a smaller set. This prediction is in contrast with experimental evidence showing that discriminability between items decreases as the set size increases [57], and that the neural representation of option values decreases as the number of alternatives increases [52]. Second, in such models, the resources required for the computation of context effects increase supra-linearly with the choice set size. The CDA model also predicts that decoy effects are larger for closer decoys. This is somewhat counterintuitive, as it predicts maximal decoy effects for very similar but dominated decoys, even though such decoys should have little or no effect on the preference for the nearby dominant option from which they are hardly distinguishable.

Recently, more sophisticated connectionist models have been proposed to capture the attraction and other context effects, such as the compromise and similarity effects. Two such connectionist models are the decision field theory (DFT) [26] and leaky competing accumulator (LCA) [24] models. While in both models attention determines which attribute is compared at a given time, the two models rely on different mechanisms to account for the attraction effect: the DFT model relies on bi-directional, distance-dependent inhibition, while the LCA model depends on loss aversion. However, because both the DFT and LCA models require attribute comparison at some stage of processing (similar to the CDA model), they suffer from the same combinatorial problem as the CDA model. In contrast, our model relies on range normalization of neural responses, which is adjusted only once regardless of the number of options, and therefore does not suffer from this issue.

There are other psychological models of context effects that do not rely on attribute comparison as the basic mechanism. Most of these models are based on heuristics and are not mathematically well formulated. These include, but are not limited to, the so-called weight-change, value-shift, and value-added models [21]. The weight-change model assumes that adding a new alternative changes the relative weights of different attributes: it reduces the weight of a given attribute if the range on that attribute is extended, and increases the weight if the number of distinct attribute values is increased. The value-shift model, on the other hand, assumes that the decoy changes the subjective evaluation of the attribute values, mainly based on the relative position of the decoy with respect to the rest of the options (as in range-frequency theory [58]). Finally, the value-added model assumes that decoy introduction adds value to the original options, depending on the relational properties of the decoy and each original option. Our range-normalization model shares some similarities with the value-shift model in the sense that it assumes that the decoy value on a given attribute changes the value representation on that attribute independently of the other attributes. Moreover, for the limited case in which the representation factors are equal, the effective weight of a given dimension is inversely proportional to the range of values on that dimension (although there is no explicit relationship to the frequency effects of the weight-change model). Despite these similarities, our model relies on very different assumptions to explain the decoy effects and generates a number of novel predictions, while the previous models are difficult to generalize because of their lack of mathematical formalization.

Still another set of models, from economics and marketing [59][61], assume that consumers are not sure what they prefer, but those consumers infer reasonable preferences from what options are available (as if mere option availability is advice). Decoys have an influence because they shape the consumer's idea of what might be a good choice. Comparison of these models with the CDA, RN and others is an interesting area for future research.

Context is a powerful modulator of how underlying preferences are constructed and choices are made, as documented by many behavioral experiments and field studies [35], [62]. At the theoretical level, however, most of the attempts to account for context effects have neglected the computational constraints faced by the brain in order to compare choice options characterized by several different attributes. In this paper we show that considering plausible biophysical constraints of the nervous system can indeed account for a few important aspects of context effects. The range-normalization model we proposed here has a reduced computational cost relative to competing models and at the same time produces accurate empirical predictions. More importantly, it enables us to connect plausible biophysical constraints of neural representation to the biases in the human choice behavior.

### Methods

#### Ethics statement

All participants gave informed consent to participate according to a protocol approved by the California Institute of Technology Institutional Review Board.

The experiment consisted of two parts in which subjects selected between different monetary gambles. In the first part (estimation task), the subject selected between two gambles with different reward probabilities and magnitudes. We used the subject's choices in this task to estimate his/her attitude toward risk and to tailor equally preferred target (T) and competitor (C) gambles. In the second part of the experiment (decoy task), we assessed the preference between the target and competitor gambles in the presence of a third gamble. The subjects were told to consider every trial as equally important because at the end of the experiment only one trial would be randomly extracted and the selected gamble on that trial would be played for real. To further encourage subjects to pay attention to every trial, we deducted $1 from the final compensation for each missed response.

In total, 22 healthy Caltech male students (22±4 years old) took part in the study. One subject was excluded from the data analysis because he showed an erratic pattern of gamble selection during the estimation task. This was reflected in a poor fit of his choice behavior (his sensitivity to reward magnitude was 7 times smaller than the mean of the group; see Figure S2 in Text S1 for the distribution), which prevented a reliable estimation of his indifference point.

#### Estimation task

In the estimation task, we assessed individual subjects' risk attitudes using selections between two monetary gambles. The assessment procedure was an adaptation of the widely used method for estimating the indifference point originally developed by Holt and Laury [63]. Every subject completed four equivalent sessions, each consisting of 40 trials. On each trial, the subject had 4 seconds to evaluate two gambles while the instruction message "Evaluate" was on the screen.
After this interval, the instruction message was changed to "Choose" and the subject had 2 seconds to indicate their choice using a keyboard. Each gamble was defined by two parameters (p, M), the probability p of winning a monetary reward of magnitude M, which were presented on the screen in different colors. One gamble was characterized by a small reward magnitude but a large reward probability (the low-risk or target gamble). The other gamble had a large reward magnitude but a small reward probability (the high-risk or competitor gamble). We fixed the magnitude and probability of the low-risk gamble (p = 0.7, M = $20±2) while we varied the magnitude of the high-risk gamble between $30 and $80 (p = 0.3).

#### Decoy task

In the second part of the experiment, we tested how the presence of different decoy gambles influences the preference between the low-risk and high-risk gambles. The low-risk gamble (T) was set to have a magnitude M of $20±2 and a probability p of 0.7±0.05. The high-risk gamble (C) was set to have a probability p = 0.3±0.05, while its magnitude was tailored individually using the indifference point from the estimation task so that subjects were indifferent between T and C. Finally, decoy gambles (D) were designed to have a wide range of magnitude and probability values (Figure 1). Specifically, we varied the probability of the decoy between 0.15 and 0.85, while we varied its reward magnitude by 30% of the reward magnitude of the gamble closest to the decoy.

The task sequence was as follows. Three gambles (T, C and D) were presented on the screen for 8 seconds (evaluation period) while the “Evaluate” message was on the screen. The subjects were told to evaluate the three gambles during this period. Once the evaluation time was over, the message “Evaluate” was changed to “Choose” and simultaneously, one of the three gambles was randomly removed from the screen. The subjects then had 2 seconds to choose between the two remaining gambles by pressing a keypad (selection period). The decoy task was conducted in the MRI scanner (Siemens Trio); however, the fMRI data are neither analyzed nor presented here as they are beyond the scope of this paper. The main reason for not including the fMRI data here was that none of the models presented in this paper generates predictions that could be tested using BOLD-level signals.

On one third of the trials (catch trials), either the C or T gamble disappeared. These trials were included to prevent the subject from predicting which gamble would disappear after the evaluation period, and were subsequently excluded from the analysis. On the remaining two thirds of the trials (regular trials), the decoy gamble disappeared, allowing us to study how the presence of this option in the choice set influences the preference between C and T. Using this design (i.e. a phantom decoy design), we were able to examine the effects of decoys that were preferred over the C or T gambles. Finally, we used a short choice period (2 seconds) to prevent subjects from reevaluating the two remaining gambles. In fact, the only way to perform this task efficiently was to rank the 3 gambles during the evaluation period and to use this ranking during the choice period. Debriefing after the study confirmed that a large majority of the subjects used this "ranking strategy", which was also reflected in the dependence of the RT on the decoy (Figure S7 in Text S1).

#### Range-normalization model

The range-normalization model consists of three layers of neural populations: the attribute-selective, option-selective, and decision-making populations. The attribute-selective layer consists of two neural populations that represent the two attributes of the options. The attribute-selective populations project to the option-selective layer, which consists of neural populations each of which represents the subjective value of an option in the choice set. The subjective values of the options are determined by the weights of the connections from the attribute-selective layer to the option-selective layer (Eq.3). Finally, the outputs of the option-selective populations project to the corresponding populations in the decision-making layer. The decision-making network is similar to what has previously been used to simulate various reward-dependent choice behaviors [37], [64].

Here we were interested only in the outcome of the decision-making process; therefore, we did not simulate the decision-making network on every trial. Instead, we used a sigmoid function, which has been shown to describe the choice behavior of the decision-making network very well [37], [64], to compute the choice probability for a given set of inputs to the decision network. More specifically, the probability of selecting T is equal to (10)
where RT and RC are the responses of the option-selective populations for the target and competitor (Eq.3), and the remaining quantities in Eq.10 are the strength of the connections from the option-selective to the decision-making populations and a model parameter determined by the architecture of the decision-making network and the overall strength of its inputs [37], [64].
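A sketch of this choice rule, with a single `gain` parameter standing in for the connection strength and network parameter of Eq.10 (their exact symbols are not reproduced here):

```python
import math

def p_choose_target(r_target, r_competitor, gain=1.0):
    """Sigmoid choice probability as a function of the difference in
    option-selective responses (a sketch of Eq.10; `gain` lumps
    together the connection strength and the network parameter,
    whose exact symbols are not reproduced here)."""
    return 1.0 / (1.0 + math.exp(-gain * (r_target - r_competitor)))

print(p_choose_target(0.6, 0.6))             # equal responses -> 0.5
print(p_choose_target(0.8, 0.4, gain=5.0))   # target is favored
```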

In order to obtain the neural response of the attribute-selective populations to a given stimulus set, we used Eq.6 to calculate the threshold and saturation points, which uniquely define the neural response through Eq.4. To calculate the neural response after decoy introduction, we first identified the minimum and maximum and the next-to-minimum and next-to-maximum stimuli in the stimulus set, and then used Eq.6 to compute the threshold and saturation points.

For the simulations presented in Figure 4C, we used Eq.7 to calculate the partially adjusted threshold and saturation points. For the simulations presented in Figure 4D, an additional constraint on the slope of the neural response was imposed as follows. For a given decoy location, we calculated the threshold and saturation points, from which the slope could be determined. If the slope was below the minimum value (0.015 in the simulations presented in this paper), we increased the threshold and decreased the saturation point in a stepwise fashion until the slope became larger than the minimum slope value. In order to simulate decoy effects in the two-dimensional attribute space, the same procedure was applied to each attribute dimension independently. For the simulations presented in Figures 5D and 5E, the representation factors were selected from combinations of values for each attribute dimension.
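The stepwise slope adjustment can be sketched as follows; the step size is our assumption, and the slope is taken to be the reciprocal of the dynamic range (saturation minus threshold):

```python
def enforce_min_slope(theta, sat, k_min=0.015, step=0.01):
    """Stepwise version of the slope constraint used for Figure 4D:
    raise the threshold and lower the saturation point until the
    response slope, taken here as 1 / (sat - theta), is at least
    k_min. The step size is our assumption."""
    while 1.0 / (sat - theta) < k_min:
        theta += step    # raise the threshold
        sat -= step      # lower the saturation point
    return theta, sat

# A range of 100 gives slope 0.01 < 0.015, so the range is narrowed.
theta, sat = enforce_min_slope(0.0, 100.0)
print(theta, sat, 1.0 / (sat - theta))
```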

Finally, to calculate the required computational resources in our model, we assumed that the addition of each option to the choice set requires the engagement of one neural population to represent the subjective value of the new option, that is, one additional option-selective population. In contrast, in a network implementation of the CDA model, such as the LCA model, the addition of each option requires the engagement of several neural populations to compare each attribute of the new option with those of the existing options. As a result, the required computational resources in the CDA model increase supra-linearly with the number of options in the choice set. All simulations were performed using custom-made code in MATLAB.

#### Data analysis

For the statistical tests presented in the paper, we have provided the conventional significance values in addition to the applied test. In order to quantify the decoy effects, we used the overall preference for the target gamble and the preference for the target gamble for a given decoy to define the decoy efficacy,
Based on this definition, the decoy efficacy is bounded between −1 and 1. Note that using the preference for C to define the decoy efficacy gives results similar to those presented here.
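A sketch of one definition consistent with the description and the stated bound of [−1, 1], namely the difference between the preference for T under a given decoy and the overall preference for T (the paper's exact formula is not reproduced here):

```python
def decoy_efficacy(p_target_given_decoy, p_target_overall):
    """Difference between the preference for T under a given decoy
    and the overall preference for T. This is our assumed reading of
    the definition: it uses exactly the two quantities named in the
    text and is bounded between -1 and 1."""
    return p_target_given_decoy - p_target_overall

print(decoy_efficacy(0.65, 0.50))   # positive: this decoy favors T
```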

### Supporting Information

Text S1.

A PDF file containing additional analysis of the CDA and RN models, and the supplementary figures.

doi:10.1371/journal.pcbi.1002607.s001

(PDF)

### Acknowledgments

We thank Peter Dayan and Zahra Ayubi for useful comments on the manuscript. We also thank Antonio Rangel for helpful discussions and comments on the experimental design.

### Author Contributions

Conceived and designed the experiments: AS BDM CC. Performed the experiments: AS BDM. Analyzed the data: AS BDM. Wrote the paper: AS BDM CC. Constructed the model: AS BDM. Simulated the model: AS. Analyzed the simulation results: AS.

### References

1. Luce RD (1959) Individual choice behavior: A theoretical analysis. New York: John Wiley and Sons.
2. Debreu G (1960) Review of R. D. Luce, Individual choice behavior: A theoretical analysis. Am Econ Rev 50: 186–188.
3. Von Neumann J, Morgenstern O, Rubinstein A, Kuhn HW (2007) Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
4. Glimcher PW (2008) Choice: towards a standard back-pocket model. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision making and the brain. New York: Academic Press. pp. 503–521.
5. Rangel A (2008) The computation and comparison of value in goal-directed choice. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision making and the brain. New York: Academic Press. pp. 425–439.
6. Huber J, Payne J, Puto C (1982) Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. J Consum Res 9: 90–98.
7. Tversky A, Simonson I (1993) Context-dependent preferences. Manage Sci 39: 1179–1189.
8. Huber J, Puto C (1983) Market boundaries and product choice: Illustrating attraction and substitution effects. J Consum Res 10: 31–44.
9. Dhar R (1996) Similarity in context: Cognitive representation and violation of preference and perceptual invariance in consumer choice. Organ Behav Hum Dec 67: 280–293.
10. Bateman IJ, Munro A, Poe G (2008) Decoy effects in choice experiments and contingent valuation: asymmetric dominance. Land Econ 84: 115–127.
11. Lehmann D, Pan Y (1994) Context effects, new brand entry, and consideration sets. J Marketing Res 31: 364–374.
12. Herne K (1997) Decoy alternatives in policy choices: Asymmetric domination and compromise effects. Eur J Polit Econ 13: 575–589.
13. Herne K (1999) The effects of decoy gambles on individual choice. Exp Econ 2: 31–40.
14. Hurly T, Oseen M (1999) Context-dependent, risk-sensitive foraging preferences in wild rufous hummingbirds. Anim Behav 58: 59–66.
15. Shafir S, Waite TA, Smith BH (2002) Context-dependent violations of rational choice in honeybees (Apis mellifera) and gray jays (Perisoreus canadensis). Behav Ecol Sociobiol 51: 180–187.
16. Bateson M, Healy SD, Hurly TA (2003) Context-dependent foraging decisions in rufous hummingbirds. Proc Biol Sci 270: 1271–1276.
17. Noonan MP, Walton ME, Behrens TEJ, Sallet J, Buckley MJ, et al. (2010) Separate value comparison and learning mechanisms in macaque medial and lateral orbitofrontal cortex. Proc Natl Acad Sci U S A 107: 20547–20552.
18. Masicampo EJ, Baumeister RF (2008) Toward a physiology of dual-process reasoning and judgment: lemonade, willpower, and expensive rule-based analysis. Psychol Sci 19: 255–260.
19. Slaughter JE, Sinar EF, Highhouse S (1999) Decoy effects and attribute-level inferences. J Appl Psychol 84: 823–828.
20. De Martino B, Kumaran D, Seymour B, Dolan RJ (2006) Frames, biases, and rational decision-making in the human brain. Science 313: 684–687.
21. Wedell DH (1991) Distinguishing among models of contextually induced preference reversals. J Exp Psychol Learn 17: 767–778.
22. Bhargava M, Kim J, Srivastava R (2000) Explaining context effects on choice using a model of comparative judgment. J Consum Psychol 9: 167–177.
23. Shafir E, Osherson D, Smith EE (1989) An advantage model of choice. J Behav Decis Making 2: 1–23.
24. Usher M, McClelland JL (2004) Loss aversion and inhibition in dynamical models of multialternative choice. Psychol Rev 111: 757–769.
25. Usher M, Elhalal A, McClelland JL (2008) The neurodynamics of choice, value-based decisions, and preference reversal. In: Chater N, Oaksford M, editors. The probabilistic mind: Prospects for Bayesian cognitive science. New York: Oxford University Press. pp. 277–300.
26. Roe RM, Busemeyer JR, Townsend JT (2001) Multialternative decision field theory: a dynamic connectionist model of decision making. Psychol Rev 108: 370–392.
27. Johnson JG, Busemeyer JR (2005) A dynamic, stochastic, computational model of preference reversal phenomena. Psychol Rev 112: 841–861.
28. Rumelhart DL, Greeno JG (1971) Similarity between stimuli: An experimental test of the Luce and Restle choice models. J Math Psychol 8: 370–381.
29. Tversky A (1972) Elimination by aspects: A theory of choice. Psychol Rev 79: 281–299.
30. Payne JW, Bettman JR, Johnson EJ (1993) The adaptive decision maker. New York: Cambridge University Press.
31. Dayan P, Abbott LF (2001) Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.
32. Carandini M, Heeger DJ (2012) Normalization as a canonical neural computation. Nat Rev Neurosci 13: 51–62.
33. Tversky A (1977) Features of similarity. Psychol Rev 84: 327–352.
34. Faisal AA, Selen LPJ, Wolpert DM (2008) Noise in the nervous system. Nat Rev Neurosci 9: 292–303.
35. Kahneman D, Tversky A (1984) Choices, values, and frames. Am Psychol 39: 341–350.
36. Gilovich T, Griffin DW, Kahneman D (2002) Heuristics and biases: The psychology of intuitive judgment. Cambridge, UK: Cambridge University Press.
37. Soltani A, Wang X-J (2006) A biophysically based neural model of matching law behavior: melioration by stochastic synapses. J Neurosci 26: 3731–3744.
38. Soltani A, Wang X-J (2008) From biophysics to cognition: reward-dependent adaptive choice behavior. Curr Opin Neurobiol 18: 209–216.
39. Soltani A, Wang X-J (2010) Synaptic computation underlying probabilistic inference. Nat Neurosci 13: 112–119.
40. Malone BJ, Scott BH, Semple MN (2002) Context-dependent adaptive coding of interaural phase disparity in the auditory cortex of awake macaques. J Neurosci 22: 4625–4638.
41. Bartlett EL, Wang X (2005) Long-lasting modulation by stimulus context in primate auditory cortex. J Neurophysiol 94: 83–104.
42. Allman J, Miezin F, McGuinness E (1985) Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. Annu Rev Neurosci 8: 407–430.
43. Albright TD, Stoner GR (2002) Contextual influences on visual processing. Annu Rev Neurosci 25: 339–379.
44. Clifford CW, Webster MA, Stanley GB, Stocker AA, Kohn A, et al. (2007) Visual adaptation: neural, psychological and computational aspects. Vision Res 47: 3125–3131.
45. Laughlin S (1981) A simple coding procedure enhances a neuron's information capacity. Z Naturforsch 36: 910–912.
46. Field DJ (1987) Relations between the statistics of natural images and the response properties of cortical cells. J Opt Soc Am A 4: 2379–2394.
47. Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381: 607–609.
48. Smirnakis SM, Berry MJ, Warland DK, Bialek W, Meister M (1997) Adaptation of retinal processing to image contrast and spatial scale. Nature 386: 69–73.
49. Fairhall AL, Lewen GD, Bialek W, de Ruyter van Steveninck RR (2001) Efficiency and ambiguity in an adaptive neural code. Nature 412: 787–792.
50. Simoncelli EP, Olshausen BA (2001) Natural image statistics and neural representation. Annu Rev Neurosci 24: 1193–1216.
51. Padoa-Schioppa C (2009) Range-adapting representation of economic value in the orbitofrontal cortex. J Neurosci 29: 14004–14014.
52. Louie K, Grattan LE, Glimcher PW (2011) Reward value-based gain control: divisive normalization in parietal cortex. J Neurosci 31: 10627–10639.
53. Tremblay L, Schultz W (1999) Relative reward preference in primate orbitofrontal cortex. Nature 398: 704–708.
54. Padoa-Schioppa C, Assad JA (2008) The representation of economic value in the orbitofrontal cortex is invariant for changes of menu. Nat Neurosci 11: 95–102.
55. Heeger DJ (1992) Normalization of cell responses in cat striate cortex. Vis Neurosci 9: 181–197.
56. Carandini M, Heeger DJ (1994) Summation and division by neurons in primate visual cortex. Science 264: 1333–1336.
57. Iyengar SS, Lepper MR (2000) When choice is demotivating: can one desire too much of a good thing? J Pers Soc Psychol 79: 995–1006.
58. Parducci A (1974) Contextual effects: A range-frequency analysis. In: Carterette E, Friedman M, editors. Handbook of perception. New York: Academic Press, Vol. 2. pp. 127–141.
59. Wernerfelt B (1995) A rational reconstruction of the compromise effect: Using market data to infer utilities. J Consum Res 21: 627–633.
60. Prelec D, Wernerfelt B, Zettelmeyer F (1997) The role of inference in context effects: Inferring what you want from what is available. J Consum Res 24: 118–125.
61. Kamenica E (2008) Contextual inference in markets: On the informational content of product lines. Am Econ Rev 1–51.
62. Ratneshwar S, Shocker A, Stewart D (1987) Toward understanding the attraction effect: The implications of product stimulus meaningfulness and familiarity. J Consum Res 13: 520–533.
63. Holt CA, Laury SK (2002) Risk aversion and incentive effects. Am Econ Rev 92: 1644–1655.
64. Soltani A, Lee D, Wang X-J (2006) Neural mechanism for stochastic behaviour during a competitive game. Neural Netw 19: 1075–1090.
