Judgment and Decision Making, Vol. 11, No. 1, January 2016, pp. 92-98
Overlap of accessible information undermines the anchoring effect
According to the Selective Accessibility Model of anchoring, the comparison question in the standard anchoring paradigm activates information that is congruent with an anchor. As a consequence, this information will be more likely to become the basis for the absolute judgment, which will therefore be assimilated toward the anchor. However, if the activated information overlaps with information that is elicited by the absolute judgment itself, the preceding comparative judgment should not exert an incremental effect and should fail to result in an anchoring effect. The present studies find this result when the comparative judgment refers to a general category and the absolute judgment refers to a subset of the general category that was activated by the anchor value. For example, participants comparing the average annual temperature in New York City to a high 102 °F judged the average winter, but not summer, temperature to be higher than participants making no comparison. On the other hand, participants comparing the annual temperature to a low –4 °F judged the average summer, but not winter, temperature to be lower than control participants. This pattern of results was also shown in another content domain. It is consistent with the Selective Accessibility Model but difficult to reconcile with other main explanations of the anchoring effect.
Keywords: judgment, heuristics and biases, anchoring, selective accessibility
The anchoring effect denotes the assimilation of a judgment toward a previously considered value. Specifically, the standard anchoring paradigm consists of two questions: a comparison question, which asks for a comparison of the target to a reference point on the judgmental dimension, and the subsequent absolute judgment question about the target value. Typically, the absolute judgment is assimilated toward the reference point. This general procedure was used in the well-known demonstration of the anchoring effect by Tversky and Kahneman (1974), where judgments of the percentage of African nations in the United Nations were drawn toward a randomly generated number that had previously served as a standard of comparison.
The anchoring effect is a robust phenomenon (Klein et al., 2014). It is relevant to diverse domains such as negotiation (Galinsky & Mussweiler, 2001), valuation (Ariely, Loewenstein, & Prelec, 2003, 2006), and legal judgment (Englich, 2006). In the legal domain, it has been shown that prosecutors’ demands, punitive damage caps, or sentencing guidelines may serve as anchors during legal decision making and influence the judgment accordingly (Bennett, 2014; Englich, Mussweiler & Strack, 2006; Robbennolt & Studebaker, 1999). To avoid the biasing influence of the anchoring effect, it is important to understand the underlying psychological processes. This is particularly important for changing courtroom procedures in order to prevent biased judgments (Bennett, 2014).
One of the main explanations of the anchoring effect is the Selective Accessibility Model (Strack & Mussweiler, 1997; see also Chapman & Johnson, 1999). According to this model, people who answer the comparison question engage in “positive hypothesis testing” and selectively seek information that is compatible with the implications of the anchor value (Klayman & Ha, 1987). Because this increases the subsequent accessibility of the information, it is more likely to become the basis for the absolute judgment, which is then assimilated to the anchor. Supporting this view, people were faster to recognize words associated with concepts compatible with the anchor value after the comparison question (e.g., luxury cars after comparing the average car price with a high anchor; Mussweiler & Strack, 2000). Furthermore, the absolute judgment was slower if people had little time to answer the comparison question, possibly because the comparison question activates information used in making the absolute judgment (Mussweiler & Strack, 1999b).
The goal of the present research was to further illuminate the process underlying the anchoring effect by testing a new hypothesis that is derived from the Selective Accessibility Model. According to this model, answering the comparison question selectively activates information that is congruent with an anchor. For example, comparing the average summer temperature in New York City with a high reference point may make more accessible memories of heatwaves and especially hot summer days. The greater accessibility of such anchor-congruent information would then lead to absolute judgments that are assimilated toward the reference point.
However, the comparison question should exert no influence if it activates information that would be used for the absolute judgment in any case. For example, if the comparison question makes information about summer more accessible, this information should not influence absolute judgments about summer temperatures because the same information would have been activated by the absolute question itself; that is, even without answering the preceding comparison question. In general, this may happen if the comparison question asks about a general category and the target of the absolute judgment is a subcategory whose characteristics can be described by the anchor value. The positive hypothesis testing is then conducted within the frame of the general category and activates information that is associated with the subcategory, which is the target of the absolute question. For example, if the comparison question asked about the average annual temperature using a high reference point, the positive hypothesis testing would be conducted within the frame of the whole year. Information activated by this question would be less extreme and associated with lower temperatures than information activated by a comparison question that relates to the average summer temperature (e.g., summer days and hot spring days instead of heatwaves and especially hot summer days). Subsequently, the absolute judgment about the average summer temperature will not be changed by this activated information because it overlaps with information that is used for making the absolute judgment even without the comparison question. That is, the anchoring effect will be eliminated.
The Selective Accessibility Model is thus consistent with a specific directional dependency of the anchoring effect in the situation described above. Namely, a high anchor may have no effect when the comparison question asks about the average annual temperature and the absolute judgment is about the average summer temperature. In contrast, a low anchor should produce the anchoring effect because information activated by the low anchor does not overlap with information that would have been used for the absolute judgment even without the comparison question.
The first proposed explanation of the anchoring effect argued that an anchor influences judgment because people adjust their judgment from the anchor value and the adjustment is usually insufficient (Tversky & Kahneman, 1974). The anchoring-and-adjustment explanation is now invoked mainly to explain the effect of self-generated anchors (Epley & Gilovich, 2001, 2006) and therefore might not operate in the provided example and in the present experiments, where experimenter-provided anchors are used. Moreover, insufficient adjustment would cause the anchoring effect for anchors in both directions and therefore does not predict the pattern of results just described as consistent with the Selective Accessibility Model.
Another explanation of the anchoring effect argues that anchors serve as numeric primes. Studies supporting this explanation show that people are influenced by unrelated numbers when making a numeric judgment (Critcher & Gilovich, 2008; Wilson, Houston, Etling & Brekke, 1996). Since numeric priming occurs even if the numeric prime is unrelated to the absolute judgment, the anchor should influence judgment independently of its direction.
An explanation based on conversational implicatures views anchors as a source of information from which conversational implicatures are derived (Frederick, Mochon & Danilowitz, 2013; Grice, 1975). A low anchor in a question about temperature in New York City thus implies that New York City is a cold place. Conversational implicatures are dependent on their relevance and they should therefore operate mainly when it is possible to derive useful information from an anchor. It can be argued that a high anchor should hold more information about summer temperatures than a low anchor and thus the prediction of this Gricean account would be opposite to the prediction we have described – i.e., the high anchor should influence the absolute judgment more than the low anchor in our example.
Finally, Frederick and Mochon (2012) have recently proposed that the anchoring effect is based on distortion of a response scale by an anchor. Importantly, Mochon and Frederick (2013) argued that scale distortion is largely unaffected by conceptual relevance of the targets of comparison and absolute judgment questions. They showed that scale distortion occurs even when the two questions have different targets, and disappears only if the conceptual difference is large. The present study used only targets that were within the same category, and scale distortion would therefore produce an anchoring effect for anchors in both directions if it were the underlying process.
The first two experiments were conducted to test the described prediction of the Selective Accessibility Model; the third experiment was meant to rule out an alternative explanation.
Six hundred and eight participants recruited on mTurk were assigned to one of three groups. Participants from all groups were asked “How much does an average new small city car cost? [in dollars]”. Beforehand, participants from the high and low anchor groups were given a question asking “Does an average new car cost less or more than $100,000 (high anchor group)/$1,000 (low anchor group)?”. The anchors were chosen such that they were extreme not only for the general category, but also for the subcategory. Based on the Selective Accessibility Model, we expected that the low anchor may activate information about small and cheap new cars which would overlap with the information used for making the subsequent absolute judgment. Therefore, we predicted no difference between low anchor and no-anchor (control) groups, but expected to find the anchoring effect for the high anchor group.
Table 1: Summary results for all studies.

Study | Target of comparison      | Anchor   | Lower/Higher | Target of absolute judgment | M.20 [95% CI]           | p-value
1     | average new car           | $1,000   | 3/204        | average new small city car  | 17923 [17222, 18694]    | .09
      | average new car           | $100,000 | 207/6        | average new small city car  | 21354 [20294, 22457]    | <.001
      | -                         | -        | -            | average new small city car  | 17157 [16562, 17762]    | -
2     | annual temperature in NYC | 102 °F   | 95/2         | winter temperature in NYC   | 35.1 [32.3, 39.3]       | .005
      | annual temperature in NYC | –4 °F    | 4/88         | winter temperature in NYC   | 30.4 [27.1, 33.2]       | .62
      | -                         | -        | -            | winter temperature in NYC   | 29.5 [27.1, 31.9]       | -
      | annual temperature in NYC | 102 °F   | 99/0         | summer temperature in NYC   | 81.5 [79.7, 83.3]       | .28
      | annual temperature in NYC | –4 °F    | 1/99         | summer temperature in NYC   | 78.4 [76.7, 80.1]       | <.001
      | -                         | -        | -            | summer temperature in NYC   | 82.7 [81.3, 84.1]       | -
3     | summer temperature in NYC | 102 °F   | 207/8        | summer temperature in NYC   | 85.5 [84.4, 86.7]       | <.001
      | annual temperature in NYC | 102 °F   | 199/4        | summer temperature in NYC   | 82.4 [81.2, 83.5]       | .89
      | -                         | -        | -            | summer temperature in NYC   | 82.2 [81.0, 83.2]       | -

Note: p-values are obtained from Yuen’s trimmed mean test with comparison to a control group. M.20 = 20% trimmed mean.
Four participants were excluded because they did not provide a numerical answer for the absolute judgment question or provided an obviously nonsensical answer (higher than $100 million). Some of the answers were still implausible, so we used Yuen’s trimmed mean test with a 20 % trim to compare the absolute judgments between groups. Supporting our predictions, while the high anchor group gave higher answers (M.20 = $21,354) than the control group (M.20 = $17,157), t(186.1) = 6.33, p < .001, dR = 0.53, 95% CI [0.33, 0.76], BF = 100201, the low anchor group (M.20 = $17,923) did not differ significantly from the control group, and in fact, the low anchor led to somewhat higher absolute judgments, t(234.1) = 1.69, p = .09, dR = 0.18, 95% CI [–0.03, 0.37], BF = 0.05. Summary results for all studies are in Table 1.
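For readers unfamiliar with the robust test used here, the computation can be written in a few lines. The following is an illustrative Python sketch of Yuen's trimmed-mean test (the function name and structure are ours, not the authors' analysis code): each group's trimmed mean is compared using a standard error based on the winsorized variance, with Welch-style degrees of freedom.

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.20):
    """Yuen's test comparing trimmed means of two independent groups.

    Illustrative sketch; see Wilcox (1992) for the rationale for trimming.
    """
    def group_stats(a):
        a = np.asarray(a, dtype=float)
        n = len(a)
        g = int(np.floor(trim * n))         # values trimmed from each tail
        h = n - 2 * g                       # effective sample size
        tmean = stats.trim_mean(a, trim)    # trimmed mean
        s = np.sort(a)                      # winsorize: clamp the tails
        s[:g] = s[g]
        s[n - g:] = s[n - g - 1]
        wvar = s.var(ddof=1)                # winsorized variance
        d = (n - 1) * wvar / (h * (h - 1))  # squared standard error term
        return tmean, d, h

    m1, d1, h1 = group_stats(x)
    m2, d2, h2 = group_stats(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    # Welch-style degrees of freedom (Yuen, 1974)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p
```

Trimming makes the comparison resistant to the kind of implausibly extreme answers mentioned above, which would otherwise dominate an ordinary t-test.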
Since the low anchor was closer to the average answer to the absolute judgment question than the high anchor, it can be argued that the apparent absence of the anchoring effect in the low anchor condition could have been caused by the anchor’s insufficient distance from the average answer, leading to an effect that is small but not detectable. To test this possibility, we conducted nonlinear regression analysis using a model in which the distance of an anchor to the average answer to the absolute judgment question was included. In particular, the model had the form: RESPONSE = c + LAG · a · (LA − c) + HAG · (a + d) · (HA − c), where c is a parameter estimating the response without an anchor, LAG and HAG are binary variables representing low and high anchor conditions, a is a parameter estimating the anchoring effect (as a proportion of the distance of the anchor) common for both anchoring conditions, LA and HA are values of low and high anchors (i.e., 1,000 and 100,000), and d is a parameter estimating the difference in the anchoring effect for the two anchoring conditions. We then compared this model with a simpler model which did not include the d parameter. That is, a model where the anchoring effect is the same for both anchoring conditions. The more complex model had a significantly better fit, F(1, 589) = 4.63, p = .03. While the a parameter was positive and significant in the simpler model (a = 0.041, t(590) = 6.59, p < .001), it was nonsignificant in the complex model, a = –0.044, t(589) = –1.05, p = .29, suggesting that there was no anchoring effect common for both anchoring conditions. Furthermore, the d parameter was positive and significant in the complex model (d = 0.097, t(589) = 2.06, p = .04), indicating that the anchoring effect differed between the two anchoring groups.
The nonlinear regressions thus showed that the pattern of results we found is not caused only by the different distance of anchors from the average answer to the absolute judgment question.
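The model described above can be fitted with standard curve-fitting tools. The sketch below uses `scipy.optimize.curve_fit` on synthetic data (the real responses are not reproduced here; the simulated parameter values, sample sizes, and noise level are our own illustrative assumptions) to show how c, a, and d are estimated.

```python
import numpy as np
from scipy.optimize import curve_fit

LA, HA = 1_000, 100_000  # low and high anchor values from Experiment 1

def anchoring_model(X, c, a, d):
    """RESPONSE = c + LAG*a*(LA - c) + HAG*(a + d)*(HA - c)."""
    lag, hag = X
    return c + lag * a * (LA - c) + hag * (a + d) * (HA - c)

# Synthetic data for illustration only: an anchoring effect in the
# high-anchor group (a + d = 0.05) and none in the low-anchor group (a = 0).
rng = np.random.default_rng(0)
n = 200
lag = np.r_[np.ones(n), np.zeros(n), np.zeros(n)]  # low-anchor indicator
hag = np.r_[np.zeros(n), np.ones(n), np.zeros(n)]  # high-anchor indicator
resp = anchoring_model((lag, hag), 17_000.0, 0.0, 0.05)
resp = resp + rng.normal(0, 2_000, 3 * n)          # judgment noise

params, _ = curve_fit(anchoring_model, (lag, hag), resp,
                      p0=[15_000.0, 0.0, 0.0])
c_hat, a_hat, d_hat = params
```

Fitting the restricted model with d fixed at 0 and comparing residual sums of squares via an F-test corresponds to the model comparison reported above.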
Another alternative explanation of the null result for the low anchor condition is that some participants misread the absolute judgment question and answered it as if it still referred to the average new car cost instead of the average new small city car cost. If that was the case, the higher estimates of the participants who misread the question could have countered the effect of the anchor. The absolute judgments in the low anchor condition would thus actually come from two distributions – participants who misread the absolute judgment question and participants who answered the correct question and were influenced by the low anchor. This explanation would therefore argue that the low anchor condition might have the same mean absolute judgment as the control condition, but it would predict different variances and distributions. However, a Brown-Forsythe test showed that the two groups do not differ in their variance, p = .22. Similarly, a Kolmogorov–Smirnov test showed that they do not differ in their distribution, p = .31. The data are therefore not clearly consistent with the alternative explanation.
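Both checks are available in standard statistics libraries. A minimal sketch on simulated data (the group means and spreads below are invented for illustration): the Brown-Forsythe test is Levene's test computed around group medians, and the two-sample Kolmogorov-Smirnov test compares the full empirical distributions.

```python
import numpy as np
from scipy import stats

# Simulated judgments for illustration; means and spreads are made up.
rng = np.random.default_rng(1)
low_anchor = rng.normal(17_900, 4_000, 200)  # hypothetical low-anchor group
control = rng.normal(17_200, 4_000, 200)     # hypothetical control group

# Brown-Forsythe test = Levene's test with group medians as centers
_, p_bf = stats.levene(low_anchor, control, center='median')
# Two-sample Kolmogorov-Smirnov test compares the full distributions
_, p_ks = stats.ks_2samp(low_anchor, control)
```

If the low-anchor responses were a mixture of two subpopulations with the same overall mean, that would tend to surface as inflated variance or a distorted distribution, which is exactly what these two tests probe.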
The results show the general pattern consistent with the Selective Accessibility Model. The second experiment attempted to replicate the results with a different category and scale. Furthermore, it used two subcategories in the absolute judgment question – one that we had expected to be influenced by a low anchor and another that we had expected to be influenced by a high anchor.
Six hundred and fourteen participants recruited on mTurk were assigned to one of six groups. Participants from three groups were asked “What is the average winter temperature in New York City? [in degrees Fahrenheit]”. Participants from the other three groups were asked for the corresponding summer temperature. The three groups for each subcategory differed in the comparison question. For each category, one control group was given no comparison question, one group was given a high anchor and one group was given a low anchor. The high anchor question for both categories asked: “Is the average annual temperature in New York City lower or higher than 102 °F?” In the low anchor question, the standard was –4 °F.
Twelve participants were excluded because they failed to provide an answer to the absolute judgment question. Yuen’s trimmed mean test with a 20 % trim revealed higher average winter temperature judgment for the high anchor group (M.20 = 35.1 °F) than for the control group (M.20 = 29.5 °F), t(113.5) = 2.86, p = .005, dR = 0.43, 95% CI [0.16, 0.67], BF = 134, and no difference between low anchor (M.20 = 30.4 °F) and control groups, t(112.6) = 0.49, p = .62, dR = 0.07, 95% CI [–0.24, 0.39], BF = 0.16. On the other hand, the high anchor did not influence judgments of the average summer temperature, t(117.0) = –1.08, p = .28, dR = –0.16, 95% CI [–0.45, 0.16], BF = 0.07, M.20 high = 81.5°F, M.20 control = 82.7 °F, whereas the low anchor did, t(116.5) = –3.87, p < .001, dR = –0.58, 95% CI [–0.92, –0.27], BF = 27.7, M.20 low = 78.4 °F.
Next, we conducted nonlinear regression analyses as in Experiment 1. For the summer temperature absolute judgment question, including the d parameter again significantly improved the model, F(1, 281) = 3.94, p = .05. The parameter a estimating the anchoring effect common for both anchoring conditions was positive and significant in the simple model (a = 0.029, t(282) = 2.92, p = .004), and nonsignificant in the more complex model, a = –0.082, t(281) = –1.37, p = .17. The d parameter estimating the difference in the anchoring effect between the two groups was again positive (d = 0.127, t(281) = 1.89, p = .06). For the winter temperature absolute judgment question, the anchoring parameter a was positive and significant in the simple model (a = 0.030, t(276) = 1.99, p = .05), and nonsignificant in the complex model (a = 0.004, t(275) = 0.07, p = .94). The d parameter was again positive, even though nonsignificant (d = 0.033, t(275) = 0.51, p = .61). However, including the d parameter did not significantly improve the model (F(1, 275) = 0.26, p = .61). While the results of the nonlinear regression analyses were not completely unequivocal, they are again consistent with our hypothesis.
The null effects again do not appear to be a result of misreading the absolute judgment question. The absolute estimates of the average summer temperature did not differ either in their variance, p = .19, or in their distribution, p = .21, between the high anchor and control groups. Similarly, the estimates of the average winter temperature did not differ either in their variance, p = .35, or in their distribution, p = .64, between the low anchor and control groups.
The second experiment successfully replicated the results of the first experiment and extended them to a different category and scale. Furthermore, we showed that the same anchor does not lead to the anchoring effect in cases where it activates information overlapping with information necessary for making the absolute judgment, but leads to the anchoring effect otherwise.
The anchors in the first two experiments were selected such that they were outside of typical responses even for the target of the absolute judgment. Nevertheless, the anchor for which we expected no anchoring effect was always closer to the average absolute judgment than the anchor for which we expected the anchoring effect to occur. While the results of nonlinear regression analyses suggest that the distance to the average absolute judgment is not behind the difference in anchoring effects between the two anchoring conditions, we conducted the third experiment to provide further evidence against this alternative explanation of our results.
Six hundred and fifteen participants recruited on mTurk were assigned to one of three groups. All participants were asked “What is the average summer temperature in New York City? [in degrees Fahrenheit]”. One group served as a control group and was given no preceding comparison question. Another group was given a comparison question “Is the average annual temperature in New York City lower or higher than 102 °F?”. The last group answered the same question, but the target of comparison was summer temperature in New York City instead of annual temperature. Finding the anchoring effect for this group but no effect for the group with annual temperature as the target of comparison would show that the effect we found in the first two studies cannot be explained by insufficient difference between the anchor value and average absolute judgment.
Five participants were excluded because they failed to provide an answer to the absolute judgment question. Yuen’s trimmed mean test with a 20 % trim showed that while the high anchor had no effect if the target of the comparison question was the average annual temperature, t(236.4) = 0.14, p = .89, dR = 0.02, 95% CI [–0.20, 0.24], BF = 0.14, M.20 annual = 82.4 °F, M.20 control = 82.2 °F, it increased the answer to the absolute judgment question if the target of the comparison was the average summer temperature, t(242.2) = 4.15, p < .001, dR = 0.44, 95% CI [0.22, 0.64], BF = 1016, M.20 summer = 85.5 °F. The group comparing the average annual temperature to the high anchor did not differ from the control group in its variance, p = .71, or distribution, p = .93.
The third experiment showed that the anchor used in Experiment 2 influenced the absolute judgment when the same category was used in both questions. This result provides additional support for the conclusion that the absence of the anchoring effect in Experiment 2 was not due to the closeness of the anchor to the typical answers to the absolute judgment question.
The results of three experiments suggest that an anchor influences judgments only if the information it activates goes beyond the information that is elicited by the absolute judgment question. These results are consistent with the Selective Accessibility Model and are difficult to reconcile with other explanations of the anchoring effect that are not based on the information that is activated and included into the judgment.
Based on both the Scale Distortion and Numeric Priming accounts, it could be argued that the anchoring effect was not found in the first two experiments because the anchor was too close to the absolute judgment. However, the results of nonlinear regression analyses were largely inconsistent with this explanation. Furthermore, by showing that the same anchor can influence the absolute judgment only when the target of comparison and the target of the absolute judgment question are the same, the third experiment casts doubt on this explanation. Scale distortion was previously argued to depend on conceptual distance between the two targets of judgment. For example, a question about the weight of a raccoon influences estimated weight of a giraffe, whereas a question about the weight of a tricycle does not (Mochon & Frederick, 2013). However, we would expect only a slight reduction of the effect and not its disappearance, since the conceptual distance between the target of comparison and the target of the absolute judgment was small. A conversational account does not predict the pattern of results either. From this perspective, the anchor that is closer to typical values of a target should be more relevant, and therefore more likely to be considered for the judgment. This is the opposite of what we found.
While the results suggest the operation of selective accessibility, it is also possible that a different, heretofore undescribed, process might be able to explain them. For example, it is possible that the target of the comparative question can itself serve as an anchor when it differs from the target of the absolute question. When estimating the average summer temperature in New York City, previous consideration of annual temperature might serve as a low anchor which would counter the effect of the provided high anchor resulting in no difference in absolute judgments from the control group. Note that this explanation would still be compatible with the Selective Accessibility Model since the consideration of annual temperature might activate information about annual temperature and serve as a low anchor via selective accessibility. Moreover, other processes such as numeric priming and scale distortion seem to be less compatible with this alternative explanation because they would predict that only the numeric information could serve as an anchor. Nevertheless, we hope that the present studies might inspire further development of the described mechanism or other novel alternative accounts of the anchoring effect.
Even though the processes proposed by the other explanations are not sufficient to explain the present results, they may account for anchoring effects under different circumstances (e.g., anchoring and adjustment in case of self-generated anchors; Epley & Gilovich, 2001, 2006). In fact, the proposed processes are not mutually incompatible and it is possible that in some cases the anchoring effect may be a result of different processes working in parallel (Bahník, Englich & Strack, in press; Simmons, LeBoeuf & Nelson, 2010).
Future studies may focus on the circumstances that elicit other processes that may cause anchoring effects. For example, it is possible that scale distortion may influence judgments more if the scale is relatively unknown or when people do not possess much knowledge about the target of judgment.
Mussweiler and Strack (1999a) argued that the activated information must not only be accessible but also applicable for the absolute judgment and representative of its target. The present study suggests that even if these conditions apply, the activated information may lead to the anchoring effect only if it is informative beyond the information that would have been used in the first place to generate the absolute judgment.
Algina, J., Keselman, H.J., & Penfield, R. D. (2005). An alternative to Cohen’s standardized mean difference effect size: A robust parameter and confidence interval in the two independent groups case. Psychological Methods, 10, 317–328.
Ariely, D., Loewenstein, G., & Prelec, D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118, 73–106.
Ariely, D., Loewenstein, G., & Prelec, D. (2006). Tom Sawyer and the construction of value. Journal of Economic Behavior & Organization, 60, 1–10.
Bahník, Š., Englich, B., & Strack, F. (in press). Anchoring effect. In R. F. Pohl (Ed.). Cognitive illusions: Intriguing phenomena in thinking, judgment, and memory (2nd ed.). Hove, UK: Psychology Press.
Bennett, M. W. (2014). Confronting cognitive “anchoring effect” and “blind spot” biases in federal sentencing: A modest solution for reforming a fundamental flaw. The Journal of Criminal Law & Criminology, 104, 489–534.
Chapman, G. B., & Johnson, E. J. (1999). Anchoring, activation, and the construction of values. Organizational Behavior and Human Decision Processes, 79, 115–153.
Critcher, C. R., & Gilovich, T. (2008). Incidental environmental anchors. Journal of Behavioral Decision Making, 21, 241–251.
Englich, B. (2006). Blind or biased? Justitia’s susceptibility to anchoring effects in the courtroom based on given numerical representations. Law & Policy, 28, 497–514.
Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making. Personality and Social Psychology Bulletin, 32, 188–200.
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic. Psychological Science, 12, 391–396.
Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17, 311–318.
Frederick, S., & Mochon, D. (2012). A scale distortion theory of anchoring. Journal of Experimental Psychology: General, 141, 124–133.
Frederick, S., Mochon, D., & Danilowitz, J. (2013). Anchoring as inference. Yale University working paper.
Galinsky, A. D., & Mussweiler, T. (2001). First offers as anchors: The role of perspective-taking and negotiator focus. Journal of Personality and Social Psychology, 81, 657–669.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.). Syntax and semantics. Vol. 3: Speech acts (pp. 41–58). New York: Academic Press.
Klayman, J., & Ha, Y. W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211–228.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, Š., Bernstein, M. J., ... & Nosek, B. A. (2014). Investigating variation in replicability: A “Many labs” replication project. Social Psychology, 45, 142–152.
Kruschke, J. K. (2013). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General, 142, 573–603.
Mochon, D., & Frederick, S. (2013). Anchoring in sequential judgments. Organizational Behavior and Human Decision Processes, 122, 69–79.
Mussweiler, T., & Strack, F. (1999a). Comparing is believing: A selective accessibility model of judgmental anchoring. European Review of Social Psychology, 10, 135–167.
Mussweiler, T., & Strack, F. (1999b). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35, 136–164.
Mussweiler, T., & Strack, F. (2000). The use of category and exemplar knowledge in the solution of anchoring tasks. Journal of Personality and Social Psychology, 78, 1038–1052.
Robbennolt, J. K., & Studebaker, C. A. (1999). Anchoring in the courtroom: The effects of caps on punitive damages. Law and Human Behavior, 23, 353–373.
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237.
Simmons, J. P., LeBoeuf, R. A., & Nelson, L. D. (2010). The effect of accuracy motivation on anchoring and adjustment: do people adjust from provided anchors?. Journal of Personality and Social Psychology, 99, 917–932.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437–446.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Wilcox, R. R. (1992). Why can methods for comparing means have relatively low power, and what can you do to correct the problem? Current Directions in Psychological Science, 1, 101–105.
Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 125, 387–402.
The research was supported by Deutsche Forschungsgemeinschaft (DFG-RTG 1253/2). We would like to thank Anand Krishna and Marek Vranka for their helpful comments and Jonathan Baron for suggesting the nonlinear regression analysis.
Copyright: © 2016. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.