Judgment and Decision Making, vol. 6, no. 1, February 2011, pp. 139-146

Biased calculations: Numeric anchors influence answers to math equations

Andrew R. Smith*   Paul D. Windschitl#

People must often perform calculations in order to produce a numeric estimate (e.g., a grocery-store shopper estimating the total price of his or her shopping cart contents). The current studies were designed to test whether estimates based on calculations are influenced by comparisons with irrelevant anchors. Previous research has demonstrated that estimates across a wide range of contexts assimilate toward anchors, but none has examined estimates based on calculations. In two studies, we had participants compare the answers to math problems with anchors. In both studies, participants’ estimates assimilated toward the anchor values. This effect was moderated by time pressure: anchoring effects were larger when a restrictive time limit constrained participants’ ability to engage in calculations.


Keywords: anchoring, bias, heuristics, calculations, math, numeric priming, magnitude priming.

1  Introduction

When calculating numeric estimates, people are often confronted with both relevant and irrelevant information. For example, a grocery-store shopper who is trying to calculate the total cost of his grocery cart contents might see that a new gas grill is on sale for $199.99. Will the shopper’s estimate be influenced by the irrelevant cost of a new grill? Or, more generally, are people influenced by irrelevant numeric values (i.e., anchors) when calculating numeric estimates?

Numerous studies have demonstrated that estimates tend to assimilate toward irrelevant anchors (for a review, see Chapman & Johnson, 2002). For example, anchoring effects have been observed with general knowledge questions like the length of the Mississippi River and the height of Mount Everest (Jacowitz & Kahneman, 1995), criminal sentences (Englich, Mussweiler, & Strack, 2006), and performance ratings of university professors (Thorsteinson, Breier, Atwell, Hamilton & Privette, 2008). As is evident from the above examples, the typical anchoring study requires that participants recruit information from memory and/or make a quantitative estimate from primarily non-quantitative information. In the current studies, we investigated anchoring effects in a different context—one where participants needed to perform a calculation to generate their estimate. Even though situations like this are fairly common (e.g., estimating the total cost of multiple products, counting calories consumed in a day, calculating one’s approximate gas mileage), little is known about how anchors influence estimates made from calculations.

There is a classic study that is often cited as an example of anchoring in calculations, namely a study by Tversky and Kahneman (1974) in which participants gave lower estimates of the product of 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 than of 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1. However, whereas this finding serves as an illustration that anchoring might affect calculated answers, it is an idiosyncratic example in that the potential anchors (i.e., the first numbers in the series) are part of the expressions themselves. Also, the anchors are informative as to the answer to the expression—again, because the anchors are part of the problem. Therefore, the illustration does not necessarily speak to whether and why an irrelevant anchor that is external to the expression would influence people’s answers.

Imagine a person needs to solve “728 + 136 + 545 = ?” and has recently been exposed to an irrelevant number like 824. Would that anchor value, which happens to be smaller than the actual answer, have any biasing influence on the person’s solution to the problem? Two of the most prominent explanations for anchoring effects would not appear to predict an effect of the anchor. First, selective accessibility explanations for anchoring effects do not seem directly relevant to the possibility of finding anchoring with math problems (Mussweiler & Strack, 1999; Strack & Mussweiler, 1997; see also, Chapman & Johnson, 1999). These accounts assume that, when participants encounter an anchor (e.g., “Is the Mississippi River longer or shorter than 5000 miles?”), they first test whether the target is equal to the anchor value (e.g., “Is the Mississippi River 5000 miles long?”). Because people tend to engage in hypothesis-consistent testing, they will recruit information that is consistent with the target being equal to the anchor. Selective accessibility accounts assume that the activated information is semantically related to the anchor value (Mussweiler & Strack, 2000, 2001). When participants generate their final estimate, they use this biased set of accessible information to inform their estimate. Although selective accessibility can explain anchoring effects in many situations, because the account relies on a biased recruitment of information from memory, it does not seem to apply to situations where people are performing a calculation based on available information. Furthermore, because selective accessibility accounts assume anchors increase the accessibility of semantically related information, they have difficulty explaining anchoring effects with purely numeric information—as was used in the current studies.

A second account, anchoring and insufficient adjustment (Epley & Gilovich, 2001, 2004, 2005, 2006; Tversky & Kahneman, 1974), proposes a set of processes that do not seem tenable for explaining how anchors might influence solutions to math problems. The insufficient adjustment accounts suggest that participants use the anchor as a starting point and then adjust their estimate away from the anchor value. It is difficult to imagine why or how one would start an estimate at the anchor while also solving a math problem. Additionally, the anchors in the current studies were all externally provided, and an adjustment process is thought to occur only for self-generated anchors (Epley & Gilovich, 2001, 2004, 2005, 2006).

Although neither the selective accessibility nor insufficient adjustment accounts would appear to predict an effect of irrelevant anchors on answers to math problems, there are two additional accounts that are more amenable to such a prediction. Numeric and magnitude priming accounts posit that anchors prime numbers or magnitudes similar to the anchor value. For example, participants’ arbitrary ID numbers influenced their estimates of the number of physicians in the phone book (Wilson, Houston, Etling, & Brekke, 1996). Presumably, viewing the ID number increased the accessibility of similar numbers. When participants generated their estimates, these primed numbers were more likely to come to mind, thereby influencing their estimates (see also, Critcher & Gilovich, 2007; Wong & Kwong, 2000). The magnitude priming account is similar, but rather than priming numbers, it assumes that anchors prime magnitude concepts (e.g., “large”, “small”) and these concepts influence the estimates that people give (Oppenheimer, LeBoeuf, & Brewer, 2008). For example, in one study, drawing a long line caused participants to give longer estimates for the length of the Mississippi River as compared to participants who drew a short line.

The numeric and magnitude priming accounts underlie our prediction that irrelevant anchors will influence answers to math problems. Specifically, we propose that, unless the calculations required by the math problem are easy, people often generate an approximation of the answer. That is, whereas they might apply some mathematical rules, people also take shortcuts and estimate rather than strictly calculate. When people employ this type of strategy, anchors will exert a biasing influence through numeric or magnitude priming. That is, participants’ estimates will assimilate toward the values or magnitudes that are made accessible by the anchors. For example, if a person is exposed to a high anchor (e.g., 6,245), this value might increase the activation of numeric values near the anchor (Wilson et al., 1996) or related magnitudes (e.g., “large” or “big”; Oppenheimer et al., 2008). Then, while calculating the answer to a math problem (e.g., 234 + 798 + 912), a person’s estimate might assimilate toward the value or magnitude that was made accessible.

In both of our studies, participants answered math problems after comparisons with anchors. Because the participants had all the information they needed in order to make unbiased estimates, it was possible that participants would exclusively use calculation strategies and therefore arrive at correct, unbiased answers. However, we used time limits that forced participants to work quickly. We assumed that a 15 sec time limit would be restrictive enough to prevent most participants from using a pure and precise calculation strategy. The important question was whether the deviation from the actual solutions would be systematically biased in the direction of the anchor value. In addition to a condition with a 15 sec time limit, we also included a condition with a more severe time limit (4 sec in Study 1 and 5 sec in Study 2). This allowed us to test whether the effects of the anchor value would become stronger as time pressure increased. This pattern would be consistent with the idea that as the need to estimate (rather than formally calculate) increases, the potential for bias from external anchors also increases. An alternative data pattern is also plausible, however. Namely, time pressure might simply increase error—but not systematically in the direction of the anchor.

2  Study 1

2.1  Method

2.1.1  Participants and design

Seventy-five students from the University of Iowa enrolled in an introductory psychology course participated as partial fulfillment of their research requirement. This study was a 2 (Anchor: high/low) x 2 (Time limit: 4/15 sec) x 2 (Time limit order: 4 sec first/15 sec first) mixed design with anchor and time limit as within-subjects factors and time limit order as the between-subjects factor. Time limit order did not affect estimates in either study, so this factor will not be discussed further.

2.1.2  Math questions and anchors

All of the math questions that the participants saw were of the form “X1 + X2 + X3 = Y.” To create these questions, three numbers (i.e., X1, X2, and X3) were randomly generated for each trial such that the solution (i.e., Y) was between 1100 and 1900.1 The anchor values were also randomly determined for each trial; low anchors were between 700 and 900 and high anchors were between 2100 and 2300. For example, a participant might be asked if the answer to “728 + 136 + 545 = ?” is more or less than 824 in the low anchor condition or 2192 in the high anchor condition.

The anchoring questions were grouped into two blocks. In a counterbalanced order, participants answered one block of questions with a 4 second time limit and the other block with a 15 second time limit. In each block, participants saw three high and three low anchor questions in a random order. In total, participants answered three questions in each of the four conditions (high anchor, 4-sec time limit; high anchor, 15-sec time limit; low anchor, 4-sec time limit; low anchor, 15-sec time limit). In addition to these critical questions, participants also answered four filler questions. The filler problems were identical for all participants and had anchors that were near the actual answers to the math problems. We included filler items to reduce the likelihood that participants would learn that the anchors were either high or low.
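
To make the stimulus construction concrete, the following sketch shows one way the Study 1 trials could be generated, following the rejection-sampling procedure described in footnote 1 and the block structure described above. The code is ours and purely illustrative (the paper does not report the experiment software); function and variable names are hypothetical, and the four filler trials with near-answer anchors are omitted for brevity.

    import random

    def generate_problem(rng):
        # Draw three addends (111-999) until their sum lands in 1100-1900,
        # per the rejection-sampling procedure in footnote 1 (Study 1).
        while True:
            addends = [rng.randint(111, 999) for _ in range(3)]
            if 1100 <= sum(addends) <= 1900:
                return addends, sum(addends)

    def generate_trial(anchor_type, rng):
        # Pair a randomly generated problem with a low (700-900) or
        # high (2100-2300) anchor.
        addends, answer = generate_problem(rng)
        low, high = (700, 900) if anchor_type == "low" else (2100, 2300)
        return {"addends": addends, "answer": answer,
                "anchor_type": anchor_type, "anchor": rng.randint(low, high)}

    def build_block(time_limit, rng):
        # One block: three high- and three low-anchor trials in random order,
        # all presented with the same time limit (4 or 15 sec).
        trials = [generate_trial(a, rng) for a in ["low"] * 3 + ["high"] * 3]
        rng.shuffle(trials)
        for trial in trials:
            trial["time_limit"] = time_limit
        return trials

    rng = random.Random(1)
    # Block order was counterbalanced across participants; here the 4-sec
    # block happens to come first.
    session = build_block(4, rng) + build_block(15, rng)
    print(session[0])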

2.1.3  Procedure

The participants were instructed that they would be solving math equations on a computer and would have only a short amount of time to view each equation; therefore, they should work as quickly and accurately as possible. The participants first answered 10 practice math problems. For example, a participant might see the equation “435 + 587 + 298 = ?” with a text-entry field below to enter the answer to the equation. After 4 sec, the equation was erased from the screen, but the text-entry field remained. Once the participant entered his/her response, he/she proceeded to the next problem.

After answering the practice problems, the participants were told that they would be answering a few more math problems in two stages. Specifically, they were told that they would first see an equation and compare the answer of the equation to a “randomly generated number.” Second, they would provide the answer to the equation. The participants were then shown an example to ensure they understood their task. Next, they were told they would have a short amount of time to view each equation, so they should work as quickly and accurately as possible. Finally, they were told that, if they were unsure of the exact answer to the equation, they should provide their best estimate.

Each trial began with the presentation of the anchor value (e.g., “Is the answer to the following equation smaller or larger than 784?”). After a 3 sec delay, the equation was displayed (e.g., “564 + 298 + 712”). The participants indicated whether the answer to the equation was larger or smaller than the anchor and then typed in their estimate of the answer to the equation.

Depending on the time limit condition, the equation was displayed for either 4 or 15 sec. A countdown timer in the bottom right of the screen indicated the number of seconds remaining before the equation was erased. After the participants answered the questions in the first block, they were told the time limit was changing (from 4 to 15 sec, or vice versa) and then answered the questions in the second block.

2.2  Results

2.2.1  Preliminary analyses

Three participants were dropped from the analyses because their responses indicated they were not attempting to answer accurately. We also removed a small number of estimates (7/864 or < 1%) that most likely resulted from typos (e.g., estimates below 100 or above 10,000). Regarding the comparative judgments, participants correctly identified whether the answer to the equation was larger or smaller than the anchor on 94.3% of the trials. Accuracy on these judgments did not differ based on the anchor or time pressure conditions. Overall, 59 of the 72 participants (81.9%) answered at least 11 of the 12 comparative judgments correctly.2 Regarding accuracy of participants’ final answers to the math problems, only 0.2% and 11.1% of the final answers were exactly correct in the 4 sec and 15 sec conditions, respectively. Using a more lenient criterion for accuracy, we found that only 3.0% and 18.5% of the final answers were within 5 units of the correct answer in the two conditions, respectively. These findings are consistent with our assumption that a 15 sec time limit would typically be restrictive enough to prevent participants from using a pure and precise calculation strategy. They also suggest that 4 sec was even more restrictive in limiting such a strategy.
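
As an illustration of how these screening and accuracy criteria could be computed from trial-level data, consider the short sketch below. The data frame and column names are hypothetical; the paper does not describe the authors' analysis code.

    import pandas as pd

    # Hypothetical trial-level data: each row is one final answer.
    trials = pd.DataFrame({
        "estimate": [1490, 1402, 12000, 1388],   # 12000 is a likely typo
        "answer":   [1490, 1397,  1512, 1512],
    })

    # Drop probable typos (estimates below 100 or above 10,000).
    trials = trials[(trials["estimate"] >= 100) & (trials["estimate"] <= 10000)]

    # Strict criterion: exactly correct answers.
    exact = (trials["estimate"] == trials["answer"]).mean()
    # Lenient criterion: within 5 units of the correct answer.
    lenient = ((trials["estimate"] - trials["answer"]).abs() <= 5).mean()
    print(f"exactly correct: {exact:.1%}, within 5 units: {lenient:.1%}")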

2.2.2  Main analyses

The key question was whether participants’ final answers would show a significant bias in the direction of the anchors. A secondary question was whether this effect would be enhanced as time pressure increased (i.e., as the time limit decreased from 15 sec to 4 sec).

Because each math problem was randomly generated for each participant, the actual answers to the problems could, by chance, differ across the conditions. Therefore, for each of the 12 anchoring questions, we calculated the signed deviation of each participant’s estimate from the actual answer to the equation and then averaged the three deviation scores within a given condition. It should be noted that analyses conducted on participants’ raw estimates did not substantively differ from those conducted on their deviation scores. We conducted a 2 (anchor) x 2 (time limit) repeated-measures analysis of variance (ANOVA) on participants’ deviation scores.
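
A minimal sketch of this analysis pipeline is shown below, assuming long-format trial-level data with one row per critical trial. The simulated data, the column names, and the use of the pingouin package for the repeated-measures ANOVA are our illustrative assumptions, not the authors' actual analysis code.

    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)

    # Simulated stand-in for the Study 1 trial-level data (72 participants,
    # three critical trials per anchor x time-limit cell).
    rows = []
    for pid in range(72):
        for anchor in ["low", "high"]:
            for time_limit in [4, 15]:
                for _ in range(3):
                    answer = int(rng.integers(1100, 1901))
                    bias = -80 if anchor == "low" else 80   # toy anchoring bias
                    estimate = answer + bias + rng.normal(0, 120)
                    rows.append((pid, anchor, time_limit, estimate, answer))
    df = pd.DataFrame(rows, columns=["participant", "anchor", "time_limit",
                                     "estimate", "answer"])

    # Signed deviation of each estimate from the true sum, averaged over the
    # three trials within each participant x condition cell.
    df["deviation"] = df["estimate"] - df["answer"]
    cells = (df.groupby(["participant", "anchor", "time_limit"], as_index=False)
               ["deviation"].mean())

    # 2 (anchor) x 2 (time limit) repeated-measures ANOVA on the cell means.
    print(pg.rm_anova(data=cells, dv="deviation",
                      within=["anchor", "time_limit"], subject="participant",
                      detailed=True))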

Most importantly, there was the predicted main effect of anchor, F(1, 71) = 17.16, p < .001, partial η² = .20. Participants gave higher estimates after exposure to a high anchor than a low anchor. There was also a main effect of time limit, F(1, 71) = 9.28, p = .003, partial η² = .12; participants gave higher estimates in the 15 sec condition. The time limit main effect was not replicated in Study 2, so we will not discuss it in detail. The two main effects were qualified by a marginally significant interaction, F(1, 71) = 3.42, p = .07, partial η² = .05. For ease of interpretation, Figure 1 plots participants’ average estimate in each condition, rather than their deviation scores. As can be seen in Figure 1, participants’ estimates were influenced by the anchor values to a greater degree in the 4 sec condition as compared to the 15 sec condition. Although the anchoring effect was larger in the 4 sec condition, simple-effects tests revealed significant anchoring effects in both the 4 sec condition, F(1, 71) = 13.79, p < .001, partial η² = .16, and the 15 sec condition, F(1, 71) = 6.55, p = .01, partial η² = .08.


Figure 1: Participants’ average estimates in Study 1 as a function of anchor and time limit conditions. Error bars represent ±1 SE.

2.3  Discussion

The results of Study 1 show that participants’ answers to math problems were influenced by anchor values. That is, their estimates assimilated towards the anchor value. As discussed earlier, it would seem that the anchoring effects observed in this particular paradigm must be driven by numeric or magnitude priming. Presumably, exposure to the anchor value increased the accessibility of numbers or magnitudes similar to the anchor, and this increase in activation influenced the participants’ estimates.

Another interesting finding is that the anchoring effects were larger when participants were under greater time pressure. There are two closely related characterizations of this interaction. One possibility is that the anchors influenced estimates similarly in both time limit conditions, save for the times that the participants were able to literally calculate their final answer. A second possibility is that, even among the set of estimates that were not achieved through full calculation, time pressure led to an enhanced influence of anchors. Both of these characterizations are compatible with our overall arguments. The results from Study 1 are somewhat ambiguous as to which is more valid. If we remove all the responses that were within 5 units of the correct answer (which removes 3.0% and 18.5% of the responses in the 4 sec and 15 sec conditions), the results for the anchor × time limit interaction change only slightly, F(1, 69) = 2.32, p = .13, partial η² = .03. This suggests that, even among the set of estimates that were not achieved through full calculation, time pressure matters. The results of Study 2 provide a clearer conclusion on this matter.

3  Study 2

While the results of Study 1 are consistent with the numeric or magnitude priming accounts, there is an alternative explanation. It is possible that, even though the anchors were described as “randomly generated”, the participants viewed the anchor value as informative to their estimate or as a hint to the actual answers (Schwarz, 1994). In Study 2, we used extreme anchors in order to reduce the likelihood that the participants would view them as informative or possible answers to the equations. If the anchoring effects observed in Study 1 were due to participants viewing the anchors as informative, using extreme anchors should eliminate or reduce the anchoring effects. However, because the extreme anchors can still act as primes, the numeric and magnitude priming accounts predict that participants will show an anchoring effect even with extreme anchor values.

It is also noteworthy that the math problems in Study 2 were more complex than those in Study 1, which further reduced the ability of participants to employ a pure calculation strategy, even for the 15 sec condition.

3.1  Method

3.1.1  Participants and design

Thirty-two students from the University of Iowa enrolled in an introductory psychology course participated as partial fulfillment of their research requirement. This study was a 2 (Anchor: high/low) x 2 (Time limit: 5/15 sec) x 2 (Time limit order: 5 sec first/15 sec first) mixed design with anchor and time limit as within-subjects factors and time limit order as the between-subjects factor.

3.1.2  Math questions and anchors

As in Study 1, the math questions were of the form “X1 + X2 + X3 = Y.” In this study, the sum of the numbers was randomly determined to be between 4,000 and 8,000. The low anchors were between 500 and 999, while the high anchors were between 11,001 and 11,500. For example, a participant might be asked if the answer to “1964 + 1297 + 2636 = ?” is more or less than 783 in the low anchor condition or 11,243 in the high anchor condition.

3.1.3  Procedure

The procedures were the same as in Study 1 except: 1) more extreme anchors were used, 2) there were no practice questions, and 3) the time limit in the low-limit condition was 5 sec rather than 4 sec.

3.2  Results and discussion

3.2.1  Preliminary analyses

We first removed a small number of estimates (7/384 or < 2%) that most likely resulted from typos (e.g., estimates below 1,000 or above 15,000). On average, participants answered 95.3% of the comparative judgments correctly. Accuracy did not differ based on anchor or time pressure conditions. Overall, 28 of the 32 participants (87.5%) answered at least 11 of the 12 comparative judgments correctly. Regarding accuracy of participants’ final answers to the math problems, none of the answers was exactly correct in the 5 sec condition and only 2.1% were correct in the 15 sec condition. Even when using a more lenient criterion for accuracy, none of the final answers in the 5 sec condition and only 3.7% in the 15 sec condition were within 5 units of the correct answer. As anticipated, it was quite difficult for participants to precisely calculate the answers to the math problems in both time pressure conditions.

3.2.2  Main analyses

As in Study 1, we created deviation scores from the participants’ estimates. We then conducted a 2 (anchor) x 2 (time limit) repeated-measures ANOVA on the participants’ deviation scores. The ANOVA revealed the predicted main effect of anchor, F(1, 31) = 13.71, p = .001, partial η² = .31. Again, participants gave higher estimates after being exposed to a high anchor rather than a low anchor. Unlike Study 1, there was no main effect of time limit, F(1, 31) = 2.43, p = .13, partial η² = .07. There was, however, a significant anchor × time limit interaction, F(1, 31) = 6.06, p = .02, partial η² = .16. As can be seen in Figure 2, participants’ estimates were influenced by the anchor values to a greater degree in the 5 sec condition than in the 15 sec condition. Simple-effects tests revealed a significant anchoring effect in the 5 sec condition, F(1, 31) = 12.40, p = .001, partial η² = .29, and a marginally significant anchoring effect in the 15 sec condition, F(1, 31) = 3.41, p = .07, partial η² = .10.


Figure 2: Participants’ average estimates in Study 2 as a function of anchor and time limit conditions. Error bars represent ±1 SE.

In summary, participants in Study 2 exhibited robust anchoring effects even though the anchors were quite extreme. It seems unlikely that participants interpreted the extreme anchors as useful information, yet the influence of anchors in Study 2 was largely the same as in Study 1. The anchoring effects were again moderated by time pressure, with the more restrictive time limit leading to larger anchoring effects. Even when we removed all the responses that were within 5 units of the correct answer, the results for the anchor × time limit interaction remained significant, F(1, 31) = 5.64, p = .02, partial η² = .15. This reveals that, even among the set of estimates that were not achieved through full calculation, time pressure increases anchoring effects.

4  General discussion

Two key accounts of anchoring—selective accessibility and insufficient adjustment—would seem to suggest that anchors will not influence people’s estimates that are based on calculations. Numeric and magnitude priming accounts, on the other hand, do predict an effect. Consistent with this prediction, we found that irrelevant anchors influenced participants’ answers to math problems. Another important finding was that limiting participants’ ability to use the provided information increased the magnitude of the anchoring effects. Finally, this effect persisted even when the anchors were extreme relative to the answers to the math problems.

4.1  Why did time pressure increase anchoring?

A number of studies have found that anchoring effects from externally provided anchors are immune to cognitive load manipulations (e.g., Epley & Gilovich, 2006; Mussweiler & Strack, 1999). Therefore, it might seem odd that time pressure increased anchoring effects in our studies. We have assumed, however, that people often perform minimal calculations and then estimate their final answer. Putting people under extreme time pressure will undoubtedly reduce their ability to calculate an accurate response and increase the tendency to estimate the answer. In turn, this will increase the biasing influence of anchors.

Our results are also consistent with a recent study demonstrating anchoring effects caused by numeric or magnitude priming (Blankenship, Wegener, Petty, Detweiler-Bedell, & Macy, 2008, Experiment 4). Specifically, when answering general knowledge questions, participants exhibited a significant anchoring effect when cognitive load was high, but no anchoring effect when cognitive load was low. According to Blankenship et al., this occurred because, when under high levels of load, participants were not able to recruit a significant amount of relevant information from memory to generate their response and were, therefore, influenced by the numbers or magnitudes that were made accessible by the anchors.

4.2  What was primed, numbers or magnitudes?

Our studies were not designed to determine whether numeric or magnitude priming was more responsible for the observed effects. However, we can speculate about this issue. Numerous studies have demonstrated that numeric primes can influence numeric processing (e.g., Brysbaert, 1995; den Heyer & Briand, 1986). For example, a number such as 66 is named faster when it is preceded by a close number (65) than a far number (52). Also, comparisons between a target and a particular standard are faster when a prime is similar to the target rather than dissimilar (Koechlin, Naccache, Block, & Dehaene, 1999). Given these findings, it seems possible that numeric anchors can prime numbers related to the anchor value.

However, there are two reasons why magnitude priming may better serve as the explanation for the effects observed in the current studies. First, most researchers agree that, with respect to the specific process of numeric priming, numbers involving three or more digits are reduced into their component values rather than processed holistically (for a discussion, see Ratinckx, Brysbaert, & Fias, 2005). Therefore, a number like 1,313 might do more to prime the number 2 than the number 1,657. Second, numeric priming effects do not generally extend very far (Reynvoet & Brysbaert, 1999; Reynvoet, Brysbaert, & Fias, 2002). For example, when using masked primes, priming with 1 might facilitate the recognition of 2, but not 9. Results from Reynvoet and Brysbaert (1999) suggest that numbers do not prime values more than 3 away from the prime. In the current studies, the anchors were always larger than two-digit numbers and the estimates that the participants gave were generally quite disparate from the anchor values. Therefore, it seems unlikely that strict numeric priming, at least as it is defined within the cognitive literature on numeric priming, can account for the observed anchoring effects. A version of numeric priming could still be considered viable if a more liberal definition of numeric priming were used, in which categories of numbers might be primed (e.g., “upper hundreds,” “lower thousands”).

Also viable is a magnitude priming explanation, first proposed by Oppenheimer et al. (2008). In one of their studies, drawing a long line caused participants to give higher estimates of the length of the Mississippi River. In another study, drawing long lines increased the likelihood that participants completed the word fragment _all to form tall. Presumably, drawing a long line primed related magnitudes, and these magnitudes influenced participants’ estimates and performance on the word completion task. Similar to the effect of drawing long or short lines, it seems quite possible that the anchors increased the activation of corresponding magnitude representations. Only future research can determine whether magnitude priming or numeric priming of categories of numbers offer a better explanation for the anchoring effects we observed.

It is important to note that, although magnitude priming is our preferred explanation for these results, the anchoring effects may be driven by other mechanisms. While our results appear to be inconsistent with the other accounts of anchoring effects, the studies did not explicitly rule out the competing explanations. Furthermore, we certainly acknowledge that the selective accessibility and anchoring and insufficient adjustment accounts can explain anchoring effects observed in other situations.

4.3  Conclusion

In the current studies, participants explicitly considered the anchor values before providing an estimate. Admittedly, this explicit consideration of anchor values is rare in everyday environments. However, Critcher and Gilovich (2007) demonstrated that “incidentally” presented anchors, such as the number on the jersey of a football player, can influence estimates, such as the likelihood that the player would register a sack in an upcoming game. Their findings, combined with our results, suggest that incidentally presented anchors—like explicitly considered anchors—might influence calculations. Furthermore, such an effect might be most robust when people’s ability (or presumably motivation) to process information is limited. In short, math done in everyday environments, where time is short and motivation is less than perfect, could be routinely biased by anchors in ways that typically go unnoticed.

References

Blankenship, K. L., Wegener, D. T., Petty, R. E., Detweiler-Bedell, B., & Macy, C. L. (2008). Elaboration and consequences of anchored estimates: An attitudinal perspective on numerical anchoring. Journal of Experimental Social Psychology, 44, 1465–1476.

Brysbaert, M. (1995). Arabic number reading: On the nature of the numerical scale and the origin of phonological recoding. Journal of Experimental Psychology: General, 124, 434–452.

Chapman, G. B., & Johnson, E. J. (1999). Anchoring, activation, and the construction of values. Organizational Behavior and Human Decision Processes, 79, 115–153.

Chapman, G. B., & Johnson, E. J. (2002). Incorporating the irrelevant: Anchors in judgments of belief and value. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 120–138). New York: Cambridge University Press.

Critcher, C. R., & Gilovich, T. (2007). Incidental environmental anchors. Journal of Behavioral Decision Making, 21, 241–251.

den Heyer, K., & Briand, K. (1986). Priming single digit numbers: Automatic spreading activation dissipates as a function of semantic distance. American Journal of Psychology, 99, 315–339.

Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making. Personality and Social Psychology Bulletin, 32, 188–200.

Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396.

Epley, N., & Gilovich, T. (2004). Are adjustments insufficient? Personality and Social Psychology Bulletin, 30, 447–460.

Epley, N., & Gilovich, T. (2005). When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally-provided anchors. Journal of Behavioral Decision Making, 18, 199–212.

Epley, N. & Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17, 311–318.

Jacowitz, K. E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin, 21, 1161–1167.

Koechlin, E., Naccache, L., Block, E., & Dehaene, S. (1999). Primed numbers: Exploring the modularity of numerical representations with masked and unmasked semantic priming. Journal of Experimental Psychology: Human Perception and Performance, 25, 1882–1905.

Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35, 136–164.

Mussweiler, T., & Strack, F. (2000). The use of category and exemplar knowledge in the solution of anchoring tasks. Journal of Personality and Social Psychology, 78, 1038–1052.

Mussweiler, T., & Strack, F. (2001). The semantics of anchoring. Organizational Behavior and Human Decision Processes, 86, 234–255.

Oppenheimer, D. M., LeBoeuf, R. A., & Brewer, N. T. (2008). Anchors aweigh: A demonstration of cross-modality anchoring and magnitude priming. Cognition, 106, 13–26.

Ratinckx, E., Brysbaert, M., & Fias, W. (2005). Naming two-digit Arabic numerals: Evidence from masked priming studies. Journal of Experimental Psychology: Human Perception and Performance, 31, 1150–1163.

Reynvoet, B., & Brysbaert, M. (1999). Single-digit and two-digit Arabic numerals address the same semantic number line. Cognition, 72, 191–201.

Reynvoet, B., Brysbaert, M., & Fias, W. (2002). Semantic priming in number naming. Quarterly Journal of Experimental Psychology, Section A: Human Experimental Psychology, 55, 1127–1139.

Schwarz, N. (1994). Judgments in a social context: Biases, shortcomings, and the logic of conversation. In M. Zanna (Ed.). Advances in experimental social psychology (Vol. 26, pp. 125–162). San Diego: Academic Press.

Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437–446.

Thorsteinson, T. J., Breier, J., Atwell, A., Hamilton, C., & Privette, M. (2008). Anchoring effects on performance judgments. Organizational Behavior and Human Decision Processes, 107, 29–40.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 125, 387–402.

Wong, K. F. E., & Kwong, J. Y. Y. (2000). Is 7300 m equal to 7.3 km? Same semantics but different anchoring effects. Organizational Behavior and Human Decision Processes, 82, 314–333.


* Corresponding author: Department of Psychology, University of Iowa, Iowa City, Iowa, 52242. Email: andrew-r-smith@uiowa.edu.
# Department of Psychology, University of Iowa.
Data from the studies reported here are available through the journal’s table-of-contents page.
1. In Study 1, the computer created the math problems by first generating three numbers between 111 and 999 and then checking whether the sum of these numbers was within the desired range of 1100 to 1900. If the sum was not within this range, three new random numbers were generated and checked. This process was repeated until the answer to the equation was within the desired range. In Study 2, the same procedure was used, but the randomly generated numbers were between 1111 and 5500, with the requirement that the sum was between 4000 and 8000.
2. In both studies, analyses restricted to those participants who answered at least 11 of the 12 comparative judgments correctly did not differ substantially from analyses including all participants. Similarly, analyses restricted to only those estimates given after correctly answering the comparative judgment also revealed significant anchoring effects.
