Judgment and Decision Making, vol. 6, no. 5, July 2011, pp. 413-422

The limited value of precise tests of the recognition heuristic

Thorsten Pachur*

The recognition heuristic models the adaptive use and dominant role of recognition knowledge in judgment under uncertainty. Of the several predictions that the heuristic makes, empirical tests have predominantly focused on the proposed noncompensatory processing of recognition. Some authors have emphasized that the heuristic needs to be scrutinized based on precise tests of the exclusive use of recognition. Although precise tests have clear merits, I critically evaluate the value of such tests as they are currently employed. First, I argue that using precise measures of the exclusive use of recognition has to go beyond showing that the recognition heuristic—like every model—cannot capture reality completely. Second, I illustrate how precise tests based on response times can lead to unsubstantiated conclusions if the fact that the recognition heuristic does not model the recognition judgment itself is ignored. Finally, I highlight two key but so far neglected aspects of the recognition heuristic: (a) the connection between recognition memory and the recognition heuristic; and (b) the mechanisms underlying the adaptive use of recognition.


Keywords: recognition heuristic, memory, noncompensatory, response times, ecological rationality.

“When I complain of my memory, they seem not to believe I am in earnest, and presently reprove me as though I accused myself for a fool; not discerning the difference between memory and understanding. [E]xperience rather daily showing us […] that a strong memory is commonly coupled with an infirm judgment.” (de Montaigne, 1595/2003, p. 22)

1  Introduction

In his Essays, the French philosopher Michel de Montaigne suggested that a good memory is not necessarily coupled with good decision making. In fact, he seems to imply that decisions can sometimes even benefit from deficits in memory. How could this be possible? One answer is that because structures in the mind often reflect meaningful regularities in the world (e.g., Anderson & Schooler, 1991; Pachur, Schooler, & Stevens, in press), blanks in memory can be exploited for making inferences about the world.

The notion that judgments feed on dynamics in memory has been taken up in several models of decision making. For instance, Tversky and Kahneman (1973) proposed that the ease with which instances or occurrences can be brought to mind “is an ecologically valid clue” (p. 209) about the world and that an availability heuristic based on this ease might operate when people judge probability or frequency. More recently, Goldstein and Gigerenzer (2002) described the recognition heuristic as a model of how people recruit recognition memory when making inferences more generally. In contrast to the availability heuristic, the recognition heuristic is a clearly specified computational model with precise search, stopping, and decision rules. Moreover, the recognition heuristic was proposed as an adaptive mental tool with specific boundary conditions (Gigerenzer, Todd, & the ABC Research Group, 1999).1

The recognition heuristic makes several testable predictions about recognition and its use in decision making. First, as the recognition heuristic is assumed to be ecologically rational (i.e., exploiting a regularity in the environment), recognition should be frequently correlated with quantities in the world. Second, people’s use of the recognition heuristic should be sensitive to the structure of the environment. Third, the recognition heuristic predicts that recognition is processed in a noncompensatory fashion—that is, recognition should supersede further cue knowledge. Finally, the heuristic predicts, under certain conditions, a counterintuitive less-is-more effect, where recognizing fewer objects can lead to more accurate inferences than recognizing more objects. (For an overview of tests of these predictions, see Pachur, Todd, Gigerenzer, Schooler, & Goldstein, 2011.)
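The less-is-more prediction follows from the accuracy formula in Goldstein and Gigerenzer (2002): with N objects of which n are recognized, a random pair falls into one of three cases (both unrecognized, one recognized, both recognized), each with its own expected accuracy. A minimal sketch of that formula (the parameter values at the end are illustrative, not fitted to any data set):

```python
# Expected inferential accuracy when n of N objects are recognized,
# given recognition validity alpha and knowledge validity beta
# (Goldstein & Gigerenzer, 2002). For a randomly drawn pair:
#   both unrecognized -> guessing (accuracy 0.5)
#   one recognized    -> recognition heuristic (accuracy alpha)
#   both recognized   -> further knowledge (accuracy beta)

def expected_accuracy(n, N, alpha, beta):
    pairs = N * (N - 1) / 2
    p_guess = (N - n) * (N - n - 1) / 2 / pairs
    p_rec = n * (N - n) / pairs
    p_know = n * (n - 1) / 2 / pairs
    return 0.5 * p_guess + alpha * p_rec + beta * p_know

# When alpha > beta, intermediate recognition can beat full recognition:
N, alpha, beta = 100, 0.8, 0.6
curve = [expected_accuracy(n, N, alpha, beta) for n in range(N + 1)]
```

With these illustrative values, accuracy peaks at an intermediate n and drops back to beta when everything is recognized — the less-is-more effect.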

The precise definition of the recognition heuristic and its assumed role as an adaptive mental tool made it an attractive study object. Maybe not surprisingly, not all empirical investigations have found evidence supporting the heuristic. Of the several predictions that the heuristic makes, it seems fair to say that the assumed noncompensatory processing of recognition has received the greatest attention so far—and has generated the strongest objection (Bröder & Eichler, 2006; Glöckner & Bröder, 2011; Hilbig & Pohl, 2008, 2009; Hilbig, Erdfelder, & Pohl, 2010; Hilbig, Pohl, & Bröder, 2009; Hochman, Ayal, & Glöckner, 2010; Newell & Fernandez, 2006; Newell & Shanks, 2004; Oeusoonthornwattana & Shanks, 2010; Oppenheimer, 2003; Pachur, Bröder, & Marewski, 2008; Pohl, 2006; Richter & Späth, 2006).

Some authors have emphasized the need for precise tests of the recognition heuristic, arguing that “precise models deserve precise measures”; accordingly, they have (a) developed precise measures of the exclusive use of recognition and (b) conducted precise, response-time-based tests of information processing in recognition-based inference (Glöckner & Bröder, 2011; Hilbig, 2010a, 2010b; Hilbig, Erdfelder, et al., 2010; Hilbig & Pohl, 2008, 2009; Hilbig & Richter, 2011; Hilbig, Scholl, & Pohl, 2010). In the following, I discuss the value of such precise tests as they are currently used and argue that they have done little to advance our understanding of recognition-based inference. In addition, I highlight two key issues underlying the use of recognition in decision making that seem to have been neglected as a result of the strong focus on testing the noncompensatory processing of recognition. First, we need to better understand the relationship between recognition as studied in the memory literature and the recognition memory tapped by the recognition heuristic. Second, I summarize proposals of how people might adaptively adjust their reliance on recognition across different situations. Importantly, I do not argue that the developments of precise measures or demonstrations of the recognition heuristic’s failure to predict data should be ignored. Rather, I call for a more constructive way to use these findings for refining models of memory-based decision making.

2  Why precise tests of the recognition heuristic are not always useful

2.1  Precise measures of the exclusive use of recognition

The key factor enabling the recognition heuristic’s ecological rationality is that recognizing an object is often correlated with other properties of the object and can thus be used to infer these properties (Goldstein & Gigerenzer, 2002). Moreover, recognition often correlates with other cues (Marewski & Schooler, 2011). To illustrate, a recognized city is often more populous than an unrecognized one and it is also more likely to have a university or an international airport (both of which also predict city size). This collinearity between cues is a common situation in the real world and it is also key to Brunswik’s (1952) notion of vicarious functioning. Moreover, Davis-Stober, Dana, and Budescu (2010) have shown that under conditions of collinearity, restricting search to only one cue (as proposed by the recognition heuristic) can actually represent the optimal strategy to make inferences.

However, the fact that recognition is often correlated with other cues also makes it difficult to rigorously test the recognition heuristic. Specifically, Hilbig and colleagues pointed out that high adherence rates—that is, that people often infer a recognized object to have a higher criterion value than an unrecognized one—do not necessarily mean that people use the recognition heuristic (Hilbig & Pohl, 2008; Hilbig, Erdfelder, et al., 2010). As recognition and other cues often hint at the same object, people might have considered these cues as well (inconsistent with the heuristic’s predicted noncompensatory processing of recognition). To address this problem, measures were developed that reflect the exclusive reliance on recognition more precisely than the adherence rate. For instance, Hilbig and Pohl’s (2008) discrimination index (DI) expresses the degree to which the probability that the decision maker chooses a recognized object differs between cases where recognition leads to a correct (C) and cases where recognition leads to a false (F) response (for a similar approach, see Pachur & Hertwig, 2006; Pachur, Mata, & Schooler, 2009). The index is defined as DI = p(chooseR|C) – p(chooseR|F). As the recognition heuristic predicts that the decision maker ignores further cue knowledge when making inferences within a particular environment, DI should be zero. In various investigations, however, Hilbig and colleagues showed that for most participants DI is larger than zero—even when adherence rates are rather high.
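The DI computation can be sketched directly from its definition; the trial data below are hypothetical and serve only to show how a high adherence rate (here 14/20) can coexist with a nonzero DI:

```python
# Discrimination index (DI; Hilbig & Pohl, 2008) from trial-level data.
# Each trial records whether the participant chose the recognized object
# and whether choosing it would have been correct. Under strict use of
# the recognition heuristic, DI should equal zero.

def discrimination_index(trials):
    """trials: list of (chose_recognized: bool, recognition_correct: bool)."""
    correct = [chose for chose, ok in trials if ok]       # C cases
    false = [chose for chose, ok in trials if not ok]     # F cases
    p_correct = sum(correct) / len(correct)               # p(chooseR|C)
    p_false = sum(false) / len(false)                     # p(chooseR|F)
    return p_correct - p_false

# Hypothetical participant: follows recognition on 9/10 valid cases
# but only 5/10 invalid cases.
trials = ([(True, True)] * 9 + [(False, True)]
          + [(True, False)] * 5 + [(False, False)] * 5)
di = discrimination_index(trials)  # 0.9 - 0.5 = 0.4
```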

In a further development, Hilbig, Erdfelder, et al. (2010) proposed a multinomial measurement model (r-model) that allows estimating the probability with which the decision maker applies the recognition heuristic (i.e., processing recognition in a noncompensatory fashion) as well as the probability that further cues are inspected.2 The model also makes it possible to disentangle systematic and unsystematic factors (i.e., use of further cues vs. guessing) underlying nonreliance on the recognition heuristic. In applications of the r-model, Hilbig, Erdfelder, et al. showed that the probability that participants strictly follow recognition is often considerably lower than adherence rates alone would suggest. It was concluded that, inconsistent with the prediction of the recognition heuristic, “information integration beyond recognition plays a vital role” (p. 123).
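The logic of such a measurement model can be illustrated for pairs in which exactly one object is recognized. The sketch below is a deliberate simplification: it lumps knowledge use and guessing into a single validity parameter b, whereas the published r-model distinguishes them and constrains parameters using all three pair types; parameter names follow common usage but should be treated as assumptions here.

```python
# Simplified processing-tree probabilities for pairs with exactly one
# recognized object, in the spirit of the r-model (Hilbig, Erdfelder,
# et al., 2010): r = probability of applying the recognition heuristic,
# a = recognition validity, b = validity of all other (non-heuristic)
# responding. Illustrative sketch, not the published equations.

def r_model_probs(r, a, b):
    return {
        ("recognized", "correct"): a * r + a * (1 - r) * b,
        ("recognized", "incorrect"): (1 - a) * r + (1 - a) * (1 - r) * (1 - b),
        ("unrecognized", "correct"): (1 - a) * (1 - r) * b,
        ("unrecognized", "incorrect"): a * (1 - r) * (1 - b),
    }
```

The point of the model is visible in the equations: only when r < 1 can the unrecognized object ever be chosen, so the observed frequency of such choices constrains the estimate of r independently of the overall adherence rate.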

Clearly, these results demonstrate that people do not always strictly adhere to the recognition heuristic and that this is not merely due to unsystematic factors (i.e., guessing or inattention). Rather, violations of the heuristic’s predictions are often systematic, indicating that at least some people do not always ignore useful information beyond recognition. This may suggest that the noncompensatory recognition heuristic is a less adequate model than a compensatory strategy, which integrates several cues; and some authors have concluded that “any theory of comparative judgment must allow for use of further knowledge or information in recognition cases” (Hilbig, Erdfelder, et al., 2010, p. 132). However, a comparison of the recognition heuristic with various compensatory models showed that, although the recognition heuristic does not predict the data perfectly, it still provides the best account currently available (Marewski, Gaissmaier, Schooler, Goldstein, & Gigerenzer, 2010).

How then should we evaluate the violations of the recognition heuristic’s predictions as revealed by precise measures of the exclusive use of recognition (Hilbig & Pohl, 2008; Hilbig & Richter, 2011)? In my view, Marewski et al.’s (2010) results underline the limited value of using highly precise measures as applied in tests by Hilbig and Pohl (2009) and Hilbig, Erdfelder, et al. (2010). In fact, one way to interpret Hilbig, Erdfelder, et al.’s critical results for the recognition heuristic is that they remind us of the fact that the recognition heuristic merely models and therefore simplifies reality. But such an insight is not very useful if it remains unclear how exactly the recognition heuristic fails to capture the decision making process—and how to model the cognitive process instead. (In the next section, I discuss candidate mechanisms that might underlie people’s decision to suspend the recognition heuristic.) Without doubt, high precision in measurement is a useful goal to advance understanding of a phenomenon. The proposed precise measures of the use of the recognition heuristic (i.e., DI and the r-model, as well as Pachur & Hertwig’s, 2006, d’) therefore represent important progress over simple adherence rates and clearly should be used when investigating, for instance, adaptive changes or individual differences in the use of the heuristic (e.g., Pachur & Hertwig, 2006; Pachur et al., 2009). However, the development of more precise measures should go hand in hand with the development of more precise and accurate models and should not stop with demonstrations that a model somehow fails to predict some data.

Note that this issue is not restricted to the recognition heuristic. At least for models in the behavioral sciences, given precise enough measures violations can probably be found for every model ever proposed. For instance, prospect theory (Tversky & Kahneman, 1992), one of the most prominent models of risky choice, has clearly been rejected by some data (e.g., Birnbaum & Chavez, 1997; Brandstätter, Gigerenzer, & Hertwig, 2006; for an overview, see Birnbaum, 2008). Nevertheless, prospect theory still proves useful for investigating and quantifying risky choice (e.g., Pachur, Hanoch, & Gummerum, 2010) and continues to stimulate new challenges (Brandstätter et al., 2006). Similarly, in classification research, I am not aware of a model that does not fail to account for some data given sufficiently precise measures (for an overview, see Rouder & Ratcliff, 2004). Nevertheless, exemplar models, prototype models, and rule-based models (as well as combinations thereof) still offer useful frameworks for understanding how people structure objects in the world.

To summarize: developing precisely formulated cognitive models is an important goal for understanding behavior. Accordingly, a precise computational model like the recognition heuristic is easier to test than a vaguely described model like the availability heuristic. Nevertheless, higher precision in modeling also exacts a price: a precise model is easier to falsify than a vague one, and falsification becomes more likely the more precise the measures used. Therefore, to preserve the purpose of modeling, refinement in measurement should be accompanied by advances in model development. Refuting a model does not automatically confirm alternative but unspecified and untested models. Importantly, once an alternative model has been proposed, its descriptive superiority has to be demonstrated in a comparative test against the “null” model (see Brighton & Gigerenzer, 2011; Gigerenzer & Goldstein, 2011; Marewski et al., 2010). Although—as I have argued—precise measures can be of only limited value in isolated tests of a model, precise measures may be more useful in the context of such comparative tests.

Finally, let us not forget that descriptive adequacy, though a central dimension for model evaluation, is not the only one. For instance, Shiffrin, Lee, Kim, and Wagenmakers (2008) highlighted that, in addition to achieving a “basic level of descriptive adequacy” (p. 1249), a good model should provide insight, facilitate generalization, direct new empirical explorations, and foster theoretical progress. Demonstrations that the recognition heuristic cannot capture reality perfectly scarcely impair the achievements of the heuristic on these dimensions (e.g., predicting the less-is-more effect, modeling ecological rationality), though theory development should not stop there.

2.2  Response time tests of the recognition heuristic

The recognition heuristic models inferences from memory, that is, when cue values have to be retrieved from memory. Although search processes in memory are not amenable to direct observation, it has been proposed that they are nevertheless reflected in response time patterns (e.g., Bergert & Nosofsky, 2007; Pachur & Hertwig, 2006; Sternberg, 1966). Accordingly, one could argue that precise tests of the recognition heuristic should test the implications of the assumed limited information search for response times. However, as criticized by some (e.g., Dougherty, Franco-Watkins, & Thomas, 2008), when proposing the recognition heuristic, Goldstein and Gigerenzer (2002) did not provide a model of the recognition process and its temporal dynamics. As I illustrate next, this omission not only misses an opportunity for theory integration (Dougherty et al., 2008; Katsikopoulos, 2010; Pachur, 2010; Pleskac, 2007; Schooler & Hertwig, 2005); neglecting the dynamics of the recognition process can also limit the value of precise response time tests of the recognition heuristic.

Based on Goldstein and Gigerenzer’s (2002) description of the recognition heuristic, Hilbig and Pohl (2009; see also Glöckner & Bröder, 2011) derived and tested several response time predictions of the recognition heuristic. For instance, response times should be faster when a recognized object is compared to an unrecognized object than when two recognized objects are compared. Further, when a recognized object is compared to an unrecognized object, the response time should be unaffected by (a) the amount of cue knowledge available for the recognized object, and (b) whether recognition leads to a correct or an incorrect decision. The predictions are based on the premise that response times in recognition-based inference provide a pure measure of the amount of processed cue information. Contradicting these derived predictions, in empirical tests Hilbig and Pohl did not find that people’s response times were consistently faster when only one rather than both objects were recognized. Moreover, response times were faster for recognized objects for which additional knowledge was available compared to recognized objects for which no additional knowledge was available. From these results, the authors concluded that “support was obtained for the integration of information and the impact of differences in evidence between objects. Decision times … supported the notion that the (speed of the) decision process is determined by the degree to which one object is superior and thus by the degree of conflict rather than by recognition alone” (p. 1303). They argued that the observed patterns are more in line with compensatory, “evidence accumulation” models.3

But does it make sense to derive and test response-time predictions from a model that does not account for the recognition process? It is well established that the temporal dynamics of the recognition process itself are sensitive to various factors, such as word frequency and word length (e.g., O’Regan & Jacobs, 1992). Moreover, the amount of time required for a recognition judgment—i.e., fluency—might depend strongly on the decision maker’s certainty in the recognition judgment. Erdfelder, Küpper-Tetzel, and Mattern (2011) showed that a model that integrates the dynamics of the recognition process with the recognition heuristic can account for response time patterns that Hilbig and Pohl (2009) interpreted as evidence against the recognition heuristic. To model the recognition process, Erdfelder et al. used a two-high threshold model (Snodgrass & Corwin, 1988; see also Bröder & Schütz, 2009). According to the model, fluency is mainly a function of the “memory state” of the decision maker—that is, how certain she is that the object was encountered before. Fluency is highest under certainty and lowest under uncertainty, where the recognition judgments concerning the recognized and the unrecognized objects are based on guessing.

How could Erdfelder et al.’s model (2011)—integrating the recognition heuristic with a two-high threshold memory model—account for the finding that the time people take to choose a recognized object varies as a function of whether they have additional knowledge or not—even if this additional knowledge is ignored? The main reason is that, in the real world, people’s memory state of an object—and by implication the fluency of the object’s name—is strongly correlated with the availability of further knowledge about the object (Marewski & Schooler, 2011). Moreover, fluency is often correlated with the criterion (Hertwig, Herzog, Schooler, & Reimer, 2008). As a result, a person is more likely to recognize those objects swiftly (a) about which she can retrieve further knowledge and (b) that score high on the criterion. People may thus decide faster because they recognize the object faster—and not because of less conflict during the inference process (as argued by Hilbig & Pohl, 2009). In other words, the observation that recognition-based responses are faster when they are correct or when additional knowledge about the recognized object is available does not necessarily mean that recognition was used in a compensatory fashion. Finally, Erdfelder et al.’s model can also account for the finding that response times in cases in which only one object is recognized are not consistently faster than in cases in which both objects are recognized.
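This argument can be made concrete with a toy simulation of a strictly noncompensatory agent. All quantities below (the linear fluency function, the constant decision time, the knowledge probability) are illustrative assumptions, not parameters from Erdfelder et al. (2011); the only point is that memory strength jointly drives recognition speed and knowledge availability.

```python
import random

# Toy simulation: an agent that always chooses the recognized object
# (strictly noncompensatory) can still respond faster on trials where
# knowledge is available, because memory strength drives both fluency
# (recognition speed) and the availability of further knowledge.

random.seed(1)

def simulate_trial():
    strength = random.random()                   # memory strength of recognized object
    has_knowledge = random.random() < strength   # knowledge more likely for strong memories
    recognition_time = 1.0 - 0.5 * strength      # stronger memory -> faster recognition
    decision_time = 0.3                          # constant: knowledge is never consulted
    return has_knowledge, recognition_time + decision_time

trials = [simulate_trial() for _ in range(10000)]
rt_know = [rt for k, rt in trials if k]
rt_none = [rt for k, rt in trials if not k]
mean_know = sum(rt_know) / len(rt_know)
mean_none = sum(rt_none) / len(rt_none)
# mean_know < mean_none, despite strictly noncompensatory processing
```

The RT difference between knowledge and no-knowledge trials emerges even though the simulated decision stage is identical in both cases — exactly the confound that Erdfelder et al. identify in the response time tests.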


Figure 1: Response times in Pachur, Bröder, and Marewski (2008; Experiments 1–3 collapsed), separately for participants classified as compensatory users or noncompensatory users of recognition. Shown are the marginal estimated means (based on response times z-standardized for each participant), controlling for the fluency of the recognized and unrecognized objects.

Taken together, combining the recognition heuristic with an established model of the recognition process reveals that response time patterns that have been interpreted as supporting compensatory processes can be fully consistent with a noncompensatory use of recognition (see Erdfelder et al., 2011, pp. 18–19). Precise tests of the recognition heuristic can thus be misleading if the precision of the test is not matched to the precision (or completeness) of the model. For a derivation of response time predictions for the recognition heuristic based on the ACT-R architecture, see Marewski and Mehlhorn (in press).

Admittedly, Hilbig and Pohl (2009; Experiment 3) attempted to control for possible differences in fluency in one experiment, and repeated their analyses based on residual response times (after regressing response times on fluency). They found that, on the aggregate level, similar patterns emerged as when fluency was not controlled for. It is well known, however, that analyses on the aggregate level can hide substantial individual differences in strategy use. Several studies have shown that, even if only a small proportion of participants choose systematically differently than predicted by the recognition heuristic, the pattern on the aggregate level can contradict the recognition heuristic (Gigerenzer & Goldstein, 2011; Pachur et al., 2008). This also holds for response time data. As a reanalysis of data reported by Pachur et al. reveals, response time patterns can differ considerably between different strategy users. As shown in Figure 1, for the 51 of the 105 participants included in the analysis who were classified as not following the recognition heuristic (for details, see Pachur et al., 2008, pp. 203–204), the response times (controlling for fluency) were considerably faster when there was less conflicting knowledge (i.e., many cues supporting recognition) compared to when there was more conflicting knowledge (in the experiments, participants always had three additional cues, which either supported or contradicted recognition). For the 54 participants classified as following the recognition heuristic (because they always chose the recognized object), by contrast, this trend was considerably attenuated (although it did not disappear completely). Focusing on the aggregate level only might thus lead to the erroneous conclusion that the response-time patterns of all participants were strongly affected by the amount of conflicting knowledge.

3  Neglected issues in studying recognition-based inference

Without doubt, the thesis that recognition supersedes additional cue knowledge is a strong prediction. It may therefore not seem too surprising that a large proportion of empirical tests have focused on this aspect of the recognition heuristic. However, the recognition heuristic offers a much richer conceptual framework for studying adaptive decision making. Moreover, it provides a great opportunity for bridging memory and decision-making research (e.g., Dougherty, Gronlund, & Gettys, 2003; Weber, Goldstein, & Barlas, 1995; see Tomlinson, Marewski, & Dougherty, 2011). In the following, I highlight two important aspects of the recognition heuristic that seem to have been overlooked as a result of the overwhelming attention to the predicted noncompensatory processing: (a) the connection between research on the recognition heuristic and research on recognition memory, and (b) the mechanisms underlying people’s adaptive use of the recognition heuristic.

3.1  Different types of recognition memory

Above I have illustrated how ignoring the processes underlying the recognition judgment can make precise tests of the recognition heuristic based on response times rather uninformative. But the need to better understand the contribution of recognition memory to recognition-based inference goes further. For instance, Pleskac (2007) found in mathematical analyses that the accuracy of recognition memory should play a crucial role in the performance of the recognition heuristic (i.e., the recognition validity). In a study comparing recognition-based inferences by young and older adults, however, Pachur et al. (2009) found no association between the accuracy of people’s recognition memory and their individual recognition validity. This discrepancy suggests that the type of recognition memory usually studied in the memory literature might differ from the type of recognition memory tapped by the recognition heuristic. In common measures of recognition memory, participants are first asked to study a list of known words and are later asked to discriminate these studied words from other known words that were not studied. This episodic recognition thus requires the recollection of contextual information (such as source, time and place, feelings) about previous encounters (see Neely & Payne, 1983; Tulving, 1972). Semantic recognition, by contrast, which is crucial for tasks such as lexical decisions (e.g., Scarborough, Cortese, & Scarborough, 1977), relies on context-independent features. It is possible that the two types of recognition memory play different roles in the use of recognition in decision making.
For instance, semantic recognition might be crucial for distinguishing previously seen from novel objects, whereas the ability for episodic recognition might be key for evaluating whether using recognition in a particular situation is appropriate or not (Hertwig et al., 2008; Marewski, Gaissmaier, Schooler, Goldstein, & Gigerenzer, 2009; Volz et al., 2006; for a discussion, see Pachur et al., 2009). A stronger connection to concepts in the memory literature could thus be helpful for research on the recognition heuristic, leading to a better understanding of the role of recognition in recognition-based inference and, in particular, to better explanations of individual differences in the use of the recognition heuristic.

3.2  The adaptive use of recognition

How do people decide whether to follow the recognition heuristic or not? Although this question is central to the notion that the recognition heuristic is an adaptive tool, it has received relatively little attention so far. In one of the few studies examining the mechanisms underlying the adaptive use of recognition directly, Pachur and Hertwig (2006) tested three different hypotheses. According to the threshold hypothesis, people’s reliance on the recognition heuristic in a particular environment depends on whether the recognition validity exceeds a certain threshold or not. According to the matching hypothesis, people follow the heuristic with a probability that matches their individual recognition validity. According to the suspension hypothesis, the nonuse of the recognition heuristic results from object-specific knowledge, rather than being directly linked to the recognition validity (which is the same for all objects in an environment). Pachur and Hertwig found that the individual adherence rates were uncorrelated with the individual recognition validities (see also Pohl, 2006), inconsistent with both the matching and the threshold hypotheses. Supporting the suspension hypothesis, however, the degree to which participants followed recognition varied considerably across the different objects (focusing, of course, on those cases where the object was recognized). This suggests that the decision of whether to use the recognition heuristic or not is made for each individual pair of objects rather than for an entire environment.
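The contrasting predictions of the three hypotheses can be sketched as simple functions. The threshold value, the fallback probability below threshold, and the function names are all illustrative assumptions, not quantities from Pachur and Hertwig (2006); the sketch only shows what each hypothesis makes adherence depend on.

```python
# Predicted probability of choosing the recognized object for a given
# pair, under the three hypotheses tested by Pachur and Hertwig (2006).
# Numeric choices are illustrative assumptions.

def threshold_hypothesis(recognition_validity, threshold=0.7):
    # Environment-level: follow recognition for all pairs iff the
    # recognition validity exceeds a threshold (else, say, guess).
    return 1.0 if recognition_validity > threshold else 0.5

def matching_hypothesis(recognition_validity):
    # Environment-level: follow recognition with a probability that
    # matches the individual recognition validity.
    return recognition_validity

def suspension_hypothesis(conflicting_object_knowledge):
    # Pair-level: suspend the heuristic only when object-specific
    # knowledge contradicts recognition; validity plays no direct role.
    return 0.0 if conflicting_object_knowledge else 1.0
```

Note that only the first two hypotheses predict a correlation between individual recognition validities and adherence rates; the suspension hypothesis, which the data favored, predicts variation across objects instead.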

Nevertheless, there is clear evidence that across different environments, people follow the recognition heuristic more when the recognition validity in an environment is high compared to when it is low (Gigerenzer & Goldstein, 2011; Pachur et al., 2011). It thus seems that the question of adaptivity can be posed on two levels: First, within an environment, is it useful to follow recognition in a particular pair of objects? Second, is the recognition heuristic an appropriate tool in a particular environment? Pachur et al. (2009) referred to these two levels as item adaptivity and environment adaptivity, respectively. In the following, I discuss what mechanisms might give rise to item and environmental adaptivity.

3.2.1  Item adaptivity

The results of Pachur and Hertwig (2006) indicated that reliance on the recognition heuristic is based on object-specific information. What might the information be that people recruit to evaluate the adequacy of using the recognition heuristic? One possibility is recognition speed (i.e., fluency). There are at least two reasons why fluency might be a useful indicator for the appropriateness of following recognition. First, as fluency is often correlated with the criterion (e.g., Hertwig et al., 2008), following recognition when the recognized object was recognized swiftly should, ceteris paribus, lead to more correct decisions than when the recognized object was recognized slowly (see Marewski et al., 2010). Second, as outlined by Erdfelder et al. (2011), fluency might indicate the certainty (and thus accuracy) of the recognition judgment—that is, whether the object was indeed previously encountered or not.

An alternative possibility is that additional cue knowledge—rather than being used directly to make an inference—is used as a “meta-cue” to decide whether to use or to suspend the recognition heuristic. For illustration, consider a person who is asked to judge whether Chernobyl or an unrecognized Russian city is larger. Because the person knows that Chernobyl is well known due to a nuclear disaster, she might suspend the recognition heuristic in that particular case and choose the unrecognized city (see Oppenheimer, 2003). A third possibility is that processes of source monitoring (Johnson, Hashtroudi, & Lindsay, 1993; Lindsay & Johnson, 1991) influence the decision of whether to follow recognition or not. Specifically, one might infer simply from one’s ability to retrieve specific knowledge about the source of an object’s recognition—for instance, that a city is recognized from a friend’s description of a trip—that recognition is an unreliable cue in this case. Why? One indication that recognition is a potentially valid predictor is when an object is recognized after encountering it multiple times in many different contexts (e.g., hearing a name in several conversations with different people, or across various media), rather than through one particular, possibly biased source. Thus, being able to easily think of one particular source could indicate unreliability. Conversely, if an object has appeared in many different contexts, retrieving information about any specific context is more difficult and associated with longer retrieval times than when an object has appeared in only one particular context (known as the “fan effect”—Anderson, 1974). As a consequence, difficulty in retrieving detailed information concerning a particular context in which an object was encountered could indicate that recognition has been produced by multiple sources and is therefore an ecologically valid cue (see Pachur et al., 2011).

3.2.2  Environment adaptivity

As mentioned above, the average adherence rate in an environment usually follows the average recognition validity rather closely (Gigerenzer & Goldstein, 2011; Pachur et al., 2009, 2011). How do people achieve this apparently adaptive use of the recognition heuristic? Given that, within an environment, individual recognition validities are uncorrelated with individual adherence rates (Pachur & Hertwig, 2006; Pohl, 2006), individual learning seems an unlikely factor. What are the alternatives? One possibility is that the mechanisms underlying item adaptivity and environment adaptivity are closely connected. For instance, if recognized objects tend to be retrieved less fluently, and discrediting cue or source knowledge is more likely to be available, in environments with low recognition validity than in those with high recognition validity, then item adaptivity might produce environment adaptivity as a by-product. Another possibility is that people hold subjective theories about the predictive power of recognition in different environments and adjust their reliance on recognition based on these beliefs (e.g., Wright & Murphy, 1984). Although such theories may not always be correct, they could nevertheless capture relative differences in recognition validity between environments rather well.
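The first possibility can be sketched minimally. The link between environment validity and retrieval fluency is an assumption of the argument, not an established parameter: if recognized objects are, on average, retrieved more slowly in low-validity environments, then a fixed item-level rule (follow recognition only when retrieval feels fast) automatically yields a lower adherence rate in those environments, without any environment-level learning.

```python
import random

random.seed(3)

def adherence_rate(mean_fluency, n_items=20000, threshold=0.5):
    """Item-level rule: follow recognition only when fluency exceeds a
    fixed threshold. Gaussian fluency with an environment-specific mean
    is an assumption made for illustration."""
    followed = sum(random.gauss(mean_fluency, 0.3) > threshold
                   for _ in range(n_items))
    return followed / n_items

high_validity_env = adherence_rate(mean_fluency=0.8)  # fluent retrieval
low_validity_env = adherence_rate(mean_fluency=0.4)   # sluggish retrieval
print(f"adherence, high-validity environment: {high_validity_env:.2f}")
print(f"adherence, low-validity environment:  {low_validity_env:.2f}")
```

Note that the decision maker here applies the identical rule everywhere; the environment-level pattern emerges solely from the assumed difference in fluency distributions.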

Taken together, because tests of the recognition heuristic have been primarily concerned with testing the predicted noncompensatory processing, we know relatively little about the principles underlying people’s decision to use or suspend the recognition heuristic. Nevertheless, from the little we know, the emerging picture suggests that there are actually many different reasons—rather than only one reason—for people to discard the recognition heuristic and use alternative strategies instead.

4  Conclusion

Consistent with other models of decision heuristics, the recognition heuristic assumes limited search and noncompensatory processing. Clever empirical tests based on precise measures of noncompensatory processing have shown that this assumption is sometimes violated. Should we therefore retire the recognition heuristic, as some have demanded? I have argued for a more cautious and constructive approach to testing the recognition heuristic. In fact, it is not surprising that the recognition heuristic cannot capture all the data: like every model, it is a simplification of reality and thus wrong. Mere demonstrations that a model deviates from reality do little to advance science. What is required in addition is a new (or modified) model that can accommodate the violations of the rejected one. Moreover, given that the recognition heuristic as proposed by Goldstein and Gigerenzer (2002) does not provide a complete account of cognition (e.g., it does not model the recognition process), highly precise tests can yield rather ambiguous results. Although it is violated by some data, the recognition heuristic is, in my view, currently still the best available model for predicting people’s recognition-based inferences. And having an imperfect model is clearly better than having no model at all (or only a vague one). When considering possible alternative models, it should also not be overlooked that recognition-based inference can probably be understood only if we continue to focus on the close link between the mind and the environment. Only then can we further refine our understanding of why, as Montaigne observed, failures in memory can actually be beneficial for making good judgments.

References

Anderson, J. R. (1974). Retrieval of propositional information from long-term memory. Cognitive Psychology, 5, 451–474.

Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2, 396–408.

Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.

Birnbaum, M. H. (2008). New paradoxes of risky decision making. Psychological Review, 115, 463–501.

Birnbaum, M. H., & Chavez, A. (1997). Tests of theories of decision making: Violations of branch independence and distribution independence. Organizational Behavior and Human Decision Processes, 71, 161–194.

Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.

Brighton, H., & Gigerenzer, G. (2011). Towards competitive instead of biased testing of heuristics: A reply to Hilbig and Richter (2011). Topics in Cognitive Science, 3, 197–205.

Bröder, A., & Eichler, A. (2006). The use of recognition information and additional cues in inferences from memory. Acta Psychologica, 121, 275–284.

Bröder, A., & Schütz, J. (2009). Recognition ROCs are curvilinear—or are they? On premature arguments against the two-high-threshold model of recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 587–606.

Brunswik, E. (1952). The conceptual framework of psychology. Chicago: University of Chicago Press.

Davis-Stober, C. P., Dana, J., & Budescu, D. V. (2010). Why recognition is rational: Optimality results on single-variable decision rules. Judgment and Decision Making, 5, 216–229.

Dougherty, M. R. P., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics. Psychological Review, 115, 199–213.

Dougherty, M. R. P., Gronlund, S. D., & Gettys, C. F. (2003). Memory as a fundamental heuristic for decision making. In S. L. Schneider & J. Shanteau (Eds.), Emerging perspectives on judgment and decision research (pp. 125–164). Cambridge, UK: Cambridge University Press.

Erdfelder, E., Küpper-Tetzel, C. E., & Mattern, S. D. (2011). Threshold models of recognition and the recognition heuristic. Judgment and Decision Making, 6, 7–22.

Fiedler, K. (1983). On the testability of the availability heuristic. In R. W. Scholz (Ed.), Decision making under uncertainty: Cognitive decision research, social interaction, development and epistemology (pp. 109–119). Amsterdam: North-Holland.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 103, 592–596.

Gigerenzer, G., & Goldstein, D. G. (2011). The recognition heuristic: A decade of research. Judgment and Decision Making, 6, 100–121.

Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. New York: Oxford University Press.

Glöckner, A., & Bröder, A. (2011). Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time. Judgment and Decision Making, 6, 23–42.

Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.

Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model of how the mind exploits a by-product of information retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1191–1206.

Hilbig, B. E. (2010a). Precise models deserve precise measures: A methodological dissection. Judgment and Decision Making, 5, 272–284.

Hilbig, B. E. (2010b). Reconsidering “evidence” for fast and frugal heuristics. Psychonomic Bulletin and Review, 17, 923–930.

Hilbig, B. E., & Pohl, R. F. (2008). Recognizing users of the recognition heuristic. Experimental Psychology, 55, 394–401.

Hilbig, B. E., & Pohl, R. F. (2009). Ignorance- versus evidence-based decision making: A decision time analysis of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1296–1305.

Hilbig, B. E., & Richter, T. (2011). Homo heuristicus outnumbered: Comment on Gigerenzer and Brighton (2009). Topics in Cognitive Science, 3, 187–196.

Hilbig, B. E., Pohl, R. F., & Bröder, A. (2009). Criterion knowledge: A moderator of using the recognition heuristic? Journal of Behavioral Decision Making, 22, 510–522.

Hilbig, B. E., Erdfelder, E., & Pohl, R. F. (2010). One-reason decision making unveiled: A measurement model of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 123–134.

Hilbig, B. E., Scholl, S. G., & Pohl, R. F. (2010). Think or blink—Is the recognition heuristic an “intuitive” strategy? Judgment and Decision Making, 5, 300–309.

Hochman, G., Ayal, S., & Glöckner, A. (2010). Physiological arousal in processing recognition information: Ignoring or integrating cognitive cues? Judgment and Decision Making, 5, 285–299.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.

Katsikopoulos, K. V. (2010). The less-is-more effects: Predictions and tests. Judgment and Decision Making, 5, 244–257.

Lindsay, D. S., & Johnson, M. K. (1991). Recognition memory and source monitoring. Bulletin of the Psychonomic Society, 29, 203–205.

Marewski, J. N., & Mehlhorn, K. (in press). Using the ACT-R architecture to specify 39 quantitative process models of decision making. Judgment and Decision Making.

Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2009). Do voters use episodic knowledge to rely on recognition? In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2232–2237). Austin, TX: Cognitive Science Society.

Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2010). From recognition to decisions: Extending and testing recognition-based models for multi-alternative inference. Psychonomic Bulletin and Review, 17, 287–309.

Marewski, J. N., & Schooler, L. J. (2011). Cognitive niches: An ecological model of strategy selection. Psychological Review, 118, 393–437.

de Montaigne, M. (1595/2003). The complete essays. London: Penguin.

Neely, J. H., & Payne, D. G. (1983). A direct comparison of recognition failure rates for recallable names in episodic and semantic memory tests. Memory and Cognition, 11, 161–171.

Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 333–346.

Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 923–935.

Oeusoonthornwattana, O., & Shanks, D. R. (2010). I like what I know: Is recognition a non-compensatory determiner of consumer choice? Judgment and Decision Making, 5, 310–325.

Oppenheimer, D. M. (2003). Not so fast! (and not so frugal!): Rethinking the recognition heuristic. Cognition, 90, B1–B9.

O’Regan, J. K., & Jacobs, A. M. (1992). Optimal viewing position effect in word recognition: A challenge to current theory. Journal of Experimental Psychology: Human Perception and Performance, 18, 185–197.

Pachur, T. (2010). Recognition-based inference: When is less more in the real world? Psychonomic Bulletin and Review, 17, 589–598.

Pachur, T., Bröder, A., & Marewski, J. N. (2008). The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making, 21, 183–210.

Pachur, T., Hanoch, Y., & Gummerum, M. (2010). Prospects behind bars: Analyzing decisions under risk in a prison population. Psychonomic Bulletin and Review, 17, 630–636.

Pachur, T., & Hertwig, R. (2006). On the psychology of the recognition heuristic: Retrieval primacy as a key determinant of its use. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 983–1002.

Pachur, T., Mata, R., & Schooler, L. J. (2009). Cognitive aging and the adaptive use of recognition in decision making. Psychology and Aging, 24, 901–915.

Pachur, T., Schooler, L. J., & Stevens, J. R. (in press). When will we meet again? Regularities in the dynamics of social contact reflected in memory and decision making. In R. Hertwig, U. Hoffrage, & the ABC Research Group, Simple heuristics in a social world. New York: Oxford University Press.

Pachur, T., Todd, P. M., Gigerenzer, G., Schooler, L. J., & Goldstein, D. G. (2011). The recognition heuristic: A review of theory and tests. Frontiers in Psychology, 2, article 147, 1–14.

Pleskac, T. J. (2007). A signal detection analysis of the recognition heuristic. Psychonomic Bulletin and Review, 14, 379–391.

Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 251–271.

Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 150–162.

Rieskamp, J. (2008). The probabilistic nature of preferential choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1446–1465.

Rouder, J. N., & Ratcliff, R. (2004). Comparing categorization models. Journal of Experimental Psychology: General, 133, 63–82.

Scarborough, D. L., Cortese, C., & Scarborough, H. S. (1977). Frequency and repetition effects in lexical memory. Journal of Experimental Psychology: Human Perception and Performance, 3, 1–17.

Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological Review, 112, 610–628.

Sherman, S. J., & Corty, E. (1984). Cognitive heuristics. In R. S. Wyer & T. K. Srull (Eds.), Handbook of social cognition (Vol. 1, pp. 189–286). Hillsdale, NJ: Erlbaum.

Shiffrin, R. M., Lee, M. D., Kim, W. J., & Wagenmakers, E.-J. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32, 1248–1284.

Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.

Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652–654.

Tomlinson, T., Marewski, J. N., & Dougherty, M. (2011). Four challenges for cognitive research on the recognition heuristic and a call for a research strategy shift. Judgment and Decision Making, 6, 89–99.

Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381–403). New York, NY: Academic Press.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.

Volz, K. G., Schooler, L. J., Schubotz, R. I., Raab, M., Gigerenzer, G., & von Cramon, D. Y. (2006). Why you think Milan is larger than Modena: Neural correlates of the recognition heuristic. Journal of Cognitive Neuroscience, 18, 1924–1936.

Wallsten, T. S. (1983). The theoretical status of judgmental heuristics. In R. W. Scholz (Ed.), Decision making under uncertainty: Cognitive decision research, social interaction, development and epistemology (pp. 21–39). Amsterdam: North-Holland.

Weber, E. U., Goldstein, W. M., & Barlas, S. (1995). And let us not forget memory: The role of memory processes and techniques in judgment and choice. In J. R. Busemeyer, R. Hastie & D. L. Medin (Eds.), Decision making from the perspective of cognitive psychology (pp. 33–81). New York: Academic Press.

Wright, J. C., & Murphy, G. L. (1984). The utility of theories in intuitive statistics: The robustness of theory-based judgments. Journal of Experimental Psychology: General, 113, 301–322.


*
University of Basel, Department of Psychology, Missionsstrasse 60–62, 4055 Basel, Switzerland. Email: Thorsten.pachur@unibas.ch.
I thank Jonathan Baron, Benjamin Hilbig, Julian Marewski, Rüdiger Pohl, and Oliver Vitouch for comments on an earlier draft of this paper, and Laura Wiles for editing the manuscript.
1
The availability heuristic, by contrast, has often been criticized as being only vaguely defined (e.g., Fiedler, 1983; Wallsten, 1983). Moreover, neither its boundary conditions nor its relationship to other heuristics (such as representativeness; see Sherman & Corty, 1984) have been specified (Gigerenzer, 1996).
2
For a comparison of various approaches to measure the use of the recognition heuristic, such as adherence rates, DI, and the r-model, see Hilbig (2010a).
3
Note that compensatory models can differ considerably in their processing assumptions and predicted decisions, making it rather uninformative to collapse them all (e.g., Rieskamp, 2008).
