Judgment and Decision Making, vol. 6, no. 1, February 2011, pp. 89-99

Four challenges for cognitive research on the recognition heuristic and a call for a research strategy shift

Tracy Tomlinson    Julian N. Marewski    Michael Dougherty

The recognition heuristic assumes that people make inferences based on the output of recognition memory. While much work has been devoted to establishing the recognition heuristic as a viable description of how people make inferences, more work is needed to fully integrate research on the recognition heuristic with research from the broader cognitive psychology literature. In this article, we outline four challenges that should be met for this integration to take place, and close with a call to address these four challenges collectively, rather than piecemeal.


Keywords: recognition heuristic, recognition memory, formal modeling, strategy selection, cognitive architectures.

1  Introduction

Goldstein and Gigerenzer (1999) proposed the recognition heuristic as a mechanism to describe inferential judgments. Briefly, this heuristic asserts that an inference can be made merely on the basis of the presence or absence of information in memory: “if one of two objects is recognized and the other is not, then infer that the recognized object has the higher value” (Goldstein & Gigerenzer, 2002, p. 76). The elegance of the recognition heuristic as a decision rule lies in its simplicity — not only does it purport that inferences can be achieved through a simple rule, but it also appears to be an intuitive process model.
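As a decision rule, the heuristic can be stated in a few lines of code. The sketch below is ours and purely illustrative: the function name and the representation of recognition as a set of names are our assumptions, not part of the original formulation.

```python
def recognition_heuristic(a, b, recognized):
    """Infer which of two objects has the higher criterion value.

    `recognized` is the set of object names the decision maker recognizes.
    Returns the inferred object, or None when the heuristic does not apply
    (both or neither object recognized), in which case another strategy
    would be needed.
    """
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return None  # heuristic cannot discriminate between the objects

# Example: which city has the larger population?
print(recognition_heuristic("Munich", "Recklinghausen", {"Munich", "Berlin"}))
```

Note that the rule is non-compensatory: when it applies, no further knowledge about the objects enters the inference, which is precisely the assumption later challenged in the literature reviewed below.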

The development of the recognition heuristic by Goldstein and Gigerenzer (1999, 2002; see also Gigerenzer & Goldstein, 1996) has fueled over a decade of research within behavioral decision making. While the recognition heuristic has been well received by many researchers (e.g., Borges, Goldstein, Ortmann, & Gigerenzer, 1999; Hertwig, Herzog, Schooler, & Reimer, 2008; Pachur, 2010; Pachur & Hertwig, 2006; Pachur & Biele, 2007; Pachur, Mata, & Schooler, 2009; Serwe & Frings, 2006; Snook & Cullen, 2006; Volz et al., 2006), others have been rather critical of its accuracy in describing the underlying psychological processes (e.g., Dougherty, Franco-Watkins, & Thomas, 2008; Hilbig & Pohl, 2008; Bröder & Eichler, 2006; B. R. Newell & Fernandez, 2006; Oppenheimer, 2003; Pohl, 2006; Richter & Späth, 2006). These criticisms notwithstanding, the recognition heuristic has focused attention on recognition-based inference and, much like other heuristics such as representativeness, has secured a permanent place in the judgment and decision making vernacular (Tversky & Kahneman, 1973, 1974). While the construct of representativeness proved transformational to judgment and decision making research, it has arguably run its course as a useful theoretical construct (Gigerenzer, 1996). The question, however, is whether the recognition heuristic will follow a similar path.

In this article, we present four closely related challenges for researchers investigating the recognition heuristic, as well as a call for a fundamental shift in research strategy. We argue that these challenges should be met and our call for a strategy shift be taken up if theoretical progress in research on the recognition heuristic is to move forward.

Before going into detail, a few comments are warranted. First, our four challenges and our call for a research strategy shift are not intended as criticisms, but rather as proposals for future research directions. Second, while we frame these challenges and our call for a strategy shift with respect to the recognition heuristic, they may apply to related models of inference, including classics such as the availability heuristic. Importantly, it is likely that meeting the challenges will require extending and revising the recognition heuristic; whether this heuristic will then still be named recognition heuristic is immaterial to our points, because, as Hintzman (1990) observed, “the explanatory burden is carried by the proposed mechanisms, not by what they are called” (p. 121).

Finally, some of our challenges are intended to consider the full import of memory processes for judgment and decision making. The literature on memory theory provides fertile ground for enriching and extending theories of judgment and decision making. While recent work in decision theory has emphasized the adaptive role that recognition may play in facilitating accurate judgment, errors and biases are commonplace amongst memory phenomena, including false memories (Roediger & McDermott, 1995), misinformation (Loftus, 2005), imagination inflation (Garry, Manning, Loftus, & Sherman, 1996), and reality and source monitoring errors (M. Johnson, Hashtroudi, & Lindsay, 1993), amongst many others (see Koriat, Goldsmith, & Pansky, 2000). Inasmuch as memory processes are exploited for the purposes of judgment and decision making, understanding the full range of effects associated with common usage of memory seems relevant for understanding judgment and decision making. In a similar vein, inasmuch as there are multiple strategies that may be deployed for guiding judgment and decision making—as is assumed by the framework in which the recognition heuristic has been developed—there is a need for developing cognitive architectures that both accommodate multiple strategies and model how people select amongst them. For example, it is not sufficient to postulate that some choices are made by relying on recognition memory; one must model as precisely as possible how the decision maker has come to utilize recognition, as opposed to some other process. Thus, the challenges posed below are embedded in a far greater challenge—a meta-challenge, of sorts—which is to build integrative models that respect the empirical and theoretical foundations of both memory and judgment and decision making.

2  Challenge 1: Define and model the basis of recognition as a function of memory

The recognition heuristic, as specified by Goldstein and Gigerenzer (1999, 2002), operates on the output of memory, and therefore does not directly model the processes of recognition memory. On the one hand, modeling inferences as a function of the output of memory seems perfectly appropriate—much can be learned by starting where memory leaves off. However, we suggest that much richer theoretical insights can be made by integrating models of memory with models of choice, and that a full account of recognition-based inference as assumed by the recognition heuristic necessitates that we bridge the gap between the memory processes underlying recognition and the rules that operate on the output of memory.

Why is it necessary to understand the memorial basis of recognition in order to study recognition-based inference? Perhaps the most compelling reason for integrating memory theory with research on the recognition heuristic is that the underlying basis of the recognition heuristic is in fact memory. Thus, in order to understand inference behavior that is assumed to operate on the output of memory, it seems important to understand the processes that enable one to recognize elements within the choice set, that is, within the set of objects about which an inference is to be made.

The second reason is that the assumptions one makes about memory can have important consequences for understanding the factors that affect the recognition decision, and for understanding how likely it is that a person making inferences in line with the recognition heuristic will be accurate. Similar arguments have been made by Wixted (2007) in comparing the unequal-variance signal detection model and dual-process models in accounting for recognition memory data. This point was also demonstrated by Schooler and Hertwig (2005) in their implementation of the recognition heuristic in the ACT-R cognitive architecture (e.g., Anderson et al., 2004) and in a signal detection model, and by Pleskac (2007) in his application of signal detection theory to the recognition heuristic. As this work illustrates, depending on the assumptions one makes regarding the underlying memory processes, different predictions regarding inference behavior and the accuracy of a person’s inferences will be realized. For example, Schooler and Hertwig (2005) showed that the forgetting of memory contents over time will systematically influence the accuracy of decisions that can be made with the recognition heuristic.
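To make concrete how memory assumptions shape predictions, the sketch below implements the standard ACT-R base-level learning equation, B = ln(Σ_j t_j^{-d}), where t_j is the time since each encounter with an item and d is the decay rate, and then thresholds the resulting activation into a binary recognition judgment. The decay rate, retrieval threshold, and presentation histories are illustrative values of our choosing, not the parameters used by Schooler and Hertwig (2005).

```python
import math

def base_level_activation(lags, decay=0.5):
    """ACT-R base-level learning: activation B = ln(sum of t**-d over all
    presentations), where t is the time since each presentation and d is
    the decay rate. Frequent and recent encounters yield high activation."""
    return math.log(sum(t ** -decay for t in lags))

def recognized(lags, threshold=0.0, decay=0.5):
    """Binary recognition judgment: retrieval succeeds only when the
    item's activation exceeds the retrieval threshold."""
    return base_level_activation(lags, decay) > threshold

# A name encountered often and recently stays above threshold...
print(recognized([1, 10, 100]))
# ...while a single encounter long ago has decayed below it,
# i.e., forgetting systematically shapes which names can drive inference.
print(recognized([10000]))
```

Under these assumptions, which objects are "recognized" at decision time is itself a function of environmental exposure and decay, which is why different memory models yield different predictions for the heuristic's accuracy.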

In short, the challenge for researchers moving forward is to better describe the memorial processes that underpin the recognition decision. In our view, if one accepts that recognition-based inference depends on the particular assumptions made about memorial processes, then specifying these memory processes must take precedence over research on the recognition heuristic itself. Thus, the study of recognition-based inference should take place within the context of well-specified models of memory. First steps in this direction, albeit without actually implementing well-specified memory models, were taken several years ago. For instance, B. R. Newell and Fernandez (2006) experimentally examined whether people make inferences based on a binary or a continuous recognition trace, and Pachur and Hertwig (2006) ran two experiments to investigate whether the recognition of an alternative is assessed faster than knowledge about that alternative is retrieved from memory. How these and other questions can be addressed with a well-specified model of memory has recently been shown by Marewski and Schooler (2010), who use the ACT-R architecture to predict quantitatively the retrieval of knowledge beyond recognition, the binary recognition judgment, and the underlying continuous memory activation. Using a much different modeling architecture, Thomas, Dougherty, Sprenger, and Harbison's (2008) HyGene model is able to account for a host of phenomena within both the memory and the judgment and decision making literatures. Within this model, the same processes required for recognizing a colleague at a conference or a word in a memory experiment are used as the basis of inference and probability judgment.
Most recently, Erdfelder, Küpper-Tetzel, and Mattern (2011) have studied the recognition heuristic from the perspective of a two-high-threshold model of recognition memory (Bredenkamp & Erdfelder, 1996; Snodgrass & Corwin, 1988) that belongs to the class of multinomial processing tree models (Batchelder & Riefer, 1990; Erdfelder et al., 2009). The idea that memory theory is important beyond mere recognition-based inference motivates our second challenge.

3  Challenge 2: Study decision tasks beyond two-alternative forced choice

The recognition heuristic, as originally formulated by Goldstein and Gigerenzer (1999, 2002), is a model for a situation in which a decision maker sees two names, one recognized and the other not, and has to decide which of the two options possesses a higher score on a given criterion. While people may face this type of task in some situations, people routinely utilize recognition processes in a variety of other contexts. We suspect that much can be learned about recognition-based inference by studying such tasks. Let us point to a couple of examples.

Multi-alternative inference. Recent work has begun to extend research on recognition-based inferences beyond the choice between two names (e.g., Frosch, Beaman, & McCloy, 2007). For example, in the marketing literature, many theories of choice assume a two-stage process: When we are evaluating multiple options, such as which of 20 jams to buy, or which food to choose in a restaurant, a smaller set of relevant options is formed first, then a choice is made after more detailed examination of the options in this consideration set (Alba & Chattopadhyay, 1985; Hauser & Wernerfelt, 1990; Howard & Sheth, 1969). Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010) have proposed that the recognition heuristic allows a person to create such consideration sets, consisting of options with recognized names. In a second stage, recognized options can be ranked with heuristics that use alternatives’ attributes as cues—say, knowledge about a jam’s ingredients, or about a meal’s taste. Unrecognized options can be put aside. By ignoring the unheard-of and unrecognized, the recognition heuristic reduces the complexity of choosing among many options—much like Tversky’s (1972) classic elimination by aspects model does on the basis of other probabilistically selected criteria. However, the process specified by Marewski, Gaissmaier, et al. applies only in those cases where the decision maker is presented with the options, say when choosing among jams on supermarket shelves, or among meals on a restaurant’s menu. Thus, this process does not address the case where the options must first be generated from memory.
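The two-stage process can be sketched as follows. The brand names, scores, and function signature are hypothetical, and the sketch deliberately ignores the probabilistic elements of real consideration-set models; it only illustrates the screening role of recognition.

```python
def two_stage_choice(options, recognized, knowledge_score):
    """Stage 1: form a consideration set containing only recognized options.
    Stage 2: rank the consideration set by attribute knowledge and choose."""
    consideration_set = [o for o in options if o in recognized]
    if not consideration_set:
        return None  # nothing recognized; fall back to another strategy
    return max(consideration_set, key=knowledge_score)

# Hypothetical supermarket shelf of jams and attribute-based scores.
jams = ["BrandA", "BrandB", "BrandC", "BrandD"]
scores = {"BrandA": 2, "BrandB": 5, "BrandC": 9, "BrandD": 7}

choice = two_stage_choice(jams, {"BrandA", "BrandB"}, scores.get)
print(choice)
```

Note the characteristic consequence of the first stage: BrandC scores highest on knowledge, but because it is never recognized it never enters the consideration set, so BrandB is chosen.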

Option generation. In many real-world inference tasks, it is incumbent on the decision maker to generate the set of options to be considered. For example, before considering where to eat dinner, the decision maker must generate a set of options (e.g., Mexican, Japanese, or German food); prior to nominating a student for an academic award, the set of viable contenders must be defined. In such cases, the choice set is not laid out in front of the decision maker; instead, it must be defined internally: one must consult his or her memory to determine the set of viable dining options, or to identify particularly meritorious students. For making a choice among such options, the recognition heuristic would scarcely be applicable, since, by definition, all retrieved options will necessarily be recognized.1

The determination of the choice set constitutes what Gettys and Fisher (1979) termed pre-decision processes—processes that take place prior to the implementation of a decision rule. According to Gettys and Fisher (1979), these processes are foundational to the decision process: Decision and judgment can take place only after the choice set has been defined. Moreover, inasmuch as the memory processes responsible for generating the choice set are prone to errors or biases, these errors in memory can easily cascade into errors or biases in judgment (Dougherty & Hunter, 2003; Dougherty & Sprenger, 2006; Dougherty, Thomas, & Lange, 2010; Thomas et al., 2008).

The HyGene model of Thomas et al. (2008) attempts to bridge the gap between memory and judgment by implementing decision rules within the context of a well-specified model of memory (see also Dougherty, Gettys, & Ogden, 1999; Juslin & Persson, 2002; Reyna, Lloyd, & Brainerd, 2003; Schooler & Hertwig, 2005). Thus, HyGene models the process by which choice sets are generated, by explicitly linking the generation process to memory retrieval. Within HyGene, options are recalled from memory using contextual or evidence-based information and maintained in a capacity-limited set of leading contenders (e.g., the set of best options), which are then input into a decision rule that allows individuals to choose the best option. The decision rule in HyGene is based on memory activation, which is akin to a recognition signal in models of memory. Inasmuch as the choice set is determined by the recall of options from memory, it will affect later decision making (see Dougherty, Thomas, & Lange, 2010). This leads us directly to our next challenge for the recognition heuristic.
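The generate-then-choose idea that HyGene formalizes can be caricatured in a few lines. To be clear, this is not HyGene's actual retrieval machinery, which operates on distributed memory traces; the context tags, activation values, and capacity limit below are invented for illustration only.

```python
def generate_and_choose(memory, cue, capacity=4):
    """Loose sketch of generate-then-choose: retrieve options whose stored
    context matches the cue, keep only the `capacity` most active ones as
    leading contenders (a working-memory limit), then choose the contender
    with the highest activation."""
    retrieved = [(opt, act) for opt, (ctx, act) in memory.items() if ctx == cue]
    contenders = sorted(retrieved, key=lambda p: p[1], reverse=True)[:capacity]
    return max(contenders, key=lambda p: p[1])[0] if contenders else None

# Hypothetical long-term memory: option -> (context tag, activation).
memory = {
    "Mexican": ("dinner", 0.9), "Japanese": ("dinner", 0.7),
    "German": ("dinner", 0.4), "Thai": ("dinner", 0.6),
    "Diner": ("breakfast", 0.8),
}
print(generate_and_choose(memory, "dinner", capacity=3))
```

The sketch makes the cascade visible: any option that memory fails to generate, or that falls outside the capacity-limited contender set, can never be chosen, however good it may be on the criterion.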

4  Challenge 3: Describe how people choose to use recognition as opposed to other inference processes, where applicable

The thesis that people possess a repertoire of strategies to choose from has been formulated in many areas, including choice (Einhorn, 1970; Fishburn, 1980; Payne, 1976; Payne, Bettman, & Johnson, 1988, 1993; Rapoport & Wallsten, 1972), social interactions (Erev & Roth, 2001; Stahl, 1999), mathematical skill (Siegler, 1988), word recognition (Eisenberg & Becker, 1982), and question answering (Reder, 1987), to name a few. In a similar manner, the recognition heuristic is realized as one of a repertoire of heuristics—an adaptive toolbox—with each heuristic being adapted to different decision contexts or ecologies. Assuming a set of heuristics to describe behavior may well be useful for understanding the complexities of human decision making. Yet, for such a multiple-strategy approach to drive long-term theoretical advances, work must be done to integrate the heuristics within a common theoretical framework that allows one to predict when one heuristic is chosen over another (e.g., Dougherty et al., 2008; Gigerenzer, Hoffrage, & Goldstein, 2008; Glöckner, Betsch, & Schindler, 2010; Marewski, 2010; Marewski, Schooler, & Gigerenzer, 2010).

In fact, much work has tackled the question of how people choose between the different processes, decision strategies, operators, routines, and production rules available to them (e.g., Lovett & Anderson, 1996; Beach & Mitchell, 1978; Busemeyer & Myung, 1992; Rieskamp & Otto, 2006; Payne, Bettman, & E. J. Johnson, 1988, 1993), and some of this work has also addressed the issue of strategy selection for the recognition heuristic (e.g., Pachur & Hertwig, 2006; Marewski, Gaissmaier, et al., 2009, 2010; Volz et al., 2006; Hertwig et al., 2008). However, except for Marewski and Schooler (2010), strategy selection for the recognition heuristic has not been cast into a detailed architectural and quantitative model. Hence, the challenge here is to develop modeling approaches that allow one to capture behavior both at the heuristic level and at the strategy selection level.

Various theoretical pathways can be taken to tackle this modeling challenge, with various perspectives disagreeing on which path is most productive. Dougherty and colleagues (Dougherty et al., 2008; Thomas et al., 2008; Dougherty et al., 2010) have argued that judgment and decision making is best modeled within the context of a highly constrained cognitive architecture. In their view, theoretical advances require that the multiple-strategy approach provide a unifying theory that describes how individual strategies are selected, and which places constraints on the growth of the proposed repertoire of heuristics. In their view, without such constraints, the repertoire of heuristics runs the risk of continuing to grow, with a new heuristic identified for each unique context, environment, or task in which a decision is made.

An example of a constrained system is HyGene. In this model, the requirement that the model retain its ability to capture memory phenomena in addition to judgment phenomena imposes tight constraints on making modifications to accommodate new experimental findings. Within the HyGene model, judgment and decision making is assumed to be based on (a) recall of options from long-term memory, (b) maintenance of these options in working memory, and (c) a small set of rules that guide choice, enable the formulation of probability judgments, and inform information search (e.g., hypothesis testing). The recognition heuristic and other recognition-based decision mechanisms are special cases of more general memory retrieval processes (Dougherty et al., 1999), which also serve as input into algorithms for computing subjective probability and selecting information for hypothesis testing. By assuming a set of algorithms for probability judgment and hypothesis testing, HyGene technically qualifies as a multiple-strategy model; however, the operation of these rules is constrained by the retrieval processes that precede their implementation (see Dougherty et al., 2010). Moreover, the algorithms are conceived of as fairly flexible and general, that is, applicable across different decision problems.

An alternative theoretical framework for modeling strategy selection is the ACT-R cognitive architecture, which has been used to implement the recognition heuristic, the fluency heuristic, and other decision strategies within a unifying quantitative theory (e.g., Schooler & Hertwig, 2005; Marewski & Schooler, 2010). In ACT-R, decision strategies can be realized as sets of production rules (i.e., if-then rules) that operate on motor, perceptual, memory, intentional, and other processes, which in turn are implemented as a set of modules and information-processing bottlenecks (called buffers). Being perhaps the most detailed quantitative theory of cognition developed to date, ACT-R—much like HyGene—imposes strong constraints on accommodating new empirical findings.

However, in contrast to Dougherty et al. (2010), who have argued for a small set of general cognitive processes, the heuristics that have been implemented in ACT-R are typically considered to form part of a larger repertoire. According to this view (e.g., Gigerenzer, Todd, & the ABC Research Group, 1999; Marewski, Gaissmaier, & Gigerenzer, 2010a, 2010b), human judgment can be error-prone; yet, across many tasks the ability to make accurate and fast inferences is fundamental to adaptive behavior. As a number of computer simulation studies, mathematical analyses, and experiments show, this goal may be better served by a repertoire of specialized heuristics than by more general, less specialized tools (e.g., Brighton, 2006; Czerlinski, Gigerenzer, & Goldstein, 1999; Gigerenzer & Brighton, 2009; Gigerenzer & Goldstein, 1996).

Finally, a very different position from those embraced by Dougherty, Gigerenzer, Marewski, and others is taken by Glöckner et al. (2010), who suggest replacing the notion of multiple decision strategies with an all-purpose mechanism that applies to all tasks, using just a single decision rule. Metaphorically, the distinction between the various theoretical views can be stated thus: do people have a collection of specialized tools in their mental toolbox (e.g., a wrench, a hammer, a screwdriver, etc.), or do they have a small number of general purpose tools—or possibly even just one—akin to an adjustable wrench (B. R. Newell, 2005)? As a matter of theory development, the explanatory power of theories arguably rests, at least in part, on their generality. Within multiple-strategy approaches, such as those advocated by Gigerenzer, Marewski, and others, this generality is captured by both the architectural theory of strategy selection and the specific tools—regardless of the number of tools. Within the single-strategy approach taken by Glöckner and colleagues, the generality is captured by the individual tool. The approach taken by Dougherty and colleagues is a compromise between these two positions, wherein the decision maker possesses a limited number of general purpose tools.

While it is easy to point out the distinction between the various multiple and single-strategy approaches outlined above, they are not necessarily at odds with one another. For example, specific heuristic mechanisms (as postulated by Gigerenzer and colleagues, for example) might be best conceptualized as being embedded within the context of a small set of more general processes (as postulated by Dougherty and colleagues), which oversees resource allocation, strategy selection, and option generation (amongst other things). Such a hierarchical system would be analogous to contemporary models of working memory, which assume that the working memory system consists of a collection of more specific processes (set shifting, updating, inhibition, maintenance, search, etc.; Baddeley, 2003; Friedman & Miyake, 2004; Unsworth & Engle, 2007).

Contemporary cognitive architectures provide corresponding hierarchical systems. For instance, as mentioned above, ACT-R consists of a small number of modules that are coordinated through a production system. Likewise, HyGene is conceived of as a cognitive architecture consisting of a small set of general processes. In principle, both of these architectures enable one to model the processes underlying heuristic mechanisms, including the recognition heuristic, within a single, overarching quantitative theory. Within the context of these hierarchical models, the distinction between the adjustable-wrench and tool-kit metaphors largely dissolves, as they are merely different levels of the same hierarchy.

In short, in developing models of strategy selection for the recognition heuristic and other mechanisms of decision-making, it may be useful to consider how individual strategies are related to more general information processing mechanisms. This would enable researchers to predict when the recognition processes will be employed and when other decision mechanisms will come into play. As we will argue next, such predictions are important to test the recognition heuristic.

5  Challenge 4: Develop multiple methods for examining recognition use in decision making: Towards tests of competing models

There are two potential reasons why a decision strategy fails to predict behavior. One is that people choose not to use the strategy in a particular situation. A second is that the strategy is generally not a good model of behavior. The question of when people choose to use the recognition heuristic or any other mechanism of inference is thus directly related to the need to develop methods for evaluating how good the recognition heuristic is as a model of behavior (see Marewski, Schooler, et al., 2010, for a discussion).

Past evaluations of the recognition heuristic’s descriptive adequacy have largely focused on one central assumption of this model, namely that further knowledge about alternatives’ attributes (i.e., about cues) is not integrated into inference; rather, decisions are based solely on recognition. Different approaches have been taken to test this assumption. These can be roughly categorized into evaluations in absolute terms and evaluations in relative terms.

Past evaluations in absolute terms: Accordance rates. The most common approach entails reporting the proportion of inferences that are made consistent with the recognition decision, often referred to as the accordance or adherence rate (Bröder & Eichler, 2006; Goldstein & Gigerenzer, 2002; Hilbig & Pohl, 2008; B. R. Newell & Fernandez, 2006; B. R. Newell & Shanks, 2004; Oppenheimer, 2003; Pachur, Bröder, & Marewski, 2008; Pohl, 2006). To illustrate this point, Richter and Späth (2006) ran a series of studies and—observing that a smaller proportion of inferences were consistent with the recognition heuristic when knowledge that contradicted recognition was available—concluded that there was no evidence for the decision processes assumed by the recognition heuristic. Similar absolute evaluations of the recognition heuristic based on such accordance rates have also been made by many others, including Goldstein and Gigerenzer (2002), who, however, came to the opposite conclusion from Richter and Späth, suggesting that further knowledge about alternatives’ attributes is not integrated into the inference.
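Computing an accordance rate is straightforward; the sketch below follows the usual convention of scoring only the trials on which exactly one option is recognized, since the heuristic makes no prediction otherwise. The trial format and city names are hypothetical.

```python
def accordance_rate(trials, recognized):
    """Proportion of applicable trials (exactly one option recognized) on
    which the chosen option is the recognized one. Each trial is a tuple
    (option_a, option_b, chosen)."""
    applicable = accordant = 0
    for a, b, chosen in trials:
        a_rec, b_rec = a in recognized, b in recognized
        if a_rec == b_rec:
            continue  # both or neither recognized: no prediction made
        applicable += 1
        if (chosen == a and a_rec) or (chosen == b and b_rec):
            accordant += 1
    return accordant / applicable if applicable else float("nan")

trials = [("Munich", "Hagen", "Munich"),   # chose the recognized city
          ("Essen", "Berlin", "Essen"),    # chose against recognition
          ("Ulm", "Hagen", "Ulm")]         # neither recognized: excluded
print(accordance_rate(trials, {"Munich", "Berlin"}))
```

Even this toy example shows why accordance rates are ambiguous as evidence: the second trial lowers the rate, but the rate alone cannot say whether the participant abandoned the heuristic or never used it.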

Past evaluations in absolute terms: Measurement tools. Besides the proportion of inferences consistent with the recognition heuristic, there are a number of other absolute measures available, including d′ and related indices based on signal detection theory (Pachur & Hertwig, 2006; Pachur et al., 2009), the discrimination index (Hilbig & Pohl, 2008), and a multinomial processing tree model (Hilbig, Erdfelder, & Pohl, 2010). To illustrate this point, the multinomial processing tree model attempts to separate out inferences based solely on recognition from those in which people may rely on other information in addition to recognition, thereby providing an indicator of what one may deem recognition heuristic use. Based on this indicator, Hilbig et al. argued that the rate of recognition heuristic usage tends to be overestimated by the proportion of people making decisions consistent with the recognition decision.2

Past evaluations in relative terms: Formal comparisons of competing models. Only a few studies (i.e., Marewski, Gaissmaier et al., 2009, 2010; Pachur & Biele, 2007; see also Bröder & Glöckner, 2011, for decisions in a non-memory-based paradigm) have evaluated the recognition heuristic in relative terms, which entails comparing the match between the recognition heuristic’s predictions and observed data to the match between other models’ predictions and observed data. As has been argued repeatedly in the model selection literature, formal model comparisons establish yardsticks for evaluating the descriptive adequacy of competing models, with the models serving as each other’s benchmarks in model evaluation (on some of the merits and complications of modeling, see Fum, Del Missier, & Stocco, 2007; Hintzman, 1991; Lewandowsky, 1993; Marewski & Olsson, 2009; Pitt, Myung, & Zhang, 2002). When just one model is tested—and thus evaluated in absolute terms—a seemingly large discrepancy between the model’s predictions and the observed data might lead a researcher to reject that model. In contrast, with a comparison, the researcher may find that all models suffer, enabling her to find out which model suffers least. Moreover, with a comparison a researcher does not have to specify a threshold (or other criterion) in order to decide what counts as good or bad performance of a model, as has been done (implicitly or explicitly) in the absolute evaluations of the heuristic conducted hitherto.
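A minimal comparative evaluation can be sketched by scoring competing models against the same observed choices. Everything below is our assumption: the two toy models, the data, and the scoring convention that trials on which a model makes no prediction count against it (one of several defensible conventions).

```python
def predictive_accuracy(model, trials):
    """Share of trials on which a model's predicted choice matches the
    observed choice; a None prediction (model not applicable) scores 0."""
    hits = sum(model(a, b) == chosen for a, b, chosen in trials)
    return hits / len(trials)

# Two toy competitors, sharing one hypothetical participant's data.
recognized = {"Munich", "Berlin"}
knowledge = {"Munich": 3, "Berlin": 5, "Essen": 6, "Hagen": 1}

def rh_model(a, b):
    a_rec, b_rec = a in recognized, b in recognized
    return a if a_rec and not b_rec else b if b_rec and not a_rec else None

def knowledge_model(a, b):
    return a if knowledge.get(a, 0) >= knowledge.get(b, 0) else b

trials = [("Munich", "Hagen", "Munich"),
          ("Essen", "Berlin", "Essen"),
          ("Ulm", "Hagen", "Ulm")]

for name, model in [("recognition", rh_model), ("knowledge", knowledge_model)]:
    print(name, predictive_accuracy(model, trials))
```

The comparison, rather than either accuracy figure on its own, is what licenses a conclusion: here the knowledge model fits this (fabricated) participant better, whereas an absolute test of the recognition heuristic alone would only say that it fits poorly.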

In terms of theory testing, both absolute and relative model evaluations can be useful, but at different times in the evolution of a theory. Early on, absolute model testing may be useful for establishing, rejecting, or motivating the modification of a theory. However, once alternative theories have been developed, absolute model testing should ideally give way to comparative model evaluation. Given developments on recognition-based inference over the past several years, the time is ripe to begin more vigorous comparative model evaluation, in which various instantiations of the recognition heuristic and competing models are pitted against one another, rather than tested in isolation. This way, researchers will not only (a) learn how well the recognition heuristic really predicts behavior in comparison to alternative models, but also have (b) the chance of identifying a model that accounts for behavior better than the recognition heuristic, as well as (c) the chance of building a unifying model that can account for both inference processes based on mere recognition and inference processes based on knowledge, recognition, and/or other information. We also argue that such comparisons should use multiple approaches to model evaluation, perhaps not limited to those discussed above.

For example, another potentially useful methodology that has not yet been employed is state-trace analysis (Bamber, 1979). State-trace analysis permits the researcher to test hypotheses about whether the underlying cognitive processes are best described by a single- versus a multi-process model. This analysis would enable researchers to test whether a single process, such as recognition, best accounts for recognition-based inference or whether a dual process involving recognition and knowledge best accounts for the data. For example, state-trace analysis has been applied to meta-cognitive judgments, such as remember-know judgments (Dunn, 2008). In remember-know judgments, participants are asked to judge whether they recognize a word from an earlier study phase because they explicitly remember seeing the word (i.e., they have a specific recollection of the item) or because they know the word was presented on account of its familiarity (i.e., absent recollection, one may have a feeling of knowing based on high familiarity). Dunn (2008) applied state-trace analysis to these judgments and found that a single-process model of “strength of evidence” explained the remember-know task better than a dual-process model of recollection and familiarity.

Similarly, state-trace analysis may be applied to recognition-based inference to assess whether one process (recognition) or two processes (recognition and knowledge) best explain the data. While state-trace analysis does not allow one to assess the relative contributions of multiple processes (in this case, recognition and knowledge), it may provide a useful framework for determining whether the data are best represented by a single- or a dual-process model, which may be a useful step in the development of formal models of decisions from recognition and knowledge.
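The core logic of state-trace analysis can be sketched in a few lines of Python. The data and function name below are hypothetical, and the simple monotonicity check is a deliberately stripped-down stand-in: if one latent process drives two dependent measures, the points relating those measures across conditions must fall on a single monotone curve; a non-monotonic trace suggests more than one process. Published applications use formal statistical tests rather than this toy criterion.

```python
# Minimal sketch of the logic behind state-trace analysis (Bamber, 1979):
# if a single latent process drives two dependent measures, the (x, y)
# points across conditions must lie on one monotone curve.

def is_monotonic_trace(points):
    """Return True if the state-trace plot is consistent with a
    single-process account, i.e., y is monotone nondecreasing in x."""
    pts = sorted(points)            # order conditions by the first measure
    ys = [y for _, y in pts]
    return all(a <= b for a, b in zip(ys, ys[1:]))

# Hypothetical condition means for, say, a recognition measure (x) and an
# inference measure (y):
single = [(0.2, 0.3), (0.5, 0.6), (0.8, 0.9)]   # monotone -> one process
dual   = [(0.2, 0.7), (0.5, 0.4), (0.8, 0.9)]   # non-monotone -> two processes

print(is_monotonic_trace(single))  # True
print(is_monotonic_trace(dual))    # False
```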

6  Call for a research strategy shift

Research on the recognition heuristic has grown steadily over the last 15 years. However, is the recognition heuristic, and more generally, are recognition-based decision processes, really as simple as they may seem? In detailing the four challenges in this paper, our original goal was to provide a roadmap of sorts for researchers interested in pursuing work on the recognition heuristic. However, while writing this article, we came to the view that truly transformative work on the recognition heuristic, and on other judgment and decision making phenomena for that matter, will require addressing a number of very complex problems. While the recognition heuristic is intuitively simple as an explanation, its implementation in any given context requires the coordination of a number of more complex underlying cognitive (and indeed neural) mechanisms. For example, as we have pointed out, inasmuch as the recognition heuristic is based on underlying memory processes, models of the recognition heuristic ought to stay true to the fundamental properties of memory. That is, any model of recognition-based inference should retain its ability to account for recognition memory phenomena.

So how should researchers proceed in addressing our challenges? There are two possibilities. On the one hand, researchers could proceed as they have over the past 15 years: by carving out a specific research problem (i.e., challenge) and tackling it in isolation; by building and testing verbal hypotheses; and by theorizing in terms of simple dichotomies, such as whether the memory processes underlying recognition judgments are continuous or binary (e.g., B. R. Newell & Fernandez, 2006), whether recognition is used with or without combining it with additional knowledge (e.g., Richter & Späth, 2006), or whether decision making is best described by a repertoire of decision strategies or by a single all-purpose mechanism (e.g., B. R. Newell, 2005).

Alternatively, researchers could strive to meet all of our challenges in concert, by developing models that transcend each challenge and connect them through a unified theoretical framework—a path advocated by the present authors and others (A. Newell, 1973; Nikolić, 2009). It is not sufficient to meet one, two, or even all four of the challenges in isolation, for a truly cumulative account of judgment and decision making requires that all of the pieces of the puzzle be connected and firmly in place.

References

Alba, J. W., & Chattopadhyay, A. (1985). Effects of context and part-category cues on recall of competing brands. Journal of Marketing Research, 22, 340–349.

Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111, 1036–1060.

Anderson, J. R., Bothell, D., Lebiere, C., & Matessa, M. (1998). An integrated theory of list memory. Journal of Memory and Language, 38, 341–380.

Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4, 829–839.

Bamber, D. (1979). State-trace analysis: A method of testing simple theories of causation. Journal of Mathematical Psychology, 19, 137–181.

Batchelder, W. H., & Riefer, D. M. (1990). Multinomial processing models of source monitoring. Psychological Review, 97, 548–564.

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.

Borges, B., Goldstein, D. G., Ortmann, A., & Gigerenzer, G. (1999). Can ignorance beat the stock market? In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 59–72). New York: Oxford University Press.

Bredenkamp, J., & Erdfelder, E. (1996). Methoden der Gedächtnispsychologie [Methods of the psychology of memory]. In D. Albert & K.-H. Stapf (Eds.), Gedächtnis (Enzyklopädie der Psychologie, Themenbereich C, Serie II, Vol. 4, pp. 1–94). Göttingen: Hogrefe.

Brighton, H. (2006). Robust inference with simple cognitive models. In C. Lebiere & B. Wray (Eds.), Between a rock and a hard place: Cognitive science principles meet AI-hard problems. Papers from the AAAI Spring Symposium (AAAI Tech. Rep. No. SS-06–03, pp. 17–22). Menlo Park, CA: AAAI Press.

Bröder, A., & Eichler, A. (2006). The use of recognition information and additional cues in inferences from memory. Acta Psychologica, 121, 275–284.

Busemeyer, J. R., & Myung, I. J. (1992). An adaptive approach to human decision making: Learning theory, decision theory, and human performance. Journal of Experimental Psychology: General, 121, 177–184.

Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart (pp. 97–118). New York: Oxford University Press.

Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of Probabilistic Mental Models and the Fast and Frugal Heuristics. Psychological Review, 115, 199–213.

Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106, 180–209.

Dougherty, M. R. P. & Hunter, J. E. (2003). Probability judgment and subadditivity: The role of working memory capacity and constrained retrieval. Memory & Cognition, 31, 968–982.

Dougherty, M. R., & Sprenger, A. (2006). The influence of improper sets of information on judgment: How irrelevant information can bias judged probability. Journal of Experimental Psychology: General, 135, 262–281.

Dougherty, M. R., Thomas, R. P., & Lange, N. (2010). Toward an integrative theory of hypothesis generation, probability judgment, and hypothesis testing. The Psychology of Learning and Motivation, Volume 52, Academic Press.

Dunn, J. C., (2008). The dimensionality of the remember-know task: A state-trace analysis. Psychological Review, 115, 426–446.

Einhorn, H. J. (1970). The use of nonlinear, noncompensatory models in decision making. Psychological Bulletin, 73, 221–230.

Eisenberg, P., & Becker, C. A. (1982). Semantic context effects in visual word recognition, sentence processing, and reading: Evidence for semantic strategies. Journal of Experimental Psychology: Human Perception and Performance, 8, 739–756.

Erdfelder, E., Küpper-Tetzel, C. E., & Mattern, S. D. (2011). Threshold models of recognition and the recognition heuristic. Judgment and Decision Making, 6, 7–22.

Erdfelder, E., Auer, T.-S., Hilbig, B. E., Aßfalg, A., Moshagen, M., & Nadarevic, L. (2009). Multinomial processing tree models: A review of the literature. Zeitschrift für Psychologie / Journal of Psychology, 217, 108–124.

Erev, I., & Roth, A. E. (2001). Simple reinforcement learning models and reciprocation in the prisoner’s dilemma game. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 215–231). Cambridge, MA: MIT Press.

Fishburn, P. C. (1980). Lexicographic additive differences. Journal of Mathematical Psychology, 21, 191–218.

Friedman, N. P., & Miyake, A. (2004). The relations among inhibition and interference control functions: A latent variable analysis. Journal of Experimental Psychology: General, 133, 101–135.

Frosch, C. A., Beaman, C. P., & McCloy, R. (2007). A little learning is a dangerous thing: An experimental demonstration of recognition-driven inference. The Quarterly Journal of Experimental Psychology, 60, 1329–1336.

Fum, D., Del Missier, F., & Stocco, A. (2007). The cognitive modeling of human behavior: Why a model is (sometimes) better than 10,000 words. Cognitive Systems Research, 8, 135–142.

Garry, M., Manning, C., Loftus, E., & Sherman, S. (1996). Imagination inflation: Imagining a childhood event inflates confidence that it occurred. Psychonomic Bulletin & Review, 3(2), 208–214.

Gettys, C. F., & Fisher, S. (1979). Hypothesis plausibility and hypothesis generation. Organizational Behavior and Human Decision Processes, 24, 93–110.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 103, 592–596.

Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1, 107–143.

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650–669.

Gigerenzer, G., Hoffrage, U., & Goldstein, D. G. (2008). Fast and frugal heuristics are plausible models of cognition: Reply to Dougherty, Franco-Watkins, & Thomas (2008). Psychological Review, 115, 230–239.

Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York, NY: Oxford University Press.

Glöckner, A., & Bröder, A. (2011). Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time. Judgment and Decision Making, 6, 23–42.

Glöckner, A., Betsch, T., & Schindler, N. (2010). Coherence shifts in probabilistic inference tasks. Journal of Behavioral Decision Making, 23, 439–462.

Goldstein, D. G., & Gigerenzer, G. (1999). The recognition heuristic: How ignorance makes us smart. In G. Gigerenzer, & P. M. Todd, (Eds.) Simple heuristics that make us smart. Oxford: Oxford University Press.

Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.

Hauser, J. R., & Wernerfelt, B. (1990). An evaluation cost model of consideration sets. The Journal of Consumer Research, 16, 393–408.

Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model of how the mind exploits a by-product of information retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1191–1206.

Hilbig, B. E. & Pohl, R. F. (2008). Recognizing users of the recognition heuristic. Experimental Psychology, 55, 394–401.

Hilbig, B. E., Erdfelder, E., & Pohl, R. F. (2010). One-reason decision making unveiled: A measurement model of the recognition heuristic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 123–134.

Hintzman, D. L. (1990). Human learning and memory: Connections and dissociations. Annual Review of Psychology, 41, 109–139.

Howard, J. A., & Sheth, J. N. (1969). The theory of buyer behavior. New York: John Wiley.

Hintzman, D. L. (1991). Why are formal models useful in psychology? In W. E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennet B. Murdock (pp. 39–56). Hillsdale, NJ: Erlbaum.

Johnson, M., Hashtroudi, S., & Lindsay, D. (1993). Source monitoring. Psychological Bulletin, 114(1), 3–28.

Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A ’lazy’ algorithm for probabilistic inference from generic knowledge. Cognitive Science: A Multidisciplinary Journal, 26(5), 563–607.

Koriat, A., Goldsmith, M., & Pansky, A. (2000). Toward a psychology of memory accuracy. Annual Review of Psychology, 51, 481–537.

Loftus, E. (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learning & Memory, 12, 361–366.

Lewandowsky, S. (1993). The rewards and hazards of computer simulations. Psychological Science, 4, 236–243.

Lovett, M. C., & Anderson, J. R. (1996). History of success and current context in problem solving: Combined influences on operator selection. Cognitive Psychology, 31, 168–217.

Marewski, J. N. (2010). On the theoretical precision, and strategy selection problem of a single-strategy approach: A comment on Glöckner, Betsch, and Schindler. Journal of Behavioral Decision Making, 23, 463–467.

Marewski, J. N., & Olsson, H. (2009). Beyond the null ritual: Formal modeling of psychological processes. Zeitschrift für Psychologie / Journal of Psychology, 217, 49–60.

Marewski, J. N., Gaissmaier, W., & Gigerenzer, G. (2010a). Good judgments do not require complex cognition. Cognitive Processing, 11, 103–121.

Marewski, J. N., Gaissmaier, W., & Gigerenzer, G. (2010b). We favor formal models of heuristics rather than loose lists of dichotomies: A reply to Evans and Over (2009). Cognitive Processing, 11, 177–179.

Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2009). Do voters use episodic knowledge to rely on recognition? In N. A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 2232–2237). Austin, TX: Cognitive Science Society.

Marewski, J. N., & Schooler, L. J. (2010). Cognitive niches: An ecological model of emergent strategy selection. Manuscript submitted for publication.

Marewski, J. N., Schooler, L. J., & Gigerenzer, G. (2010). Five principles for studying people’s use of heuristics. Acta Psychologica Sinica, 1, 72–87.

Marewski, J. N., Gaissmaier, W., Schooler, L. J., Goldstein, D. G., & Gigerenzer, G. (2010). From recognition to decisions: Extending and testing recognition-based models for multi-alternative inference. Psychonomic Bulletin and Review, 17, 287–309.

Newell, A. (1973). You Can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing (pp. 283–310). Academic Press: New York.

Newell, B. R. (2005). Re-visions of rationality. Trends in Cognitive Sciences, 9, 11–15.

Newell, B. R., & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 333–346.

Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 923–935.

Nikolić, D. (2009). Model this! Seven empirical phenomena missing in the models of cortical oscillatory dynamics. Proceedings of the International Joint Conference on Neural Networks, IJCNN 2009.

Oppenheimer, D. M. (2003). Not so fast! (and not so frugal!): Rethinking the recognition heuristic. Cognition, 90, B1–B9.

Pachur, T. (2010). Recognition-based inference: When is less more in the real world? Psychonomic Bulletin and Review, 17, 589–598.

Pachur, T., & Biele, G. (2007). Forecasting from ignorance: The use and usefulness of recognition in lay predictions of sports events. Acta Psychologica, 125, 99–116.

Pachur, T., & Hertwig, R. (2006). On the psychology of the recognition heuristic: Retrieval primacy as a key determinant of its use. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 983–1002.

Pachur, T., Bröder, A., & Marewski, J. (2008). The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making, 21, 183–210.

Pachur, T., Mata, R., & Schooler, L. J. (2009). Cognitive aging and the use of recognition in decision making. Psychology and Aging, 24, 901–915.

Payne, J. W. (1976). Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior & Human Performance, 16, 366–387.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York, NY: Cambridge University Press.

Pitt, M. A., Myung, I. J., & Zhang, S. (2002). Toward a method for selecting among computational models for cognition. Psychological Review, 109, 472–491.

Pleskac, T. J. (2007). A signal detection analysis of the recognition heuristic. Psychonomic Bulletin & Review, 14, 379–391.

Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19, 251–271.

Rapoport, A., & Wallsten, T. S. (1972). Individual decision behavior. Annual Review of Psychology, 23, 131–176.

Reder, L. M. (1987). Strategy selection in question answering. Cognitive Psychology, 19, 90–138.

Reyna, V., Lloyd, F., & Brainerd, C. (2003). Memory, development, and rationality: An integrative theory of judgment and decision making. In Emerging perspectives on judgment and decision research (pp. 201–245). New York, NY: Cambridge University Press.

Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 150–162.

Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies. Journal of Experimental Psychology: General, 135, 207–236.

Roediger, H., & McDermott, K. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803–814.

Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological Review, 112, 610–628.

Serwe, S., & Frings, C. (2006). Who will win Wimbledon? The recognition heuristic in predicting sports events. Journal of Behavioral Decision Making, 19, 321–332.

Siegler, R. S. (1988). Strategy choice procedures and the development of multiplication skill. Journal of Experimental Psychology: General, 117, 258–275.

Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.

Snook, B., & Cullen, R. M. (2006). Recognizing national hockey league greatness with an ignorance-based heuristic. Canadian Journal of Psychology, 60, 33–43.

Stahl, D. O. (1999). Evidence based rule learning in symmetric normal-form games. International Journal of Game Theory, 28, 111–130.

Thomas, R. P., Dougherty, M. R., Sprenger, A., & Harbison, J. I. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115, 155–185.

Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281–299.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

Unsworth, N., & Engle, R. W. (2007). The nature of individual differences in working memory capacity: Active maintenance in primary memory and controlled search from secondary memory. Psychological Review, 114, 104–132.

Volz, K. G., Schooler, L. J., Schubotz, R. I., Raab, M., Gigerenzer, G., & Cramon, D. Y. von. (2006). Why you think Milan is larger than Modena: Neural correlates of the recognition heuristic. Journal of Cognitive Neuroscience, 18, 1924–1936.

Wixted, J. T. (2007). Dual-process theory and signal-detection theory of recognition memory. Psychological Review, 114, 152–176.


* Department of Psychology; University of Maryland; College Park MD, 20742.
# Max Planck Institute for Human Development; Lentzeallee 94; 14195 Berlin, Germany, and IESE Business School, Barcelona, Spain. Email: marewski@mpib-berlin.mpg.de.
% Department of Psychology; University of Maryland; College Park MD, 20742. Email: mdougherty@psyc.umd.edu.
All authors contributed equally to this work, and order of authorship is arbitrary. The positions represented in this article represent a compromise between the authors’ various positions. Please address inquiries concerning this article to either JM (marewski@mpib-berlin.mpg.de) or MD (mdougherty@psyc.umd.edu).
1 A common assumption in the recognition and decision making literatures is that retrieving an option (e.g., a jam’s name) entails recognizing the name (e.g., Anderson, Bothell, Lebiere, & Matessa, 1998; Schooler & Hertwig, 2005).
2 In contrast to the proportion of inferences consistent with the recognition decision (i.e., the accordance or adherence rate), the measurement tools are not free of assumptions. For instance, the multinomial processing tree model makes assumptions about how many correct inferences (e.g., about the size of cities) a person should be able to make when ignoring further knowledge and using the recognition heuristic. Similarly, the discrimination index assumes that people using the recognition heuristic will not be able to distinguish whether an inference (e.g., about the size of cities) will be correct or wrong. Using these measurement tools entails buying into these assumptions. Yet, these assumptions do not directly follow from the recognition heuristic, and may in fact turn out to be untenable when implementing this heuristic in models of memory, and/or when developing a theory of strategy selection for it, as we have called for in our second and third challenges, respectively. To illustrate this, it has been argued that the strength of the underlying recognition signal may guide people’s reliance on the recognition heuristic, enabling people to rely more often on the heuristic when using this heuristic is also likely to help them make a correct inference (Marewski, Gaissmaier, et al., 2010).
