Judgment and Decision Making, vol. 3, no. 3, March 2008, pp. 215-228

Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making

Andreas Glöckner*
Max Planck Institute for Research on Collective Goods
  Tilmann Betsch
University of Erfurt

We claim that understanding human decisions requires that both automatic and deliberate processes be considered. First, we sketch the qualitative differences between two hypothetical processing systems, an automatic and a deliberate system. Second, we show the potential that connectionism offers for modeling processes of decision making and discuss some empirical evidence. Specifically, we posit that the integration of information and the application of a selection rule are governed by the automatic system. The deliberate system is assumed to be responsible for information search, inferences and the modification of the network that the automatic processes act on. Third, we critically evaluate the multiple-strategy approach to decision making. We introduce the basic assumption of an integrative approach stating that individuals apply an all-purpose rule for decisions but use different strategies for information search. Fourth, we develop a connectionist framework that explains the interaction between automatic and deliberate processes and is able to account for choices both at the option and at the strategy level.


Keywords: System 1, Intuition, Reasoning, Control, Routines, Connectionist Model, Parallel Constraint Satisfaction

1  Automatic and deliberate processes in decision making

Many cognitive operations function without or even in opposition to deliberate control. Textbooks in psychology provide a plethora of empirical findings giving evidence for the power of the automatic system. Prominent examples are perceptual illusions of object size (e.g., moon illusion), interferences between mental tasks (e.g., the Stroop effect) or counter-intentional behavior (e.g., relapse errors). Aside from these negative effects, we have to appreciate that a great deal of adaptive learning would be impossible without the service of the automatic system. Organisms automatically record fundamental aspects of the empirical world such as the frequency (e.g., Hasher & Zacks, 1984; Sedlmeier & Betsch, 2002) and the value of events (e.g., Betsch, Plessner et al., 2001). Implicit knowledge of these variables systematically informs subsequent behavior. The vast literature on animal choice (e.g., Davis et al., 1993) suggests that behavioral decisions can be made even by species that are probably not capable of making rational, reasoned or planned decisions.

The automatic mode of information processing is usually contrasted with a deliberate mode. Kahneman and Frederick (2002) have summarized several dual-process models in a general two-systems framework (for a classic dual processing approach see Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977; for a different perspective see Hammond, Hamm, Grassia, & Pearson, 1987). System 1 is based on intuitive, automatic processing. Information is processed rather rapidly and in parallel; processing is associative, effortless and opaque to the decision maker. In contrast, system 2 is based on reflective, deliberate processing in which information is processed in a controlled fashion and step-by-step. Processing involves deductive reasoning and is effortful (see Betsch, 2007, for a discussion).

By virtue of its historical embedding, decision making has been widely considered a matter of reason and control (Dawes, 1998) and, thus, neglected automatic processes for a long time. A rational decision maker — the homo oeconomicus — consciously anticipates consequences, evaluates risks and values, and eventually decides after a careful analysis of expected utility. These deliberate operations are costly because they consume cognitive and task-related resources (e.g., time, money). Herbert Simon was among the first to identify the boundaries of the deliberate system. He doubted whether humans are able to perform the complex operations of evaluation and information integration prescribed by the rational model of choice (Simon, 1955; 1982). According to his bounded rationality approach, decision makers use simple strategies that reduce the amount of information and the number of cognitive operations. Following Simon’s work, psychologists identified a number of such strategies or heuristics that provide shortcuts to deliberation (e.g., Beach & Mitchell, 1978; Gigerenzer, Todd, & the ABC Research Group, 1999; Gigerenzer, 2004; Payne, 1982; Payne, Bettman, & Johnson, 1993). Amongst the many rules, one can find very simple strategies such as the lexicographic rule (LEX, Fishburn, 1974) that only considers information on the most important attribute. There are also more complex ones, such as the equal-weight strategy (EQW, e.g., Payne, Bettman & Johnson, 1988) that integrates values within options but neglects differences in importance or probability (see below for a more thorough discussion).

The majority of decision strategies described in the literature involve conscious consideration of given information. As such they are deliberate heuristics and are silent about the potentials of the automatic system (Frederick, 2002). Over half a century ago, however, Herbert Simon already anticipated its powers: “My first empirical proposition is that there is a complete lack of evidence that, in actual choice situations of any complexity, these [rational] computations can be, or are in fact, performed … but we cannot, of course, rule out the possibility that the unconscious is a better decision maker than the conscious.” (Simon, 1955, p. 104; italics added).

Yet, it took the field of decision research a couple of decades until automatic processes were systematically considered at the theoretical level. This development coincided with the increasing interest that researchers devoted to memory processes in decision making (Weber, Goldstein & Busemeyer, 1991). Accordingly, a number of different models emerged assuming that automatic processes of recognition, affect generation and activation of prior knowledge play a central role in behavioral choice (Damasio, 1994; Dougherty et al., 1999; Haidt, 2001; Hogarth, 2001; Klein, 1993, 1999; Lieberman, 2000; Slovic et al., 2002). For example, decisions by experienced actors may often be based on recognition of a situation and identification of learned behavioral rules (Klein, 1999). These processes are primarily performed by the automatic system and involve quick and simultaneous consideration of multiple pieces of information. Memory processes are also involved in affect-based decision making (e.g., Slovic et al., 2002). Via feedback learning, behavioral options and their consequences can be associated in memory with affective responses. As a consequence, encountering the behavior in the future entails spontaneous activation of affective reactions reflecting past experience. As such, decisions made by relying on affective responses are also guided by automatic processes, at least during the initial steps when affective reactions are generated vis-à-vis the given information.

The automatic system may not only guide operations of recognition, affect generation and activation of behavioral knowledge from memory. It may also direct subsequent processes that pertain to information integration and choice. Consistency maximizing models assume that not only memory operations but also processes of information integration and choice may be performed by the automatic system (Betsch, 2005; Glöckner, 2006; Holyoak & Simon, 1999; Simon & Holyoak, 2002; Simon, 2004). The model to be presented below advances the consistency maximizing approach within a connectionist framework. We first outline the basic idea behind this approach and thereafter delineate our computational model.

2  A connectionist approach to decision making

One of the basic ideas of Gestalt psychology (e.g., Köhler, 1947) is that the cognitive system tends automatically to minimize inconsistency between given pieces of information in order to make sense of the world and to form consistent mental representations (“Gestalten”). By holistic processing, a preferred interpretation of a constellation of information is automatically identified, and information is modified to fit this interpretation (cf. Read et al., 1997). Prominent demonstrations of these mechanisms are images with changing figure/ground relationships like the “Rubinian vase” (Rubin, 1915/1921), in which automatic consistency maximizing processes produce either the perception of a vase or the perception of two faces. Note that both of these qualitatively different conscious perceptions are based on — obviously automatically produced — interpretations of the same objective information. The subjective interpretation of each piece of information is actively modified to fit the former or the latter consistent mental representation. Consistency maximizing theories also have a long tradition in social psychology (Heider, 1946; Festinger, 1957; for an overview see Simon & Holyoak, 2002), and their predictions concerning social cognition and interaction have been supported by ample evidence (e.g., Wicklund & Brehm, 1976).

With the introduction of connectionist theories, in particular parallel constraint satisfaction network models (e.g., Rumelhart & McClelland, 1986; Thagard, 1989), it became possible to extend the idea of consistency maximizing from simple dyadic or triadic constellations to more complex constellations of information (Read et al., 1997). Such complex constellations can be represented in symbolic networks (i.e., networks in which meaning is not completely distributed among nodes). An iterative updating algorithm can be used to simulate consistency maximizing by spreading activation. Such parallel constraint satisfaction (PCS) network models have been successfully applied to explain processes of letter and word perception (McClelland & Rumelhart, 1981), social perception (Read & Miller, 1998), analogical mapping (Holyoak & Thagard, 1989), the evaluation of explanations (Thagard, 1989), dissonance reduction (Shultz & Lepper, 1996), impression formation (Kunda & Thagard, 1996), the selection of plans (Thagard & Millgram, 1995), legal decision making (Holyoak & Simon, 1999; Simon, 2004), preferential choice (Simon, Krawczyk, & Holyoak, 2004) and probabilistic decisions (Glöckner, 2006; Glöckner, 2007; Glöckner & Betsch, submitted).

The potentials of the connectionist approach for modeling decisions have been repeatedly highlighted. Thagard and colleagues have demonstrated convincingly that a strategic selection of plans (Thagard & Millgram, 1995), as well as jury decisions (Thagard, 2003), could be plausibly simulated by employing a parallel constraint satisfaction (PCS) network. Furthermore, Holyoak, Simon and colleagues (Holyoak & Simon, 1999; Simon, Krawczyk, & Holyoak, 2004; Simon, Snow, & Read, 2004) showed that individuals tend to increase coherence even while the decision is made. Note that such coherence shifts cannot be explained by either rational choice models or simple heuristics, which share the assumption that stimulus information remains stable during subsequent decision processes once it is represented in the mind (Brownstein, 2003; Glöckner, Betsch, & Schindler, submitted). Dan Simon (2004) summarized his findings concerning the consistency maximizing mechanism in (legal) decision making as follows: (1) with the emergence of the decision task, the mental representation of the task shifts towards a state of internal consistency (coherence shifts): the information that supports the emerging decision is accepted, and the information that supports the alternative is devalued or ignored; (2) people are not aware of these coherence shifts, and the ensuing decision is “experienced as rationally warranted by the inherent values of the variables, rather than by an inflated perception imposed by the cognitive system” (Simon, 2004, p. 545); (3) these coherence shifts, which are caused by consistency maximizing processes, “play an operative role in the decision process” (p. 546); (4) consistency maximizing processes influence information directly involved in the decision, as well as beliefs and background knowledge; (5) changes in one aspect of the mental model may trigger changes in other information throughout the model because pieces of information are interdependent; (6) motivation and attitudes can influence the direction of coherence shifts; (7) coherence shifts caused by consistency maximizing processes are of a transitory nature since they are produced to solve the decision task at hand, but usually disappear after a certain time; (8) deliberate instructions to consider the opposite position reduce the size of coherence shifts.

Taken as a whole, these findings support the notion that automatic consistency maximizing processes are a general mechanism in human cognition that help people make sense of information by actively structuring it. As reported by Simon (2004), people are not aware of the underlying processes, but they are certainly aware of the results, namely, the resulting consistent mental representation. In line with this work, we propose that consistency maximizing processes play an operative role in decision making and are not only an epiphenomenon or post-decisional rationalization (Simon & Holyoak, 2002).

We go one step further and state that automatic consistency maximizing processes are the core information integration process in decision making, and assume that a sufficient level of consistency is a precondition for terminating the decision process. A consistent representation can be reached mainly by modifying information so that one option clearly dominates the others (for similar approaches, see Montgomery, 1989; Svenson, 1992). Processes of consistency maximizing always automatically operate to foster such consistent mental representations. Because they are automatic, they cannot be simply turned off (Bargh & Chartrand, 1999). We argue that the direction of dominance structuring is determined by the initial structure of information. Simply stated, dominance structuring operates in favor of the option which is initially supported by the strongest evidence, and the process automatically accentuates this advantage. It is not necessary that the individual has a (conscious) initial preference. The automatic system determines the best candidate; it accentuates its initial advantages and the individual finally becomes aware of the dominant option (i.e., the one producing the most coherent mental representation in the context of all other pieces of information considered). Such a model could — for instance — explain the finding that even lower animals like sticklebacks select mating partners by integrating trait information in a complex compensatory strategy (Künzler & Bakker, 2001).

In contrast to lower animals, humans have developed the ability to supervise and deliberately affect these automatic processes (Betsch, 2005). Although the computational power of the deliberate system is limited, it is important for providing further information to the network. By modifying the network of considered information, it allows for fast adaptations to changes in the environment. We will consider the interaction of deliberate and automatic processes in the next section. In the remainder of this section, we briefly outline the computational model and close with a short review of empirical evidence.

Specifically, we developed a parallel constraint satisfaction (PCS) network model for probabilistic decision tasks (i.e., decisions based on probability cues). The PCS model proposes that probabilistic decision tasks can be represented in a simple network structure (Figure 1). Cues and options are nodes in the network. Logical relations are represented by inhibitory or excitatory links between these nodes. All links are bidirectional, which means that cues not only facilitate (or inhibit) options, but also vice versa. The strength of the relation between nodes is represented by weights, which can vary from −1.0 to 1.0. Excitatory (inhibitory) links between cues and options represent positive (negative) predictions of cues for options. Strong inhibitory links between options reflect the fact that only one option can be chosen. The general validity node activates the network and has a constant activation of 1. The strengths of the excitatory links between the general validity node and the cues indicate the initial validities of the cues. The spread of activation in the network is simulated by an iterative updating function that maximizes consistency under the given constraints.
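To make this structure concrete, such a network can be written as a symmetric weight matrix. The sketch below uses illustrative parameter values (the cue validities, the ±0.01 cue-option weights and the −0.2 option-option inhibition are assumptions for demonstration, not values fixed by the text):

```python
import numpy as np

# Node order: [general validity, cue1, cue2, cue3, optionA, optionB]
n_nodes = 6
W = np.zeros((n_nodes, n_nodes))

validities = [0.8, 0.6, 0.4]        # assumed initial cue validities
for c, v in enumerate(validities):
    W[0, 1 + c] = v                  # general validity node -> cue

# Assumed cue values: +1 = positive, -1 = negative prediction
cue_values = [(+1, -1),              # cue 1 speaks for A, against B
              (-1, +1),              # cue 2 speaks for B
              (-1, +1)]              # cue 3 speaks for B
for c, (va, vb) in enumerate(cue_values):
    W[1 + c, 4] = 0.01 * va          # cue -> option A
    W[1 + c, 5] = 0.01 * vb          # cue -> option B

W[4, 5] = -0.2                       # options inhibit each other:
                                     # only one option can be chosen
W = W + W.T                          # all links are bidirectional
```

Because the matrix is symmetrized in the last line, activation can later flow from options back to cues as well, which is what allows the network to reinterpret cue information in favor of the emerging winner.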

We use a sigmoid activation function to simulate spreading activation in the network (McClelland & Rumelhart, 1981; Rumelhart & McClelland, 1982; Rumelhart, Hinton, & McClelland, 1986). The algorithm maximizes consistency and, after a certain number of iterations, leads to a balanced state in which activations stop changing. All nodes start at an activation of zero at time t = 0. The activation of all nodes at each following time period t+1 is computed simultaneously by:

     
a_i(t+1) = a_i(t)(1 − decay) + input_i(t)(a_i(t) − floor)      if input_i(t) < 0
a_i(t+1) = a_i(t)(1 − decay) + input_i(t)(ceiling − a_i(t))    if input_i(t) ≥ 0        (1)

in which a_i(t) is the current activation of node i, which is multiplied by a decay factor. The resulting product is increased or decreased by the incoming activation for the node, input_i(t), which is multiplied by a scaling factor. If the incoming activation for the node is negative, the incoming activation is multiplied by the current activation of the node minus the minimum activation value floor. If the incoming activation for the node is positive, it is multiplied by the maximum activation value ceiling minus the current activation of the node. The incoming activation for each node is computed as the weighted sum of the links (i.e., connection weights) between the focus node and any other node multiplied by the activation of the other node:

input_i(t) = Σ_{j=1}^{n} w_ij a_j(t)     (2)

with w_ij being the strength of the link between the focus node i and any connected node j, and a_j(t) being the current activation of node j. In our simulations we use a maximum node activation of ceiling = 1 and a minimum node activation of floor = −1. The decay parameter is usually set to 0.05.

According to the updating function, activations of nodes are modified until a stable solution of the network is found that represents the state of maximized consistency (McClelland & Rumelhart, 1981; Read et al., 1997). In the process the activation level of the nodes that represent options and cues is jointly modified according to the underlying structure of interdependencies. In the stable state, one option will usually dominate the other options and will be highly activated. Cues that support this option will be highly activated, too, whereas cues that oppose this option will have a lower level of activation.
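The settling process described above can be sketched as a minimal runnable simulation, using the parameters stated in the text (decay = 0.05, floor = −1, ceiling = 1) and a small hypothetical network in which a high-validity cue favors option A and a weaker cue favors option B (all network values are illustrative assumptions):

```python
import numpy as np

def pcs_settle(W, decay=0.05, floor=-1.0, ceiling=1.0, n_iter=300):
    """Iterate the updating rule (Eqs. 1 and 2) on a symmetric weight
    matrix W. Node 0 is the general validity node, clamped to an
    activation of 1; all other nodes start at 0."""
    a = np.zeros(W.shape[0])
    a[0] = 1.0
    for _ in range(n_iter):
        inp = W @ a                                    # Eq. 2
        a = a * (1 - decay) + np.where(inp < 0,        # Eq. 1
                                       inp * (a - floor),
                                       inp * (ceiling - a))
        a[0] = 1.0                                     # source stays clamped
    return a

# Hypothetical network, node order: [validity, cueA, cueB, optA, optB]
W = np.zeros((5, 5))
W[0, 1], W[0, 2] = 0.8, 0.3      # assumed cue validities
W[1, 3], W[1, 4] = 0.01, -0.01   # strong cue favors option A
W[2, 3], W[2, 4] = -0.01, 0.01   # weak cue favors option B
W[3, 4] = -0.2                   # mutual inhibition between options
W = W + W.T                      # bidirectional links

a = pcs_settle(W)
# In the settled state, option A is more highly activated than
# option B, and the opposing cue's activation is dampened.
```

Note that the activations of cues and options are updated simultaneously from the previous state, so the dominance of option A emerges gradually over iterations rather than being computed in a single step.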


Figure 1: General parallel constraint satisfaction network for the simulation of probabilistic decisions.

We postulate that the model captures the essential automatic consistency maximizing process in decision making based on probability cues. In a series of studies (Glöckner, 2006; Glöckner & Betsch, 2007), which were designed to test the PCS rule against fast-and-frugal heuristics (Gigerenzer et al., 1999), participants worked on probabilistic decision tasks. In the city-size task, for example, individuals decide which of two cities is larger based on a set of probabilistic cues (e.g., is the city a state capital or not?). The cues are predictive of the decision criterion (i.e., city size). The complexity of the decision tasks was varied within and between studies by using either three or six cues (Figure 2). Information was presented in an open information matrix; no information about cue validity was provided, and participants were instructed to make good decisions and to proceed as quickly as possible. Choices, decision times and, in some studies, confidence judgments were recorded as dependent variables.


[Figure 2 presents three example information matrices: a six-cue task comparing City A and City B on the cues State Capital, University, 1st League Soccer Team, Art Gallery, Airport and Cathedral, and two three-cue tasks comparing Wiesbaden vs. Freiburg and Dresden vs. Leverkusen on the cues State Capital, University and 1st League Soccer. Each cell contains a positive or negative cue value.]

Figure 2: Examples of decision tasks between two cities based on six and three cues (e.g., State Capital). Positive / negative cue values are represented by the symbols + / −.

A maximum likelihood analysis of the individual choice patterns (Bröder & Schiffer, 2003; Wasserman, 2000) and an additional analysis of decision time predictions (cf. Bergert & Nosofsky, 2007) were used to identify choice strategies.1 In experiments with three and six cues (see Glöckner, 2007, for an overview), choice patterns suggested that the majority of participants used a weighted compensatory rule to integrate all cue values instead of a fast-and-frugal heuristic (Gigerenzer et al., 1999) such as Take the Best, Equal Weight or Random Choice. Even in the six-cue decision tasks (Glöckner, 2007, Exp. 2b), the median decision time was below three seconds. Thus, in line with the predictions of the PCS approach, most individuals were able to integrate multiple pieces of information very quickly in a weighted compensatory manner.
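The core of such a maximum likelihood classification can be illustrated with a simplified sketch: each candidate strategy predicts one choice per task, deviations are attributed to a constant application error, and the participant is assigned to the strategy under which the observed choice vector is most likely. The data and strategy predictions below are invented for illustration; the published method (Bröder & Schiffer, 2003) additionally handles strategies that predict guessing:

```python
import math

def log_likelihood(predicted, observed):
    """Log-likelihood of an observed choice vector under a strategy,
    assuming its predictions are executed with a constant error
    probability estimated from the data (simplified sketch of the
    Broder & Schiffer, 2003, classification logic)."""
    n = len(observed)
    errors = sum(p != o for p, o in zip(predicted, observed))
    eps = min(max(errors / n, 1e-6), 1 - 1e-6)  # avoid log(0)
    return (n - errors) * math.log(1 - eps) + errors * math.log(eps)

# Hypothetical choices (0 = option A, 1 = option B) over ten tasks,
# plus the predictions of two candidate strategies:
observed   = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
wadd_preds = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # matches perfectly
ttb_preds  = [1, 1, 0, 1, 1, 0, 0, 0, 1, 1]   # matches 6 of 10

ll_wadd = log_likelihood(wadd_preds, observed)
ll_ttb = log_likelihood(ttb_preds, observed)
# The participant is classified as using the strategy with the higher
# likelihood -- here the weighted compensatory (WADD) pattern.
```

In the actual studies, this choice-based classification was complemented by decision time predictions, since different strategies can predict identical choices for some task types.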

In all experiments, consistency was varied between decision tasks. An example of this manipulation is presented in Figure 2. For participants who estimated the cue “1st League Soccer Team” to be the least valid one, consistency was lower in the Wiesbaden vs. Freiburg decision task than in the decision task below it. According to fast-and-frugal heuristics (i.e., Take The Best), decision times should not differ between the two decision tasks because the number of computational steps that are necessary to select an option does not differ between them. According to the PCS approach, in the Wiesbaden vs. Freiburg decision task, decision time should be higher and confidence judgments should be lower than in the decision task below it. Both predictions were supported empirically (Glöckner, 2006), and the findings were replicated using different decision tasks (including memory-based decision tasks; Glöckner & Hodges, submitted) and different materials (Glöckner & Betsch, submitted).

Finally, we investigated whether coherence shifts occur in city-size decision tasks (Glöckner, Betsch, & Schindler, 2007). After the concept of cue validity and conditional likelihoods had been explained, participants were asked in a pre-test to judge the cue validity for a set of cues. Then individuals were instructed to reflect on how they would decide in a certain city-size decision task (see Figure 2) without actually making a decision. Afterwards, they were asked to judge the same cue validities in a post-test (using the same format as the pre-test). In line with our hypotheses, we found clear coherence shifts (i.e., differences between ratings in the pre- and the post-test) for cue validities in the study.

In sum, empirical evidence suggests that (a) decisions can be made rapidly, but can nevertheless be in line with weighted compensatory rules for information integration; (b) decision times increase with an increase of the inconsistency in the decision situation (for similar results, see Cartwright & Festinger, 1943; Bergert & Nosofsky, 2007); and (c) confidence judgments decrease with increasing inconsistency.

Thus, the results lend additional support to the view that consistency maximizing processes might play a central role in decision making, particularly in the process of information integration and structuring. Evidence concerning coherence shifts, choices, decision times and confidence judgments corroborates the hypothesis that consistency maximizing processes automatically operate towards consistent mental representations by holistically weighing information and accentuating the dominant structure in decision tasks.2

In the previous section we introduced a connectionist approach to decision making. It capitalizes on a PCS decision rule that processes information in parallel. We propose that this PCS rule is a fundamental principle of decision making and not just another strategy from the “heuristic toolbox.” Any new theory of decision making, however, has to be evaluated in the light of the wealth of findings on decision strategies (heuristics) and their application. In the next section, we briefly review the evidence on strategies in decision making and discuss some problems with the multiple strategy view. Specifically, we doubt whether the evidence really allows for the conclusion that individuals employ different decision strategies. Rather, we claim that individuals employ different strategies of search and structuring of the problem space but still process this information by an all-purpose decision strategy, the PCS-rule. Based on this assumption, we advance our PCS approach and put forward an integrative theoretical framework accounting for both decisions among options and search strategies.

3  Evaluating the multiple strategy approach and a new starting point for theorizing

With the rise of a process view in the 1970s, psychologists began to seek the strategies humans actually use in decision making. Soon, this quest yielded a rich harvest: the Lexicographic Rule (LEX, Fishburn, 1974), Elimination by Aspects (EBA, Tversky, 1972), Satisficing (SAT, Simon, 1955), the Majority of Confirming Dimensions Rule (Russo & Dosher, 1983) and the Equal Weight Rule (e.g., Dawes, 1998) are only the most prominent examples of decision strategies designed to avoid the complex calculations of a weighted additive rule — the compensatory aggregation principle of utility theory (e.g., Payne et al., 1993, for an overview). However, the pursuit of such strategies has still not reached its climax. The hunting horns are blowing more loudly than ever (Gigerenzer, 2004), and more and more strategies are being crammed into the toolbox the decision maker is assumed to carry in his mind. Some of these new entries rely on potential correlates of value, such as affective reactions (Damasio, 1994; Slovic et al., 2002), majority behavior (Bohner et al., 1995), the expertise of communicators (Petty & Cacioppo, 1986), familiarity (Tyszka, 1986) or recognition (Klein, 1993; Goldstein & Gigerenzer, 2002). Others, such as the Peak-and-End Heuristic (Kahneman et al., 1993) and the Priority Heuristic (Brandstätter et al., 2006), describe operations of the selective processing of values or reasons.

Obviously, from a multiple-strategy view, one has to deal with the problem of strategy selection. When does an individual apply a certain strategy? Models of strategy selection can be sorted into at least three categories according to the mechanism proposed for strategy selection: (i) decision, (ii) learning and (iii) context.

The decision approach assumes that decision makers decide how to decide. Contingent upon the situation, strategy candidates are assessed in a meta-calculus, trading off costs (in terms of time and processing effort) and benefits (the expected accuracy achieved by application of a certain strategy). The strategy with the best balance is chosen. Therefore, the decision approach restores the notion of utility maximization on the super-ordinate level of strategy choice. Well known examples are models of contingent decision making (Beach & Mitchell, 1978) and adaptive strategy selection (Payne et al., 1988; 1993). The decision approach to strategy selection, however, obviously runs into problems. It initiates an infinite regress on the theoretical level (Betsch, 1995; Payne, 1982). If we accept that individuals apply multiple strategies for behavioral decisions, then why shouldn’t they use these shortcuts on a higher level as well? Consequently, we may ask how people decide how to decide how to decide and so on — a chain of justification that can only be truncated arbitrarily.

The learning approach assumes that strategy selection often functions in a bottom-up fashion (Payne & Bettman, 2001). By virtue of feedback learning, decision makers can acquire strategy routines (e.g., Bröder & Schiffer, 2006). These processes can be described in terms of reinforcement learning (Rieskamp & Otto, 2006) or the formation of production rules (Pitz, 1977). Subsequently, the selection of strategies can be driven by the recognition of cues that signal the appropriateness of a strategy in recurring situations (cf. also the Recognition-Primed Decision Model by Klein, 1993; 1999). In light of the huge literature on problem solving and expertise (e.g., Frensch & Funke, 1995), such a view can hardly be questioned. Obviously, any theory of strategy choice should address the role of learning. However, approaches that rely exclusively on learning and domain specificity will have a limited scope, because they cannot predict strategy selection in new situations.

The context approach refrains from spelling out a mechanism for strategy selection. It concentrates on identifying crucial task and context factors that predict types of strategies rather than tokens. Prominent examples are the dual process models from attitude research, such as Fazio’s MODE model (Fazio, 1990) and the Elaboration Likelihood Model (Petty & Cacioppo, 1986). As a common denominator, these models posit that ability and motivation are key determinants for strategy selection. If cognitive abilities are constrained (e.g., due to time limits or distraction) and motivation is low, individuals will rely on low-effort decision-making heuristics or even automatic response rules. In contrast, a high degree of ability and high motivation will result in the application of strategies that involve a deeper elaboration of relevant information. Obviously, the problem with these models is the lack of precision. It is not possible to predict, say, when a non-compensatory or a compensatory search strategy will be applied.

These different theoretical approaches coexist. None of them has been sufficiently elaborated and empirically tested to satisfactorily explain and predict the process of strategy and option selection. Actually, we doubt whether any of the above approaches represents a promising starting point for solving the problem of strategy selection in the near future. Each of the models has shortcomings that are inherent in their theoretical line of thinking. Moreover, we claim that all of the above approaches suffer from a common sophism. Implicitly or explicitly, they take for granted that people really use different kinds of strategies for decision making.

Decision strategies described in the literature indeed seem to be very different. The lexicographic rule (LEX), for instance, starts comparing options on the most important attribute and selects the option with the best value. It goes on without information integration and compensation. In contrast, the weighted additive rule (WADD) first integrates information within each option and then selects the option with the highest aggregated value. Moreover, decision making looks different if one considers process measures. All studies using an information board paradigm converge by showing that patterns of information acquisition vary substantially, contingent upon task and context factors. The patterns of information search actually used by individuals map onto a number of decision strategies described in the literature, such as LEX and WADD. There is also evidence indicating that choices correspond with distinct types of strategies (for classification methods based on a joint consideration of patterns of choices and/or process measures, see Bröder & Schiffer, 2003; Glöckner, 2006). Altogether, these findings seem to provide ample support for the notion that individuals apply different decision rules.

On closer inspection, however, the evidence is not conclusive. Researchers measure observable variables such as information search (e.g., movements in a matrix), choices and response latencies. The decision itself — the integration of information and the application of a decision rule — cannot be directly observed. To make things even more complicated, different decision rules can produce similar outcomes, and, based on different information, the same decision rule can produce different outcomes (Lee & Cummins, 2004). Consider, for example, an artificial system that is programmed to apply a single decision rule, say, “choose the alternative with the highest expected value.” This rule will produce different choices in the same environment, depending on the amount and type of information that is fed into the system. If the input contains only information about the most important attribute, its choices will converge with those made by applying a LEX rule (Lee & Cummins, 2004). However, it would be false to conclude from the observation of search and choice patterns that the system has applied a LEX rule in making its choice (cf. Bergert & Nosofsky, 2007).
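This argument can be sketched with a toy system of this kind (invented numbers; the weighted-sum rule below is a stand-in for the expected-value rule in the example above):

```python
def single_rule(options, weights):
    # The system's only decision rule: choose the alternative with the
    # highest weighted sum of whatever information it is given.
    return max(options, key=lambda n: sum(w * v for w, v in zip(weights, options[n])))

weights = [0.45, 0.30, 0.25]
full_input = {"A": [1, 0, 0], "B": [0, 1, 1]}
single_rule(full_input, weights)            # "B" with full information

# Feed in only the most important attribute: the very same rule now
# produces the choice that would be credited to a LEX strategy.
top_cue_only = {n: vals[:1] for n, vals in full_input.items()}
single_rule(top_cue_only, weights[:1])      # "A"
```

The rule never changed; only its input did, so the observed LEX-like pattern licenses no conclusion about the integration mechanism.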

The distinction between strategies for search and strategies for decision is crucial. We interpret the results of process research as conclusive evidence for the view that people employ different strategies for information search. However, in line with other recent unifying decision approaches (Lee & Cummins, 2004; Newell, 2005), we doubt that individuals actually use different strategies for making preferential decisions. We start our theoretical contribution with the assumption that there is only one decision rule (a rule for information integration and choice) for making all kinds of decisions. We further propose that this rule follows the PCS mechanism described above. We assume that the underlying process operates automatically. In contrast, processes of searching for, producing and changing information are assumed to be primarily under deliberate control. The latter are open to introspection, can be verbalized and give the individual the feeling that he or she is deciding based on reasoning. However, most of the choices we make during a lifetime do not require such processes of deliberate construction.

4  Towards a PCS framework for option and strategy choice3

Decisions can occur without deliberate mental control. The core operations of the decision process — information integration and the selection of a behavioral option — are often quickly performed by the automatic system (Betsch, 2005; Glöckner, 2006). Earlier in this paper, we showed that these operations can be understood, described and modeled as a parallel constraint satisfaction process (PCS). We posit that PCS processes are instigated any time a preferential or probabilistic decision has to be made, regardless of whether the decision is primarily instantiated by situational or internal factors. We therefore consider the PCS rule an all-purpose mechanism for information integration and selection in decision making.

The PCS rule holistically considers the information contained in a network. The network consists of all pieces of information that comprise the decision problem (cues, goals, options, evaluations, etc.). In many mundane situations, the constitution of the network does not require any sort of active information search. Salient features of the environment and currently activated memory entries provide the input to the network. As already noted, PCS processes set in at once and attempt to find an option that serves the goals at stake. We refer to the network installed spontaneously when encountering a decision situation as the primary network (see Figure 3). All operations performed on the primary network are dedicated to successfully solving the decision problem by identifying the most promising choice option in the network. In the beginning, all decisions are assumed to be option-centered. In contrast to models of contingent decision making (e.g., Beach & Mitchell, 1978; Payne et al., 1988), we posit that the process of decision making does not start at the strategy level. In our framework, the term “option” refers to behavioral candidates for achieving the goals that constitute the decision problem represented in the primary network. In contrast, we use the term “strategy” for candidates contained in the secondary network to be described below. Such strategies involve deliberate activities that are concerned with changing the primary network, for example, by active search and adding new information, by changing elements of the network (e.g., via inference and reinterpretation) or by changing the weights of the connections among nodes in the network. We refer to these processes as deliberate constructions (DC).

The next theoretical steps are straightforward. We have to (i) determine the conditions for initiating DC operations, (ii) describe the types of strategies in more detail and (iii) pin down the selection mechanism among them.

4.1  Initiating DC operations

The PCS rule strives to find the most coherent or consistent solution to a decision problem by changing the activation level of the elements contained in the working network. The architecture of the network provides the constraints under which the quest for consistency evolves. Thus, the final level of consistency is bounded by the weights assigned to the connections between the elements of the network. If the level of consistency (C) exceeds a certain threshold (θ), PCS processes will be terminated and the option with the highest activation will be chosen. Under these conditions, DC operations are not necessary to solve a decision problem.
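The settling process just described can be made concrete in a small simulation (our own sketch, loosely following interactive-activation updating; the network layout, weights and parameter values are invented for illustration and are not the authors' simulation):

```python
import numpy as np

# Nodes: 0 = general activation source, 1-2 = cues, 3-4 = options A and B.
# Symmetric weights encode excitatory (+) and inhibitory (-) constraints.
n = 5
W = np.zeros((n, n))
links = [(0, 1, 0.5), (0, 2, 0.5),      # source activates both cues
         (1, 3, 0.2), (1, 4, -0.2),     # cue 1 speaks for A, against B
         (2, 3, -0.1), (2, 4, 0.1),     # the weaker cue 2 favors B
         (3, 4, -0.2)]                  # options inhibit each other
for i, j, w in links:
    W[i, j] = W[j, i] = w

def pcs_settle(W, source=0, decay=0.05, floor=-1.0, ceiling=1.0,
               eps=1e-4, max_iter=500):
    """Spread activation until consistency C stops changing by more than eps
    (a crude stand-in for the acceptance threshold); return activations and C."""
    a = np.zeros(len(W))
    a[source] = 1.0                     # the source node is clamped
    c_prev = -np.inf
    for _ in range(max_iter):
        net = W @ a                     # summed input to every node
        for i in range(len(a)):
            if i == source:
                continue
            if net[i] > 0:              # excitatory input pushes toward ceiling
                a[i] = a[i] * (1 - decay) + net[i] * (ceiling - a[i])
            else:                       # inhibitory input pushes toward floor
                a[i] = a[i] * (1 - decay) + net[i] * (a[i] - floor)
        c = float(a @ W @ a) / 2        # consistency: sum of w_ij * a_i * a_j
        if abs(c - c_prev) < eps:
            break
        c_prev = c
    return a, c

activations, consistency = pcs_settle(W)
# Option A (node 3) settles at a higher activation than option B (node 4),
# so A would be chosen once the consistency level is acceptable.
```

Because the weights constrain how far consistency can rise, a poorly structured network may settle below the threshold, which is where the next step comes in.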

Under which conditions are DCs required for arriving at a decision? We seek to identify an endogenous factor, without thereby claiming that exogenous factors are irrelevant. The consistency in the primary network is one such endogenous factor. We assume that if the level of consistency falls short of the threshold (C < θ, see Figure 3), then this is a sufficient condition for initiating DC operations. At this moment, a secondary network is created and an appropriate DC strategy will be chosen and implemented.4 It is important to note that the resulting DC operations will not directly lead to a choice from among the behavioral options. They only serve to help the primary network reach an acceptable level of consistency so that the decision rule (choose the option with the highest activation) can be applied. In other words, DC strategies are no substitute for the PCS rule of information integration and choice.
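The division of labor just described can be summarized in a toy control loop (our illustration; settle() is a crude stand-in for the automatic PCS process, and the threshold and evidence values are invented):

```python
def settle(network):
    """Toy stand-in for the automatic PCS process: activation = summed
    evidence per option, consistency C = lead of the best option."""
    acts = {opt: sum(evidence) for opt, evidence in network.items()}
    ranked = sorted(acts.values(), reverse=True)
    c = ranked[0] - ranked[1] if len(ranked) > 1 else ranked[0]
    return acts, c

def decide(network, dc_strategies, theta=0.5):
    """Apply the all-purpose rule; invoke DC operations only while C < theta."""
    for dc in dc_strategies:
        acts, c = settle(network)
        if c >= theta:              # acceptable consistency: no DC needed
            break
        network = dc(network)       # C < theta: a DC strategy modifies the
                                    # primary network (search, inference, ...)
    acts, _ = settle(network)
    return max(acts, key=acts.get)  # choose the most highly activated option

# An ambiguous starting network (C = 0.1 < theta) ...
primary = {"A": [0.3], "B": [0.2]}
# ... and one DC strategy: search for an additional cue and add its values.
add_cue = lambda net: {"A": net["A"] + [0.6], "B": net["B"] + [0.1]}

decide(primary, [add_cue])          # "A", chosen only after the DC step
```

Note that the final choice is always made by the same rule; the DC strategy merely changes what the rule operates on.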

Thus, the two networks serve different functions. The job of the primary network is to make the behavioral decision (i.e., to select an option). In many routine decision situations, PCS processing will immediately find a coherent pattern of activations and, thus, can detect the option to be chosen. Decision making, under such conditions, can be guided by automatic processes only. The secondary network functions as an aiding system in order to help the primary network do its job. It selects strategies that help to restructure the primary network, or, if the primary network is empty (e.g., because no relevant information is accessible or salient in the environment), to form the network (e.g., by opening boxes in a mouselab). Note that the secondary network impacts option decisions in an indirect fashion by providing or changing information. Nevertheless, decisions are also made in the secondary network. These are made, however, among strategies of search, information generation and change. Apart from their different functions, the two networks obey the same principles of consistency maximizing.

Concerning the quality of the resulting decisions, it has to be noted that a high level of consistency within the primary network is not a measure of the quality (or rationality) of a decision per se. Decisions can be based on highly consistent mental representations and may nevertheless be dead wrong. One major reason for this could be that the primary network is not tuned to the environment and thus does not accurately represent the structure of the decision task (Glöckner, in press). In probabilistic inferences like the city-size decisions described above, the level of consistency that is finally reached in the primary network reflects to a certain extent the likelihood that one option is better on the distal criterion than the other(s). Similarly, in legal cases the consistency of different possible interpretations of the evidence is related to the likelihood of these interpretations (cf. Thagard, 2003). In preference decisions (which we have not touched on in the discussion so far) the level of consistency reflects the profitability of different options according to the considered goals or attributes. Networks consist of options and goals (the latter replacing the cue nodes), and the option which is most consistent with the considered goals (and which is thus most profitable) will be selected.


Figure 3: An integrative PCS model for the selection of options (primary network) and deliberate construction strategies (secondary network).

The threshold of the acceptable level of consistency is not considered a constant. This level may be adjusted, conditional upon personal, contextual and task-related factors. For instance, decision makers may lower the threshold level if time constraints increase. They may elevate the level if the decision is highly relevant for them or someone else. A more thorough discussion of moderating factors is provided by Betsch (2005).

Note that the PCS rule shares with decision field theory (Busemeyer & Townsend, 1993) and other evidence-accumulation models (e.g., Lee & Cummins, 2004; Usher & McClelland, 2004) the general idea that a certain level of confidence has to be reached before a decision is made. In evidence-accumulation models, pieces of information for different options are added up serially until one option is sufficiently better than the other(s) to be selected. Despite these conceptual similarities, the PCS rule postulates a completely different process, based on the idea that information is considered in its complex constellation rather than serially added up. Whereas evidence-accumulation models stick with the idea that pieces of information are merely used to infer a choice in a unidirectional manner, the PCS rule postulates a hermeneutic reasoning process in which pieces of information and options are evaluated and interpreted in a bidirectional manner (Holyoak & Simon, 1999).5
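For contrast, the serial logic of evidence-accumulation models can be sketched as follows (a toy example with invented validities; real models such as decision field theory are of course far more elaborate):

```python
def accumulate(cues, criterion=0.3):
    """Inspect cues one at a time, summing weighted evidence (> 0 favors A,
    < 0 favors B) until one option leads by the criterion amount."""
    evidence = 0.0
    for validity, favors_a in cues:
        evidence += validity if favors_a else -validity
        if abs(evidence) >= criterion:   # enough evidence: stop early
            break
    return "A" if evidence > 0 else "B"

# Three cues in the order they are inspected: (validity, does it favor A?).
cues = [(0.2, True), (0.15, True), (0.1, False)]
accumulate(cues)   # "A": the process terminates after the second cue
```

Unlike the bidirectional PCS process, nothing here feeds back from the emerging preference to the interpretation of the cues; each piece of evidence enters once and unchanged.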

4.2  Types of DC strategies

The choice alternatives contained in the secondary network are strategies for searching, producing or changing information. “Search for information in the environment according to the importance of cues across options” or “consider all the outcomes of an option before considering a further option” are examples of search strategies. Note that the former conforms to non-compensatory search strategies and the latter to compensatory ones. Production strategies refer to both rehearsal strategies for accessing information from memory and rules of inference and deduction. The latter may help to anticipate the risk of future events (e.g., if a firm has performed extremely well on the stock market during the past years and the Dow Jones index has reached a climax, then it is likely that the stocks of this firm will fall in the next months). Strategies of information change involve a reinterpretation of the relations among goals, options and behaviors. A routine decision maker might realize that the world has changed and that the routine option no longer promotes his or her goals. Due to the prior success of the routine, the connection between the goals and the behavior is positive. By virtue of active mental control, the decision maker may adapt the weights temporarily (a lasting change can only be achieved via associative learning, cf. Betsch et al., 2004; Betsch, 2005).

Like options on the behavioral level, DC strategies can be learned and become routinized (Bröder & Schiffer, 2006). Over the course of their lifetime, deciders will eventually accumulate a set of DC routines that suit specific types of decision situations. These routines need not be learned via first-hand experience. They can also be handed down via communication and instruction. For instance, we teach our MA and PhD students to use the PsycINFO search engine before deciding which line of research they should pursue further.

In new situations, deciders may remain focused on generalized strategies for information production. Although this varies among individuals, these generalized strategies may remain comparatively stable within a person. They manifest themselves in individual differences regarding the scrutiny, the focus of attention and the direction in which a person considers information. Some people generally prefer to consider a larger amount of information and to explore the problem space more thoroughly than others. These people score high on pertinent inventories such as the Maximizing Scale (Greifeneder & Betsch, 2006; Schwartz et al., 2002) and the Need for Cognition Scale (Cacioppo & Petty, 1982). There is also evidence that individuals differ with regard to the type of information they primarily focus on when making a decision. For example, some people prefer to focus on the experiential or affective level, whereas others are more responsive to the noetic or cognitive level of information (C. Betsch, 2004). Differences in reading direction may be especially manifested when information is presented graphically or in written form. One can speculate, for example, about whether search movements in an information board (e.g., the mouselab) might systematically vary across cultures in accordance with differences in the direction of reading. Moreover, information search may generally be biased towards the confirmation rather than disconfirmation of a starting hypothesis (e.g., Wason, 1960). If individuals start with the hypothesis that option A might be better than option B (e.g., due to the fact that A performed well in the past), they might reveal a tendency to search for evidence that favors A or challenges B (e.g., Betsch, Haberstroh et al., 2001).

4.3  Selection among DC strategies

We propose that decision making among DC strategies follows the same principle as decision making among choice options. As a general-purpose mechanism for decision making, the PCS rule will also serve the secondary network. The goals contained in this network are instrumental to the primary decision problem. The main goal is to help the primary network to find a solution, which means that the utility of a strategy depends on the extent to which it helps establish consistency in the primary network. Other goals relating to accuracy and effort complement the motivational part of the secondary network. Again, the content and structure of the network are strongly determined by prior experience and learning. Needless to say, such a view can easily incorporate the notion of strategy routines (Bröder & Schiffer, 2006; Rieskamp & Otto, 2006). The nature of processing information in the primary and the secondary networks is identical. The only difference is that the secondary network serves the primary network. Specifically, we assume that after a DC candidate is chosen and implemented, the output of these operations (e.g., new information) is fed into the primary network. Secondary processes (network formation, implementation of DC operations) will operate until an acceptable level of consistency is installed in the primary network (Figure 3).

4.4  Modeling information search and choice within the PCS framework

The PCS model posits that preferential decision making starts with the attempt to make a decision on the subordinate level (i.e., selecting an option that serves the goals constituting the decision problem). As such, our framework differs from those accounts that claim that the process of decision making starts with a decision among strategies on the superordinate level (e.g., the contingency model: Beach & Mitchell, 1978; the effort-accuracy framework: Payne & Bettman, 2001; SSL: Rieskamp & Otto, 2006). In many if not most of the choices we make during a lifetime, deliberate processes of search, production and the change of information are not necessary to discover a consistent solution to a decision problem. In contrast, laboratory settings usually create conditions that are not representative of mundane decisions in that they hamper the formation of a primary network. Consider, for example, the mouselab, an often-used tool to study information search in decision making (e.g., Payne et al., 1988; cf. Glöckner & Betsch, 2007, for a discussion of this method). In the mouselab, the individual is unfamiliar with the options and all the relevant information is hidden in a covered matrix. Hence, the primary network is nearly empty (it still contains goals and representations of the decision problem). Such experimental conditions do not merely invite DC operations; rather, they make DC operations a precondition for reaching a decision. As such, the secondary network will be formed immediately upon encountering the task, and the individual will first decide how to gather or produce information.

Context sensitivity is a major feature of the PCS model. The formation of a working network is conceived as an automatic process, which is not selective with regard to the relevance of the information provided. Whenever a decision problem is encountered, all salient aspects of the environment and currently accessible information from memory feed into the working network (see Figure 3). Deliberation is not a necessary condition for starting decision making. Primary network formation at an early stage can be considered a process of passive contextualization. It seizes the given and is blind to the unstated (processes of DC can help to remedy this problem, but recall that they are optional). Any piece of information encoded in the environment or activated from memory, whether it is objectively relevant or not, will be considered as long as it can be tied into the network (this primarily depends on prior associative learning). Many of the observed violations of the axioms of utility theory have to do with the impact of objectively irrelevant information (e.g., framing, Tversky & Kahneman, 1981). From the viewpoint of our theory, context dependency either in its negative (violation of invariance principle) or in its positive form (adapting to changing contexts) is an inevitable consequence of the automatic processes guiding the initial representation of the decision problem.

The model explicitly adopts a learning perspective on human decision making. Accordingly, decisions are embedded in a stream of behavioral experiences, and choices are conceived as having a past and future (Betsch & Haberstroh, 2005). On the theoretical level, past experiences manifest themselves in the structure of weights in the network reflecting prior associative learning. Future experiences will provide the decision maker with feedback. In technical terms, feedback can cause lasting changes to the weights of associations. One consequence of feedback learning is that individuals establish a repertoire of routines both on the level of options (e.g., Betsch, Haberstroh et al., 2001; Betsch et al., 2004) and on the level of DC strategies (e.g., search routines, Bröder & Schiffer, 2006). Betsch (2005) provides a detailed discussion on how these effects can be accounted for within a PCS approach.

5  Summary

We have outlined the fundamentals of a PCS framework for option and strategy choice. The framework starts with the notion that there are different building blocks for information search and production, but there is only one mechanism for information integration and choice. This mechanism is described in terms of a PCS process that is able to work automatically. Processes of deliberation in decision making are mainly concerned with actively constructing the problem space. Major processes are the search for information, the production of information via inference and temporary changes of pre-established knowledge. Both the automatic and the deliberate operations are important for adaptive decision making. We tied them together in an integrative framework. Strategies in this framework are not strategies of decision making but strategies of searching, editing and changing information. In a nutshell, these strategies are behaviors, and individuals are assumed to select among them in the same manner as they select among all sorts of behaviors: by applying their all-purpose PCS rule of decision making.

References

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.

Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the selection of decision strategies. Academy of Management Review, 3, 439–449.

Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129.

Betsch, C. (2004). Präferenz für Intuition und Deliberation. Inventar zur Erfassung von affekt- und kognitionsbasiertem Entscheiden. [Preference for intuition and deliberation (PID): An inventory for assessing affect- and cognition-based decision-making]. Zeitschrift für Differentielle und Diagnostische Psychologie, 25, 179–197.

Betsch, T. (1995). Das Routinen-Modell der Handlungsselektion [The routine model of selection among options]. Aachen: Shaker.

Betsch, T., Plessner, H., Schwieren, C., & Gütig, R. (2001). I like it but I don’t know why: A value-account approach to implicit attitude formation. Personality and Social Psychology Bulletin, 27, 242–253.

Betsch, T., Haberstroh, S., Glöckner, A., Haar, T., & Fiedler, K. (2001). The effects of routine strength on information acquisition and adaptation in recurrent decision making. Organizational Behavior and Human Decision Processes, 84, 23–53.

Betsch, T., Haberstroh, S., Molter, B., & Glöckner, A. (2004). Oops – I did it again: When prior knowledge overrules intentions. Organizational Behavior and Human Decision Processes. 93, 62–74.

Betsch, T. (2005). Preference theory: An affect based approach to recurrent decision making. In T. Betsch & S. Haberstroh (Eds.), The Routines of decision making (pp. 39–66). Mahwah, NJ: Lawrence Erlbaum.

Betsch, T., & Haberstroh, S. (2005). Current research on routine decision making: Advances and prospects. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 359–376). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Betsch, T. (2007). The nature of intuition and its neglect in research on judgment and decision making. In H. Plessner, C. Betsch, and T. Betsch, (Eds.), Intuition in judgment and decision making (pp. 3–22). Mahwah, NJ: Lawrence Erlbaum.

Bohner, G., Moskowitz, G. B., & Chaiken, S. (1995). The interplay of heuristic and systematic processing of social information. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (Vol. 6, pp. 33–68). Chichester: Wiley.

Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.

Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attributive decision making. Journal of Behavioral Decision Making, 16, 193–213.

Bröder, A. & Schiffer, S. (2006). Adaptive flexibility and maladaptive routines in selecting fast and frugal decision strategies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 904–918.

Brownstein, A. L. (2003). Biased predecision processing. Psychological Bulletin, 129, 545–568.

Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459.

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131.

Cartwright, D., & Festinger, L. (1943). A quantitative theory of decision. Psychological Review, 50, 595–621.

Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. New York: Avon Books.

Davis, D. G. S., Staddon, J. E. R., Machado, A., & Palmer, R. G. (1993). The process of recurrent choice. Psychological Review, 100, 320–341.

Dawes, R. M. (1998). Behavioral decision making and judgment. In D.T. Gilbert, S.T. Fiske & G. Lindzey (Eds.), The handbook of social psychology — 4th ed. (Vol. 1, p. 497–548). Boston, MA: McGraw-Hill.

Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory process model for judgments of likelihood. Psychological Review, 106, 180–209.

Fazio, R. H. (1990). Multiple processes by which attitudes guide behavior: the MODE model as an integrative framework. Advances in Experimental Social Psychology, 23, 75–109.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fishburn, P. C. (1974). Lexicographic orders, utilities, and decision rules: A survey. Management Science, 20, 1442–1472.

Frederick, S. (2002). Automated choice heuristics. In D. Griffin, T. Gilovich & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 548–558). New York: Cambridge University Press.

Frensch, P. A., & Funke, J. (Eds.). (1995). Complex problem solving - The European perspective. Hillsdale, NJ: Erlbaum.

Gigerenzer, G., Todd, P. M., & the ABC Research Group (1999). Simple heuristics that make us smart. Oxford: Oxford University Press.

Gigerenzer, G. (2004). Fast and frugal heuristics: The tools of bounded rationality. In D. Koehler & N. Harvey (Eds.), Handbook of judgment and decision making (pp. 62–88). Oxford, UK: Blackwell.

Glöckner, A. (2006). Automatische Prozesse bei Entscheidungen [Automatic processes in decision making]. Hamburg: Kovac.

Glöckner, A., & Betsch, T. (submitted). Multiple-reason decision making based on automatic processing.

Glöckner, A. (2007). Does intuition beat fast and frugal heuristics? A systematic empirical analysis. In H. Plessner, C. Betsch, and T. Betsch (Eds.), Intuition in judgment and decision making (pp. 309–325). Mahwah, NJ: Lawrence Erlbaum.

Glöckner, A., & Hodges, S. D. (submitted). Strategy selection in memory based decisions: Simplifying fast and frugal heuristics versus weighted compensatory strategies based on automatic information integration.

Glöckner, A., Betsch, T., & Schindler, N. (2007). Construction of probabilistic inferences by constraint satisfaction. Manuscript submitted for publication.

Glöckner, A. (in press). How evolution outwits bounded rationality: The efficient interaction of automatic and deliberate processes in decision making and implications for institutions. In C. Engel and W. Singer (Eds.), Better than conscious. FIAS Workshop Report. Cambridge, MA: MIT Press.

Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.

Greifeneder, R. & Betsch, C. (2006). Lieber die Taube auf dem Dach! Eine Skala zur Erfassung interindividueller Unterschiede in der Maximierungstendenz [Validation and German translation of the maximizing scale]. Zeitschrift für Sozialpsychologie, 37, 233–243.

Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.

Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the relative efficiency on intuitive and analytical cognition. IEEE Transactions on Systems, Man and Cybernetics, 17, 753–770.

Hasher, L., & Zacks, R. T. (1984). Automatic processing of fundamental information: The case of frequency of occurrence. American Psychologist, 39, 1372–1388.

Heider, F. (1946). Attitudes and cognitive organization. Journal of Psychology, 21, 107–112.

Hogarth, R. (2001). Educating intuition. Chicago: University of Chicago Press.

Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13, 295–355.

Holyoak, K. J., & Simon, D. (1999). Bidirectional reasoning in decision making by constraint satisfaction. Journal of Experimental Psychology: General, 128, 3–31.

Kahneman, D. & Frederick, S. (2002). Representativeness revisited: attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). New York: Cambridge University Press.

Kahneman, D., Fredrickson, B. L., Schreiber, C. A., & Redelmeier, D. A. (1993). When more pain is preferred to less: Adding a better end. Psychological Science, 4, 401–405.

Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 138–147). Norwood, NJ: Ablex Publishing Corporation.

Klein, G. (1999). Sources of power. How people make decisions. Cambridge, MA: MIT Press.

Köhler, W. (1947). Gestalt psychology: An introduction to new concepts in modern psychology. New York: Liveright.

Kunda, Z., & Thagard, P. (1996). Forming impressions from stereotypes, traits, and behaviors: A parallel constraint satisfaction theory. Psychological Review, 103, 284–308.

Künzler, R., & Bakker, C. M. (2001). Female preferences for single and combined traits in computer animated stickleback males. Behavioral Ecology, 12, 681–685.

Lee, M. D., & Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models. Psychonomic Bulletin & Review, 11, 343–352.

Lieberman, M. D. (2000). Intuition: A social cognitive neuroscience approach. Psychological Bulletin, 126, 109–137.

McClelland, J. L., & Rumelhart, D. E. (1981). An interactive model of context effects in letter perception. Part 1. An account of basic findings. Psychological Review, 88, 375–407.

Montgomery, H. (1989). From cognition to action: The search for dominance in decision making. In H. Montgomery & O. Svenson (Eds.), Process and structure in human decision making (pp. 23–49). New York: Wiley.

Newell, B. R. (2005). Re-visions of rationality? Trends in Cognitive Science, 9, 11–15.

Payne, J. W. (1982). Contingent decision behavior. Psychological Bulletin, 92, 382–402.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 534–552.

Payne, J. W., Bettman, J. R, & Johnson, E. J. (1993). The adaptive decision maker. New York: Cambridge University Press.

Payne, J. W., & Bettman, J. R. (2001). Preferential choice and adaptive strategy use. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality — the adaptive toolbox (pp. 123–145). Cambridge, MA: MIT Press.

Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123–205.

Pitz, G. F. (1977). Decision making and cognition. In H. Jungermann & G. de Zeeuw (Eds.), Decision making and change in human affairs (pp. 403–424). Dordrecht, Holland: Reidel.

Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, parallel constraint satisfaction and gestalt principles: (Re)introducing cognitive dynamics to social psychology. Personality and Social Psychology Review, 1, 26–53.

Read, S. J. & Miller, L. C. (1998). On the dynamic construction of meaning: An interactive activation and competition model of social perception. In S. J. Read & L. C. Miller (Eds.), Connectionist models of social reasoning and social behavior (pp. 27–70), Mahwah, NJ: Lawrence Erlbaum.

Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies. Journal of Experimental Psychology: General, 135, 207–236.

Rubin, E. (1915/1921). Visuell wahrgenommene Figuren [Visual perception of figures]. Copenhagen: Gyldendal.

Rumelhart, D. E., & McClelland, J. L. (1982). An interactive activation model of context effects in letter perception: II. The contextual enhancement effect and some tests and extensions of the model. Psychological Review, 89, 60–94.

Rumelhart, D. E., Hinton, G. E., & McClelland, J. L. (1986). A general framework for parallel distributed processes. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 45–77). Cambridge, MA: MIT Press.

Rumelhart, D. E., & McClelland, J. L. (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition: Vol. 1. Foundations. Cambridge, MA: MIT Press/Bradford Books.

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2, pp. 7–57). Cambridge, MA: MIT Press.

Russo, J. E., & Dosher, B. A. (1983). Strategies for multi-attribute binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 676–696.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83, 1178–1197.

Sedlmeier, P., & Betsch, T. (Eds.). (2002). Etc.: Frequency processing and cognition. Oxford: Oxford University Press.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127–190.

Shultz, T. R., & Lepper, M. R. (1996). Cognitive dissonance reduction as constraint satisfaction. Psychological Review, 103, 219–240.

Simon, D., & Holyoak, K. J. (2002). Structural dynamics of cognition: From consistency theory to constraint satisfaction. Personality and Social Psychology Review, 6, 283–294.

Simon, D., Krawczyk, D. C., & Holyoak, K. J. (2004). Construction of preferences by constraint satisfaction. Psychological Science, 15, 331–336.

Simon, D. (2004). A third view of the black box: Cognitive coherence in legal decision making. University of Chicago Law Review, 71, 511–586.

Simon, D., Snow, C. J., & Read, S. J. (2004). The redux of cognitive consistency theories: Evidence judgments by constraint satisfaction. Journal of Personality and Social Psychology, 86, 814–837.

Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69, 99–118.

Simon, H. A. (1982). Models of bounded rationality. Cambridge, MA: MIT Press.

Sloman, S. A., & Hagmayer, Y. (2006). The causal psycho-logic of choice. Trends in Cognitive Sciences, 10, 407–412.

Slovic, P., Finucane, M., Peters, E., & McGregor, D. G. (2002). The affect heuristic. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 397–420). New York: Cambridge University Press.

Svenson, O. (1992). Differentiation and consolidation theory of human decision making: A frame of reference for the study of pre- and post-decision processes. Acta Psychologica, 80, 143–168.

Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435–467.

Thagard, P., & Millgram, E. (1995). Inference to the best plan: A coherence theory of decision. In A. Ram & D. B. Leake (Eds.), Goal-driven learning (pp. 439–454). Cambridge, MA: MIT Press.

Thagard, P. (2003). Why wasn’t O.J. convicted? Emotional coherence in legal inference. Cognition and Emotion, 17, 361–383.

Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281–299.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

Tyszka, T. (1986). Information and evaluation processes in decision making: The role of familiarity. In B. Brehmer, H. Jungermann, P. Lourens and G. Sevón (Eds.), New directions in decision research (pp. 151–161). Amsterdam: North-Holland.

Usher, M., & McClelland, J. L. (2004). Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review, 111, 757–769.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129–140.

Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44, 92–107.

Weber, E. U., Goldstein, W. M., & Busemeyer, J. R. (1991). Beyond strategies: Implications of memory representation and memory processes for models of judgment and decision making. In W. E. Hockley & S. Lewandowsky (Eds.), Relating theory and data: Essays on human memory in honor of Bennet B. Murdock. Hillsdale, NJ: Lawrence Erlbaum.

Wicklund, R. A., & Brehm, J. W. (1976). Perspectives on cognitive dissonance. Hillsdale, NJ: Lawrence Erlbaum.


*
We thank Christoph Engel, Martin Beckenkamp, Ben Newell, Arndt Bröder, Nina Horstmann, Tanja Ostermann, Andrea Ahlgrimm, Hendrik Hakenes and the reviewers for insightful comments on earlier manuscript drafts. Address: Andreas Glöckner, Max Planck Institute for Research on Collective Goods, Kurt-Schumacher-Str. 10, D-53113 Bonn, Germany. Email: gloeckner@coll.mpg.de.
1
The latter is necessary because simple strategies like Take the Best and Equal Weight are always submodels of weighted compensatory strategies and thus can only be differentiated by using additional dependent variables (see also Lee & Cummins, 2004).
2
The focus of the presented research was to test the model predictions against fast-and-frugal heuristics. Further research will be needed to test distinct predictions concerning choice, decision time and confidence against other complex decision models which could, for instance, also account for quick compensatory choices (e.g., Busemeyer & Townsend, 1993; Sloman & Hagmayer, 2006; Usher & McClelland, 2004).
3
The suggested framework elaborates on connectionist models for option choice put forward recently by the authors (Preference Theory: Betsch, 2005; Consistency Maximizing Approach: Glöckner, 2006).
4
For a related discussion on solving complex decision tasks by two interacting network models and the implementation of sequential thought processes in parallel distributed processing models see Rumelhart, Smolensky, McClelland, and Hinton (1986).
5
The leaky competing accumulator (LCA) model of perceptual choice by Usher and McClelland (2004) is an advancement of evidence accumulation models that most closely resembles our model in that it postulates a non-linear activation function, a leakage of activation (i.e., decay) and a mutual inhibition between option nodes. Note, however, that in the LCA there are no backward connections from option nodes to the input preprocessing nodes, so the model cannot easily account for the findings concerning coherence shifts and hermeneutic reasoning. Another obvious distinction is that the LCA does not aim to model both automatic and deliberate processes at the same time. Further research will be required to derive and test distinct predictions of the LCA and the PCS rule.