Judgment and Decision Making, Vol. 16, No. 1, January 2021, pp. 57-93

Attentional shifts and preference reversals: An eye-tracking study

Carlos Alós-Ferrer*   Alexander Jaudas#   Alexander Ritschel$

Abstract:

The classic preference reversal phenomenon, where monetary evaluations contradict risky choices, has been argued to arise due to a focus on outcomes during the evaluation of alternatives, leading to overpricing of long-shot options. Such an explanation makes the implicit assumption that attentional shifts drive the phenomenon. We conducted an eye-tracking study to causally test this hypothesis by comparing a treatment based on cardinal, monetary evaluations with a different treatment avoiding a monetary frame. We find a significant treatment effect in the form of a shift in attention toward outcomes (relative to probabilities) when evaluations are monetary. Our evidence suggests that attentional shifts resulting from the monetary frame of evaluations are a driver of preference reversals.


Keywords: preference reversals, ranking, compatibility hypothesis, eye-tracking



1  Introduction

Attention matters. A growing literature is concentrating on the role of attention in human decision making. In the consumer behavior literature, there is little doubt that consumers’ attention is limited, and one of the main objectives of marketing campaigns is simply to attract and direct it (e.g., Roberts & Lattin, 1991; De Los Santos & Wildenbeest, 2012). Recent contributions in decision and game theory have shown how differences in attention and information processing correlate with decision-making styles and biases (e.g., Knoepfle et al., 2009; Reutskaja et al., 2011; Polonio et al., 2015). Prominent models from cognitive psychology conceive of decision values as the result of evidence accumulation processes (e.g., Ratcliff, 1978; Ratcliff & Rouder, 1998; Usher & McClelland, 2001). A key insight of these models is that the construction (or discovery) of value is directed by visual attention, that is, evidence accumulates only if the alternative (or a corresponding attribute) is attended to. This is the essence, for instance, of the attentional drift-diffusion model (Krajbich et al., 2010; Krajbich & Rangel, 2011; Krajbich et al., 2012). This is in agreement with evidence from decision neuroscience suggesting that decision values (neural correlates of choices) are constructed by aggregating inputs from different decision processes or attribute evaluations (Shadlen & Kiani, 2013; Shadlen & Shohamy, 2016).

In this paper, we provide direct empirical evidence substantiating the role of attention for an important anomaly in decision making under risk, maybe one of the most famous and wide-ranging ones: the classic preference reversal phenomenon (Lichtenstein & Slovic, 1971; Grether & Plott, 1979; see Seidl, 2002, for a detailed survey). The phenomenon refers to a pattern of decisions under risk where decision makers explicitly provide monetary values for long-shot lotteries which are above those of more moderate ones, but then choose the latter, in contradiction with Expected Utility Theory and any value-based theory such as Cumulative Prospect Theory. We focus on eye-tracking measurements during a preference reversal experiment with two different treatments (varying the mode of evaluation of lotteries) to provide direct evidence on the role of attention.

A large literature has demonstrated the robustness of this preference reversal phenomenon and postulated different, sometimes competing, explanations (e.g., Tversky et al., 1988; Tversky et al., 1990; Tversky & Thaler, 1990; Casey, 1994; Fischer et al., 1999; Cubitt et al., 2004; Schmidt & Hey, 2004; Butler & Loomes, 2007). The phenomenon is typically demonstrated in paradigms involving pairs of lotteries consisting of a relatively safe lottery, called the P-bet (for “probability”), and a riskier lottery offering a larger prize (a long shot), called the $-bet. Individual preferences over such pairs are then elicited both through pairwise choices and by comparing valuations obtained separately for each lottery through (typically) stated minimal selling prices (Willingness To Accept, WTA). Decision makers often choose the P-bet in the direct choice task, but explicitly value the $-bet above the P-bet, in contradiction with the most basic tenets of decision theories under risk, and specifically with the indifference between a lottery and its certainty equivalent. This phenomenon reveals an inconsistency between elicitation methods which should be equivalent. In turn, this inconsistency is both highly relevant and consequential for applied economic analysis, because individual preferences are in practice often estimated on the basis of monetary valuations and related constructs (see Bateman et al., 2002, for an overview).

Of course, a number of reversals are to be expected simply because choices and evaluations are noisy, but the fundamental observation which needs to be explained is the asymmetry. That is, the reversal pattern described above, where P-bets are chosen but $-bets are valued above them (often called “predicted reversals”), occurs much more frequently (often above 50% of the time, conditional on the P-bet being chosen) than the opposite pattern, in which $-bets are chosen but P-bets receive a higher valuation (often called “unpredicted reversals”).

A prominent argument on the origins of the reversal phenomenon is the Compatibility Hypothesis (Tversky et al., 1988; Tversky et al., 1990). Essentially, it states that, when an evaluation is elicited, attributes that naturally map onto the evaluation scale are given predominant weight. That is, eliciting a monetary evaluation (willingness to accept) makes the monetary outcomes of lotteries more salient and might anchor valuations, giving rise to an overpricing of the $-bets, where the associated monetary outcomes are large. It is not difficult to see how, in a noisy environment, such a phenomenon might give rise to preference reversals as found in the literature. For, if the elicited evaluations of $-bets are systematically biased upward with respect to their true certainty equivalents, it is likely that part of the choices where a P-bet is chosen are associated with overpriced $-bets, resulting in many predicted reversals. In contrast, for choices where the $-bet is chosen, the same overpricing makes it unlikely that the P-bet is valued above the $-bet, resulting in few unpredicted reversals. A formal model based on evaluation noise, which also makes predictions for the associated decision times, was proposed and tested in Alós-Ferrer et al. (2016).
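To make the logic of this asymmetry concrete, the following minimal simulation sketch (our own illustration in Python; the parameter values are arbitrary assumptions, not taken from the experiment or from the model cited above) shows how an upward pricing bias on $-bets, combined with evaluation noise, generates mostly predicted reversals:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000               # simulated lottery pairs (hypothetical)
    ce_p, ce_d = 3.0, 3.0    # assume equal "true" certainty equivalents (Euros)
    noise = 1.0              # evaluation noise (SD, Euros)
    bias = 2.0               # assumed upward pricing bias on $-bets only

    # Choices: noisy comparison of true certainty equivalents (no pricing bias).
    choose_p = ce_p + rng.normal(0, noise, n) > ce_d + rng.normal(0, noise, n)
    # Stated prices: same true values, but the $-bet is overpriced.
    price_p = ce_p + rng.normal(0, noise, n)
    price_d = ce_d + bias + rng.normal(0, noise, n)

    predicted = choose_p & (price_d > price_p)      # P-bet chosen, $-bet priced higher
    unpredicted = ~choose_p & (price_p > price_d)   # $-bet chosen, P-bet priced higher
    print(predicted.mean(), unpredicted.mean())     # predicted reversals dominate

Under these illustrative assumptions the predicted-reversal rate is roughly ten times the unpredicted one, mirroring the asymmetry described in the text.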

The Compatibility Hypothesis and related explanations of the preference reversal phenomenon essentially rest on the assumption that asking for a monetary valuation results in the overweighting of (salient) monetary outcomes. It is then reasonable to assume that eliciting valuations through a monetary scale shifts visual attention toward monetary outcomes, compared to evaluation methods not relying on a monetary scale. We hence hypothesize a link between the overweighting of monetary outcomes and visual attention on outcomes. Specifically, overweighting should be observable through an attentional shift. However, it should be noted that a failure to find supportive evidence for this hypothesis would not undermine the Compatibility Hypothesis in itself, while finding supportive evidence would be in line both with the Compatibility Hypothesis and with our added hypothesis that overweighting should be reflected in a shift in visual attention.

In this study, we want to explicitly test this hypothesis by examining gaze data obtained through eye tracking during decisions under risk in the framework of the preference reversal phenomenon. To establish the link between monetary valuations and attentional shifts, we conduct a preference-reversal experiment with two treatments, one using a standard monetary valuation, and the other relying on an ordinal-based evaluation task without reference to monetary scales. This allows us to test whether the monetary valuation, relative to other evaluation methods, results in a shift in attention toward monetary outcomes.

A previous study by Kim et al. (2012) investigated visual fixations in a preference-reversal experiment, but relied on a single treatment with monetary valuations. They observed that monetary amounts were fixated more than probabilities during evaluations but the opposite was true during choices, which can be taken as initial evidence in favor of a role of attention in preference reversals. However, their experiment departed from standard implementations in several ways. First, the description used by Kim et al. (2012) to elicit monetary valuations (“bidding”) corresponds to Willingness To Pay, while the standard in preference reversal experiments is Willingness To Accept (experiments using WTP do not always find the preference reversal phenomenon; see Casey, 1991, and Schmidt & Hey, 2004). Second, lottery choices were repeated twice, which leads to different definitions of preference reversals. Our first treatment can be seen as a (conceptual) replication of this work, while relying on a standard implementation of preference reversal experiments. In particular, we will also compare the number of fixations on outcomes and probabilities within this treatment. However, without an additional treatment, it remains unclear whether the effect reported by Kim et al. (2012) is due only to the presence of a monetary scale for evaluations (which is absent for choices), or whether it is confounded by the differences between evaluations of single lotteries, where a numerical estimate needs to be provided, and actual binary choices. For instance, people have notorious difficulties dealing with probabilities, hence it is to be expected that the default (in the absence of a monetary scale) is a larger number of fixations on probabilities than on the easy-to-understand outcomes, rather than an equal distribution, and these differences could interact with whether a choice or an evaluation is being made.

We aim to provide additional evidence in the form of a direct comparison across different evaluation methods, while also confirming the results of Kim et al. (2012). That is, our hypotheses are that monetary amounts should be fixated more than probabilities in an evaluation phase where a monetary scale is used, compared to the choice phase (as in Kim et al., 2012), but also compared to a different evaluation phase where a monetary scale is absent. Confirming both hypotheses (evaluations vs. choices and monetary vs. non-monetary evaluations) would provide concurrent evidence for the link between overweighting and visual attention. For this purpose, we chose a second treatment where the monetary evaluation is replaced with the elicitation of an ordinal ranking within a small subset of lotteries. The reason is twofold. On the one hand, this treatment is a straightforward implementation of a ranking (as opposed to a monetarily-framed rating) which requires no reference to monetary values at all. On the other hand, it has been previously shown that this evaluation method shuts down the preference reversal phenomenon and, instead, elicits a “reversal of the preference reversal phenomenon” (Casey, 1991; Bateman et al., 2007; Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020) where the rate of unpredicted reversals exceeds the rate of predicted ones. Thus, this is a natural choice for a comparison treatment where overweighting can be assumed to be less relevant or nonexistent (see Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020, for details).

Our treatment comparison is related to the study of Rubaltelli et al. (2012), who analyzed fixations on gambles which were evaluated according to two different methods (within subjects). Their study did not include a choice phase (and hence preference reversals cannot be observed), and decisions were not incentivized. However, their evaluation treatments conceptually parallel ours. The first was a pricing-based evaluation similar to ours, with the difference that they used Willingness To Pay instead of Willingness To Accept. The second asked subjects to evaluate gambles using levels of attractiveness (−5=“very unattractive” to 5=“very attractive”). Although this is not a purely-ordinal ranking task like ours, the abstract rating, though numerical, is in principle void of monetary content. Subjects fixated more on outcomes than probabilities when pricing the lotteries, but there were no differences when rating them according to attractiveness. Although the tasks are very different and not representative of the ones used in the literature on the standard reversal phenomenon, this is in line with the hypothesis that the overweighting of monetary valuations predicted by the Compatibility Hypothesis should correspond to an increased visual attention on outcomes during monetary evaluations.

Finally, we complement the demonstration of an attentional shift across different evaluation modes at the aggregate level with evidence for the role of attention at the level of individual decisions. Following Alós-Ferrer et al. (2020), we included an additional, independent block of decisions under risk in the experiment, allowing for an out-of-sample estimation of individual utilities and certainty equivalents. For the treatment with monetary evaluations, we then obtain a quantitative measure of overpricing, in the form of the difference between the certainty equivalent and the elicited valuation. We then relate this measure of overpricing to visual attention by examining the effect of fixations on each lottery on the corresponding overpricing. The effects are admittedly modest, but we do find that attention on $-bets is associated with their overpricing, in line with the basic interpretation that increased attention boosts value. In contrast, attention on P-bets has no effect.

Our study belongs to the growing literature directly examining eye-tracking measurements in the social sciences. This technique is relatively common in psychology and neuroscience, but has only recently gained popularity for the study of individual decisions under risk (e.g., Glöckner & Herbold, 2011; Ludwig et al., 2020). Most of the recent studies in this and related fields target gaze and fixation data to study search patterns or processes of information acquisition (e.g., Knoepfle et al., 2009; Reutskaja et al., 2011; Polonio et al., 2015; Devetag et al., 2016; Polonio & Coricelli, 2019). Exceptions are Wang et al. (2010), who (in addition to fixation patterns) examined pupil dilation in sender-receiver games and found larger pupil dilation when deceiving messages were sent, and Alós-Ferrer et al. (2019b), who used pupil dilation as an indicator for cognitive effort in a Bayesian Updating task with varying incentives.

There are, of course, many other types of preference reversals in the literature, where two different choices stand in contradiction with a normative prediction. Other prominent examples are the asymmetric dominance or “decoy” effect (Huber et al., 1982; Pettibone, 2012), the compromise effect (Simonson, 1989), and the similarity effect (Tversky, 1972). In an eye-tracking experiment, Noguchi & Stewart (2014) investigate these effects in consumer choice tasks and conclude that they might be compatible with choices arising from a series of single-attribute comparisons. This view is conceptually aligned with ours in the sense that the relative weight of comparisons along different attributes is at the root of the respective effects.

The paper is structured as follows. Section 2 presents the experimental design in detail. Section 3 discusses the behavioral and eye-tracking results for the treatment comparisons. Section 4 discusses the utility estimation, the derivation of an overpricing measure, and its relation to attention data. Section 5 concludes. The Appendix includes a detailed description of the random utility model estimation procedure (Appendix A), a list of all lottery pairs used in the experiment (Appendix B), translated instructions (Appendix C), and example screenshots of the experiment (Appendix D).

2  Experimental Design and Procedures

Our dataset encompasses a total of 59 subjects (31 females, average age 22.6 years), who were measured in individual sessions.1 Individual sessions lasted 48 minutes on average, and subjects earned an average of 15.86 Euro (SD=9.86), plus a 4 Euro show-up fee. Subjects were recruited from the student population of the University of Cologne using ORSEE (Greiner, 2015), excluding students majoring in psychology and economics (who could have been taught about the preference reversal phenomenon), and subjects who had previously participated in similar experiments (involving lottery choice). The experiment was programmed in PsychoPy (Peirce, 2007). There were two treatments, Price and Rank, with 30 and 29 subjects, respectively.

2.1  Design

The experiment followed closely the general setup of behavioral experiments on preference reversals (e.g., Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020), with suitable modifications to accommodate eye-tracking measurements. Our intention was to establish attentional shifts within the classical paradigm without adding any potential confounds, and specifically compare it to the ranking design where the reversal of the preference reversal phenomenon has been elicited. We now describe the paradigm, the treatments, and the adjustments needed for eye-tracking measurements.

The experiment comprised three phases. The first and shortest one consisted of choices between 36 lottery pairs, unrelated to the P- and $-bets used in the subsequent two parts.2 Of these choices, 32 were used for the estimation of individual preferences out of sample, which is relevant for the analysis in Section 4 below, and the remaining 4 were used to check for dominance violations.3

The main part of the experiment consisted of the second and third phases, which taken together correspond to a standard preference reversal experiment, except for the fact that eye-tracking data was collected. The second was the evaluation phase, in which we elicited the subjects’ valuations for 60 P-bets and 60 $-bets. In the Price treatment, subjects stated their willingness-to-accept (WTA) valuations for each lottery. Specifically, they were asked to state their minimal selling price for each of the 120 lotteries. Each lottery was presented on a separate screen. All lotteries were of the form A = (p, x), that is, A pays an amount x with probability p and zero otherwise. Subjects’ WTAs were limited to the range [0, x]. In the Rank treatment, we aimed to obtain ordinal evaluations as in Bateman et al. (2007) or Alós-Ferrer et al. (2016). The same 120 lotteries used in the Price treatment were presented in blocks of six, and subjects assigned ranks to them from their most (rank 1) to their least preferred option (rank 6) according to how much they desired to play each lottery. Each block contained three P-bets and three $-bets. To ensure comparability between treatments, the lotteries in the Price treatment were also presented in 20 “rounds,” separated by screens announcing the next round. Each such round consisted of six lotteries presented sequentially, with the set of lotteries in a round corresponding to one block in the Rank treatment.4
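As a schematic illustration of this block structure and of the WTA constraint (a sketch with hypothetical lottery values; the actual 120 lotteries are listed in Appendix B):

    import random

    # A lottery (p, x) pays x Euros with probability p and zero otherwise.
    p_bets = [(0.94, 2.5), (0.90, 3.0), (0.85, 4.0)] * 20    # hypothetical values
    d_bets = [(0.30, 9.0), (0.25, 12.0), (0.20, 16.0)] * 20  # hypothetical values
    random.shuffle(p_bets)
    random.shuffle(d_bets)

    # 20 blocks (rounds) of six lotteries, each with three P-bets and three $-bets.
    blocks = [p_bets[3 * i:3 * i + 3] + d_bets[3 * i:3 * i + 3] for i in range(20)]

    # In the Price treatment, a stated WTA for lottery (p, x) must lie in [0, x].
    def valid_wta(lottery, wta):
        p, x = lottery
        return 0 <= wta <= x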

The last phase was a choice task, identical across treatments.5 Subjects faced again the lotteries from the evaluation phase, now presented in 60 pairs, each consisting of a $-bet and a P-bet. For each of the 60 pairs, subjects were asked to choose which lottery they preferred to play. Pairs were constructed in such a way that a block in the second phase contained exactly three of the pairs used in the third phase, but the order of presentation of the pairs was randomized (for ease of implementation, each subject was randomly assigned to one of four different, pre-randomized sequences of lottery pairs).

In all parts of the experiment, lotteries were presented in the form of two framed boxes stacked vertically, one showing the outcome and one showing the probability of the lottery. The position of the two boxes (i.e., whether the monetary amount was on top or not) was counterbalanced across subjects. This presentation ensured a physical separation of the different dimensions of a lottery allowing us to clearly distinguish the areas of interest for the eye-tracking analysis. Appendix C and Appendix D show the instructions and screenshots of the experiment, respectively. The screen position (left or right) of lotteries within pairs was also counterbalanced within subjects, with half of the pairs displaying the $-bet on the right.

After the three phases described above, subjects were asked to complete a short questionnaire eliciting various demographics (gender, age, field of studies) and numerical literacy (Lipkus et al., 2001). There was no feedback during the course of the experiment, that is, subjects did not receive any information regarding their earnings until the very end of the experiment. All decisions were made independently and at a subject’s individual pace.

After the questionnaire, for each subject, one randomly-chosen lottery from each phase was selected, played, and paid. For the first and third phases, one of the lottery pairs in the corresponding phase was randomly selected and the lottery chosen by the subject was played out. The second phase used the (incentive-compatible) Ordinal Payment Method (Goldstein & Einhorn, 1987; Tversky et al., 1990; Cubitt et al., 2004). Specifically, the computer selected one block at random, and then randomly selected two of the six lotteries in the block. The one that the subject had priced or ranked higher was then played out. We opted for this incentive scheme instead of the Becker-DeGroot-Marschak procedure because the latter is often found to be noisier (see, e.g., Alós-Ferrer et al., 2016). The total payoff from the experiment was the sum of the amounts received in each phase, plus a lab-mandated show-up fee.
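For illustration, the selection logic of the Ordinal Payment Method described above can be sketched as follows (a simplified sketch; the lottery representation and function names are our own, and payment bookkeeping is omitted):

    import random

    def opm_payout(blocks, evaluation, better):
        """blocks: list of 6-lottery blocks from the evaluation phase.
        evaluation: dict mapping a lottery (p, x) to its stated price or rank.
        better: comparison returning True if the first evaluation beats the second."""
        block = random.choice(blocks)               # pick one block at random
        a, b = random.sample(block, 2)              # pick two lotteries from it
        chosen = a if better(evaluation[a], evaluation[b]) else b
        p, x = chosen                               # lottery of the form (p, x)
        return x if random.random() < p else 0.0    # play out the preferred lottery

In the Price treatment, better would be a greater-than comparison on stated prices; in the Rank treatment, a smaller-than comparison on rank numbers (rank 1 being the most preferred).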

2.2  Eye-tracking Setup

Visual fixations were measured using an SMI RED500 remote eye tracker. The subject’s head was supported by a chin-rest minimizing random movement. Subjects were placed 55 cm in front of a 22" screen which showed the stimuli with a resolution of 1680×1050 pixels. The pupil was recorded at 250 Hz using iView X software, version 2.8.43. The eye tracker was calibrated at the beginning of each part (after instructions) using a 5-point calibration routine. Blinks were removed after data collection using the tools provided by SMI. The raw data files were converted to fixations using the SMI IDF converter tool 3.0.16. To identify fixations, the SMI IDF converter tool uses a dispersion-based algorithm with a minimum fixation duration of 50 ms (Glöckner & Herbold, 2011; Glöckner et al., 2012) and a maximum dispersion of 85 pixels (see Salvucci & Goldberg, 2000, for a comparison of different methods). Non-overlapping Areas of Interest (AOIs) were defined around every piece of information (160×95 pixels per AOI).6 After collection of the data, fixations were corrected using an algorithm similar to that of Vadillo et al. (2015), and the number and duration of fixations were computed and recorded.
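For intuition, a dispersion-based identification in the spirit of the algorithm described above can be sketched as follows (the 50 ms and 85 pixel thresholds mirror the ones reported; everything else, including the definition of dispersion as x-range plus y-range, is a simplifying assumption and not SMI’s exact implementation):

    def idt_fixations(t, x, y, min_dur=0.050, max_disp=85.0):
        """Group gaze samples (t in seconds, x/y in pixels) into fixations:
        a window counts as a fixation if it spans at least min_dur and its
        dispersion (x-range + y-range) stays at or below max_disp pixels."""
        def dispersion(i, j):
            xs, ys = x[i:j + 1], y[i:j + 1]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        fixations, i, n = [], 0, len(t)
        while i < n:
            j = i
            while j < n - 1 and t[j] - t[i] < min_dur:   # initial window >= min_dur
                j += 1
            if t[j] - t[i] < min_dur:                    # not enough samples left
                break
            if dispersion(i, j) <= max_disp:
                while j < n - 1 and dispersion(i, j + 1) <= max_disp:
                    j += 1                               # grow the fixation window
                k = j - i + 1
                fixations.append((t[i], t[j],                          # onset, offset
                                  sum(x[i:j + 1]) / k, sum(y[i:j + 1]) / k))  # centroid
                i = j + 1                                # continue after the fixation
            else:
                i += 1                                   # slide the window start
        return fixations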

3  Results: Attentional Shifts

We first present the purely-behavioral results to establish the presence of the preference reversal phenomenon (in the Price treatment) or its opposite (in the Rank treatment) as expected. Then we turn to our actual variables of interest, and examine attentional processes through eye-tracking data in two different subsections. Our main result in this section demonstrates an attentional shift toward outcomes (relative to probabilities) for monetary evaluations (Price treatment), compared to rankings (Rank treatment). This is accompanied by an attentional shift toward $-bets (relative to P-bets) for monetary evaluations, which is natural as $-bets involve larger outcomes.

The analysis in this section is based on subject averages (e.g., average number of fixations on the lotteries’ outcomes, computed at the subject level). All between-subject comparisons (across treatments) are made with Mann-Whitney-Wilcoxon (MWW) tests. All within-subject comparisons (differences between the choice and evaluation phases) are made with Wilcoxon-Signed-Rank (WSR) tests.
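A minimal sketch of this testing scheme using SciPy (the variable names are placeholders for our subject-level averages, not the actual analysis scripts):

    from scipy.stats import mannwhitneyu, wilcoxon

    # Each array holds one value per subject (e.g., the subject's average number
    # of fixations on outcomes, or an outcome/probability fixation ratio).
    def within_subject_test(avg_outcomes, avg_probabilities):
        # WSR test: paired comparison within the same subjects
        return wilcoxon(avg_outcomes, avg_probabilities)

    def across_treatment_test(values_price, values_rank):
        # MWW test: independent samples from the two treatments
        return mannwhitneyu(values_price, values_rank, alternative="two-sided")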

3.1  Behavioral Results

Figure 1 illustrates the behavioral results. In the Price treatment, the $-bet was evaluated higher than the P-bet in 68.78% of cases, but it was only chosen 29.17% of the time (WSR test, N=30, z=4.711, p<0.0001; Figure 1, left panel, left).7 This is a first reflection of the preference reversal phenomenon. In contrast, but as expected, in the Rank treatment the $-bet was ranked better than the paired P-bet in 24.89% of cases, but was chosen over the P-bet in 33.22% of cases. This difference is highly significant (WSR test, N=29, z=−3.691, p=.0002; Figure 1, left panel, right) and goes in the opposite direction of the Price treatment, reflecting the “reversal of the preference reversal phenomenon” which is characteristic of ordinal treatments (Bateman et al., 2007; Alós-Ferrer et al., 2016; Alós-Ferrer et al., 2020).


  
Figure 1: Left: Proportion of $-Bets preferred over the paired P-bets for both treatments and both phases. Right: Proportion of predicted and unpredicted reversals for both treatments.

Overall, we found 36.00% of reversals (of both types) in the Price treatment, and 18.45% in the Rank treatment. That is, ranking evaluations reduced the overall amount of preference reversals (MWW test, N=59, z=3.689, p=.0002). Crucially, and also as expected, it changed the dominant type of reversals. This is illustrated in the right-hand panel of Figure 1, which displays the reversal rates classified as predicted or unpredicted reversals.8 In the Price treatment, predicted reversals (46.80%) were far more frequent than unpredicted ones (5.00%; WSR test, N=27, z=4.324, p<.0001). The opposite pattern was observed in the Rank treatment, where predicted reversals were far less frequent (8.61%) than unpredicted ones (46.42%; WSR test, N=29, z=−4.227, p<.0001). The first observation reflects the well-established preference reversal phenomenon, while the second reflects its reversal as in Alós-Ferrer et al. (2016). Of course, we also observe more predicted reversals in the Price than in the Rank treatment (MWW, N=59, z=5.573, p<.0001), and fewer unpredicted ones (MWW, N=56, z=−5.652, p<.0001).

In summary, our behavioral data reflect the well-established preference reversal phenomenon and the previously-observed fact that this phenomenon is reversed if evaluations involve rankings instead of pricing. We now turn to eye-tracking data to study the attentional processes underlying preference reversals.

3.2  Attention Across Attributes


Figure 2: Average number of fixations on outcomes and probabilities in the choice and evaluation phases, for the Price treatment (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the outcome/probability ratios for the number of fixations in the evaluation phases of both treatments (one outlier outside the picture).

Consider the Price treatment first. Figure 2 (left-hand panel) displays the individual-level average number of fixations (across lotteries) per attribute (outcome and probability) in each phase of this treatment.9 In the choice phase there were fewer fixations on outcomes (average 6.18 fixations on outcomes per lottery) than on probabilities (7.28; WSR, N=30, z=−2.643, p=.0082). This suggests that the default level of attention is larger for probabilities than for outcomes, which is compatible with the view that human beings generally find the former less intuitive than the latter. This is also illustrated in the heatmap in Figure 3. However, this difference disappears in the evaluation phase, where there was no significant difference between the number of fixations on outcomes (15.12) and probabilities (13.71; WSR, N=30, z=1.090, p=.2756). To show that the difference across phases is significant, we computed the individual-level difference in the average number of fixations on outcomes and on probabilities. This difference was significantly different across phases (WSR, N=30, z=2.705, p=.0068). This is consistent with the Compatibility Hypothesis, which suggests that the level of attention to outcomes should increase for the (monetarily-framed) evaluation phase compared to the choice phase. It is also aligned with the results of Kim et al. (2012), who, however, used a different experimental implementation. Since choices and evaluations differ in fundamental ways, though, to test the Compatibility Hypothesis we now compare the results to those of the Rank treatment, where the monetary scale was absent during evaluation.


Figure 3: Heatmap for the choice phase (Price treatment). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. The heatmap is obtained by convolving the fixations (of all individuals and lotteries) with an isotropic bidimensional Gaussian function. The standard deviation of the Gaussian function was set according to Le Meur & Baccino (2013). In the actual choice screen, the lotteries were further apart and not labeled, and both the left-right position of lotteries and the top-bottom alignment of outcomes and probabilities were counterbalanced. Actual screenshots are depicted in the Appendix. The figure illustrates that, in general, more attention is devoted to probabilities than to outcomes. The analogous picture for the Rank treatment displays similar features for the choice phase.

In particular, the enhanced focus on outcomes should be absent in the Rank treatment. This is indeed borne out by the data (Figure 2, center panel). In this treatment, subjects fixated more on probabilities than on outcomes both in the choice phase (outcomes, 6.56; probabilities, 8.18; WSR, N=29, z=−3.708, p=.0002) and in the ranking phase (outcomes, 16.24; probabilities, 22.30; WSR, N=29, z=−4.444, p<.0001). That is, there is no attentional shift toward outcomes across phases in this treatment. Rather, probabilities are attended to more in both phases. The difference between the average number of fixations on outcomes and on probabilities was significantly different across phases (WSR, N=29, z=−4.444, p<.0001), which is not surprising since there are many more fixations in the ranking phase, hence the difference in fixations between probabilities and outcomes is even larger in the evaluation phase.

Our main result in this section, though, concerns the comparison across treatments. The very different setups of the evaluation phases of the two treatments make a direct comparison of the number of fixations difficult. We therefore consider the outcome/probability ratios for the number of fixations in the evaluation phases of both treatments. The ratio indicates how visual attention in each evaluation phase was allocated to the two attributes, i.e., ratios above 1 mean a stronger focus on outcomes and ratios below 1 a stronger focus on probabilities. This approach allows a simple, intuitive comparison of attention allocation across the Price and Rank treatments. The ratios show a strong shift in attention across treatments (Figure 2, right-hand panel). The outcome/probability ratio was 1.42 in the Price treatment and only .73 in the Rank treatment. That is, there was a significant shift in attention toward the outcome in the Price treatment (MWW test, N=59, z=5.003, p<.0001) compared to the Rank treatment.10 These results confirm that pricing-based evaluations induce a stronger attentional focus on monetary outcomes.

For our purposes, it was important that our Price treatment reproduced standard preference reversal experiments as carried out in the extensive literature on this phenomenon. Unfortunately, this means that the visual layout in the evaluation phase of the Rank treatment must differ from the one in the Price treatment, simply because the former presents several lotteries at once. This criticism could also be leveled at the comparison between the choice and the evaluation phase, and hence also at the analysis in Kim et al. (2012). To ameliorate this difficulty, however, the presentation of each individual lottery was identical across treatments, including stimuli size.11 Still, it could be argued that the differences in layout beyond the individual lotteries might lead to potential confounds (Orquin & Holmqvist, 2018). In particular, the presence of multiple lotteries might have increased cognitive load and resulted in more dispersed attention. This would result in attention being more uniformly distributed across the attributes. However, the opposite is true (see Figure 2), and thus we can rule out this alternative explanation. As reported above, in the Rank treatment, there were significantly more fixations on probabilities than on outcomes (22.30 vs. 16.24), while the difference in the Price treatment (13.71 vs. 15.12) was not significant.

Some lotteries have single-digit outcomes only (mostly P-bets), which could possibly be perceived through peripheral vision.12 As a robustness check, we reran the analysis using only the $-bets. Since this still includes a few $-bets with single-digit outcomes and excludes a few P-bets with two-digit outcomes, we also ran further robustness checks excluding all single-digit-outcome lotteries. All results remained qualitatively the same and some were strengthened. Outcomes were now fixated significantly more often than probabilities in the evaluation phase of the Price treatment. We also conducted a further robustness analysis classifying fixations differently, namely counting consecutive fixations in the same AOI as one. The results remained qualitatively unchanged.

Although the analysis above focuses on fixations, it has to be acknowledged that the differences in layouts increase the number of transitions in the evaluation phase of the Rank treatment simply because there are more areas of interest (multiple lotteries), which then also leads to an overall increase in fixations. Indeed, 37.94% of all transitions across AOIs in this phase were across lotteries. This might raise the concern that fixations arising from across-lottery comparisons might differ from other fixations and create a confound in our results. A similar point affects the comparison of fixations between phases (choices vs. evaluations), both in our treatments and in (Kim et al., 2012). To address this concern, we carried out a robustness analysis as follows. In the evaluation phase of the Rank treatment, 38.51% of all transitions start and end within the same AOI (either the outcome or the probability of a lottery). In the evaluation phase of the Price treatment, the corresponding number is 58.63%. Thus, we repeated the entire analysis using these AOI-internal transitions instead of fixations. For this analysis, we also used AOI-internal transitions for the choice phases. This dependent variable then ignores all transitions across lotteries and should more closely reflect attention within a lottery, and relies only on transitions unaffected by the different screen layouts across the treatments and across the phases within a treatment.
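A minimal sketch of how such AOI-internal transitions can be counted from a trial’s chronological sequence of fixated AOIs (the AOI labels below are illustrative, not our actual coding scheme):

    from collections import Counter

    def internal_transitions(aoi_sequence):
        """Count transitions that start and end in the same AOI, given the
        chronological list of fixated AOIs for one trial, e.g.
        ['out_A', 'out_A', 'prob_A', 'out_B', ...]."""
        counts = Counter()
        for prev, curr in zip(aoi_sequence, aoi_sequence[1:]):
            if prev == curr:             # transition within one AOI
                counts[curr] += 1        # credit it to that AOI
        return counts                    # e.g. Counter({'out_A': 1})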

All tests reported above remained qualitatively unchanged with this new analysis, both for the comparison of phases within the treatments and for the comparison of evaluations across treatments. In the Price treatment, there were fewer internal transitions on outcomes (average 1.28) than on probabilities (1.66; WSR, N=30, z=−2.479, p=.0132) in the choice phase, but there was no significant difference in the evaluation phase (4.93 vs. 4.26; WSR, N=30, z=0.956, p=.3388). The individual-level difference in the average number of internal transitions on outcomes and on probabilities was significantly different across phases (WSR, N=30, z=2.314, p=.0207). In the Rank treatment, there were fewer internal transitions on outcomes than on probabilities both in the choice phase (1.39 vs. 1.95; WSR, N=29, z=−3.503, p=.0005) and in the ranking phase (3.07 vs. 4.60; WSR, N=29, z=−4.152, p<.0001), and the difference between the average number of internal transitions on outcomes and on probabilities was significantly different across phases (WSR, N=29, z=−4.357, p<.0001). For the treatment difference, we computed the outcome/probability ratio for the number of internal transitions in the evaluation phases of both treatments, which was 1.71 in the Price treatment and only .693 in the Rank treatment. That is, as in the case of fixations, there was a significant shift in attention toward the outcome in the Price treatment (MWW test, N=59, z=4.427, p<.0001) compared to the Rank treatment.

We further investigated the effect of attribute values (actual outcomes and probabilities) on attention by conducting random effects panel regressions with robust standard errors for the (log-transformed) outcome/probability fixation ratios. Again, we interpret this variable as the level of attention on outcomes compared to probabilities. The regressions use the individual-level fixations for the evaluations of all lotteries during the evaluation phase for both treatments. Table 1 displays the results. The Rank treatment dummy is negative and highly significant in all three models, indicating an attentional shift toward probabilities compared to outcomes in that treatment, in agreement with the results reported above. The $-bet dummy is positive and highly significant, indicating a shift toward outcomes for $-bets compared to P-bets in the Price treatment (since the interaction term is included). The effect is negative and significant in all three models for the Rank treatment (linear combination test, β=−0.1084, −0.1850, and −0.1246, respectively), indicating a shift toward probabilities for $-bets in this treatment. In addition, we observe that the Outcome coefficient (monetary amount of the non-zero outcome of the lottery) is positive and highly significant in Models 2 and 3, demonstrating that a larger outcome results in a stronger shift toward outcomes compared to probabilities. Analogously, the Probability coefficient is positive, but misses significance at the 5% level (Model 3, p=.0796).


Table 1: Random Effects Panel Regression of the (log-transformed) Outcome/Probability Fixation Ratios.

Dependent variable: ln(# Fix. Outcome / # Fix. Probability)

                    Model 1       Model 2       Model 3
Rank Treatment     -0.2905***    -0.2896***    -0.2900***
                   (0.0710)      (0.0709)      (0.0709)
$-bet               0.2082***     0.1311***     0.1914***
                   (0.0342)      (0.0328)      (0.0510)
Rank × $-bet       -0.3165***    -0.3161***    -0.3160***
                   (0.0475)      (0.0475)      (0.0475)
Outcome                           0.0079***     0.0093***
                                 (0.0022)      (0.0022)
Probability                                     0.1596
                                               (0.0910)
Constant           -0.0307       -0.0785       -0.2134*
                   (0.0534)      (0.0583)      (0.1015)
R2                  0.1112        0.1139        0.1144
Wald Test          88.00***      98.66***     102.70***
Observations       6768          6768          6768

Standard errors in parentheses; * p<0.05, ** p<0.01, *** p<0.001
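For illustration, a specification like the one in Table 1 could be estimated along the following lines (a sketch using the linearmodels package on a subject-by-trial panel; the data frame and column names are placeholders, and trials with zero fixations on either attribute would need separate handling before taking logs):

    import numpy as np
    import pandas as pd
    from linearmodels.panel import RandomEffects

    def fit_ratio_model(df: pd.DataFrame):
        """df: one row per subject x evaluated lottery, with columns 'subject',
        'trial', 'fix_out', 'fix_prob', 'rank_treat' (0/1), 'dollar_bet' (0/1),
        'outcome', and 'probability' (hypothetical names)."""
        df = df.copy()
        df["log_ratio"] = np.log(df["fix_out"] / df["fix_prob"])
        panel = df.set_index(["subject", "trial"])   # entity-time panel index
        model = RandomEffects.from_formula(
            "log_ratio ~ 1 + rank_treat + dollar_bet + rank_treat:dollar_bet"
            " + outcome + probability",
            data=panel,
        )
        return model.fit(cov_type="robust")          # robust standard errors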

3.3  Attention Across Lottery Types


Figure 4: Number of fixations on the $-bet and P-bet in the choice and evaluation phases for the Price treatment (left-hand panel) and the Rank treatment (center panel). The right-hand panel presents violin plots for the $-bet/P-bet ratios of fixations in the evaluation phases of both treatments.


Since $-bets involve larger outcomes than P-bets, an attentional shift across treatments should also be reflected in attentional differences between lottery types. In this subsection, we focus on this comparison. Figure 4 (left-hand panel) displays the individual-level average number of fixations per lottery in the Price treatment, separately for the evaluation and choice phases. For this treatment, when subjects were asked to generate a price (WTA) for a lottery, they fixated $-bets more (16.38) than P-bets (12.44; WSR, N=30, z=4.094, p<.0001). This can also be seen in the heatmap in Figure 5. In contrast, there were no significant differences in the number of fixations in the choice phase ($-bets, 6.82; P-bets, 6.64; WSR, N=30, z=0.391, p=.6959). To show that the difference across phases is significant, we computed the individual-level difference in the average number of fixations between $-bets and P-bets, which was larger for the pricing phase than for the choice phase (WSR, N=30, z=4.001, p<.0001). In summary, during the pricing phase subjects fixated more on the $-bets than on the P-bets, while in the choice phase both lotteries were given similar levels of attention. This is in agreement with Kim et al. (2012), who found more fixations on $-bets than on P-bets in their bidding phase, and the opposite in their choice phase. It would be tempting to interpret these results as evidence for the Compatibility Hypothesis. This, however, would be unwarranted. The reason is that similar effects are obtained in the Rank treatment, whose evaluation phase involved no monetary scale.


Figure 5: Heatmap for the evaluation phase (Price treatment). Red spots represent the most visually salient areas of the screen. The least salient areas (dark blue spots) were eliminated from the heatmap for better visualization. Lotteries were evaluated individually and are presented here side-by-side for ease of comparison only. Below the lottery was the input field for the monetary evaluation (not part of the AOIs for the analysis). Actual screenshots are depicted in the Appendix. The figure illustrates that, in this treatment, more attention was devoted to $-bets than to P-bets during monetary evaluation.

Indeed, in the Rank treatment (Figure 4, center panel), there were also significant differences in fixations between $-bets (20.79) and P-bets (17.74) in the ranking phase (WSR, N=29, z=4.508, p<.0001). Subjects fixated slightly more on $-bets (7.57) than on P-bets (7.17) in the choice phase, but, as in the Price treatment, the comparison was not significant (WSR, N=29, z=1.838, p=.0661). The individual-level difference in the average number of fixations between $-bets and P-bets was significantly different across phases (WSR, N=29, z=4.098, p<.0001). That is, there are also significant differences in attention across lottery types in the Rank treatment, despite the absence of the monetary scale on which the Compatibility Hypothesis relies. To test the latter, we need to focus on the comparison across treatments.

Therefore, we consider the $-bet/P-bet ratios of fixations in the evaluation phases of both treatments. The ratio indicates how visual attention in each evaluation phase was allocated, i.e., ratios above 1 mean a stronger focus on $-bets and ratios below 1 a stronger focus on the P-bets. There was indeed a significant shift in attention toward the $-bet in the Price treatment (Figure 4, right-hand panel): the average $-bet/P-bet ratio was 1.30 in the Price treatment, and only 1.17 in the Rank treatment (MWW test, N=59, z=2.108, p=.0351).13 These results confirm that monetary valuations (WTA) lead to a stronger attentional focus on the $-bets, the lotteries whose predominant feature is the large monetary outcome.

4  Results: Attention and Overpricing

In the previous section, we relied on comparisons across treatments to demonstrate the existence of an attentional shift toward outcomes brought about by a monetary focus during certain evaluation tasks. In this section, we report a complementary analysis by examining the relation between the overpricing of lotteries and gaze data as measured by fixations. The argument is that, by focusing attention on higher outcomes during monetary evaluations, lotteries with particularly large outcomes ($-bets) will become overpriced, resulting in a larger number of instances where the $-bet is valued above the P-bet but the latter is chosen. Thus, the objective here is to link overpricing in the Price treatment at the lottery level with attention as revealed by fixation data. However, we view this analysis as a proof of concept, since it is unlikely that the effects of attention on the exact, monetary overpricing of lotteries can be reduced merely to the number of fixations.14

To accomplish this, a measure of overpricing is needed. For this reason, our design included an initial phase with 32 lottery pairs, independent of the preference reversal design, which served the purpose of providing an out-of-sample estimation of the subjects’ individual preferences (following Alós-Ferrer et al., 2020). The main goal of this estimation was to obtain individual utility functions and certainty equivalents which can be used to quantify the overpricing of lotteries in the evaluation phase of the Price treatment, and relate it to attention as measured by fixation data. In our opinion, the natural choice is to conduct this estimation out of sample, using (unrelated) binary choices. This is precisely what we chose to do, using an independent set of lotteries that covers the entire range relevant for the preference reversal experiment. In the first subsection below, we briefly describe the estimation. We then turn to a regression analysis relating the overpricing measure derived from the estimated certainty equivalents to fixation data.

4.1  Utility Estimation

The choices in the first part of the experiment were used for the estimation of individual preferences out of sample, in the sense that the estimation relied exclusively on the choices in this first part, but was used as an external measure to analyze the data in the following two parts. The set of lotteries used in the first phase (see Appendix B) was constructed to maximize the precision of the estimated risk attitudes, relying on optimal design theory (Silvey, 1980) in the context of non-linear (binary) models (Ford et al., 1992; Atkinson, 1996), and following Moffatt (2015).15 We assume that the structure of errors follows an additive random utility model (e.g., Thurstone, 1927; Luce, 1959; McFadden, 2001) with normally-distributed noise. The estimation procedure employs well-established techniques as used in many recent contributions (Von Gaudecker et al., 2011; Conte et al., 2011; Moffatt, 2015). We refer the interested reader to Appendix A for a more detailed description of the estimation procedure.

For the functional form of the utilities, we assumed a constant relative risk aversion (CRRA) power utility function given by

u(x) = x^r

with r>0. The average of the estimated individual risk propensities in our data set is r=0.508 (median 0.440, SD 0.290).16
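For intuition, a compact probit-style version of such a random utility estimation can be sketched as follows (a simplified illustration of the general approach, not the exact procedure of Appendix A; the function names and the parameterization of the noise are our own assumptions):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_lik(params, choices):
        """choices: list of (pA, xA, pB, xB, chose_A) tuples for one subject,
        where each lottery pays x with probability p and zero otherwise."""
        r, sigma = params
        if r <= 0 or sigma <= 0:
            return np.inf                              # keep parameters admissible
        ll = 0.0
        for pA, xA, pB, xB, chose_A in choices:
            diff = pA * xA**r - pB * xB**r             # EU(A) - EU(B) with u(x) = x^r
            prob_A = norm.cdf(diff / sigma)            # additive normal noise on utility
            prob_A = np.clip(prob_A, 1e-10, 1 - 1e-10)
            ll += np.log(prob_A if chose_A else 1 - prob_A)
        return -ll

    def estimate_r(choices, start=(0.5, 1.0)):
        """Maximum likelihood estimate of (r, sigma) from one subject's choices."""
        result = minimize(neg_log_lik, start, args=(choices,), method="Nelder-Mead")
        return result.x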

4.2  Overpricing

For each of the 30 subjects that participated in the Price treatment, we collected pricing decisions for 120 lotteries in the evaluation phase. We now use the individual preferences estimated from the first part of the experiment to calculate, for each individual, the certainty equivalent (CE) of each lottery. For a lottery A and subject i, let EU_i(A) be the corresponding expected utility of A for i. The certainty equivalent is defined as CE_i(A) = u_i^{-1}(EU_i(A)) and derived from subject i’s utility function u_i estimated in the first part of the experiment. The certainty equivalent is the formal translation of monetary evaluation questions, namely the amount of money received for certain that leaves the decision maker indifferent between accepting it and playing out the lottery.

For each lottery A and each subject i, define overpricing by

O_i(A) = WTA_i(A) − CE_i(A).

That is, O_i(A) is the difference between the stated price and the certainty equivalent for that lottery; it is hence measured in monetary units (Euros) and thus fully comparable across lotteries and subjects. To examine overpricing differences in a straightforward way at the population level, consider the average overpricing for each lottery across all subjects in the Price treatment. Average overpricing for $-bets was € 4.414, compared to only € 1.181 for P-bets (MWW test, N=120, z=9.127, p<.0001). This result documents a systematic overpricing of $-bets, in line with the predictions of Tversky et al. (1990).
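As a purely hypothetical numerical illustration (the lottery and price below are not taken from the experiment): for a subject with estimated r = 0.5 facing a lottery A = (0.36, €16), the expected utility is EU_i(A) = 0.36 · 16^0.5 = 1.44, so CE_i(A) = 1.44^(1/0.5) ≈ €2.07; if that subject states a minimal selling price of €5, the implied overpricing is O_i(A) = 5 − 2.07 = €2.93.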


Table 2: Random Effects Panel Regression of Overpricing on Fixations.

Dependent variable: Overpricing

                          $-Bets                              P-Bets
                    Model 1    Model 2    Model 3    Model 4    Model 5    Model 6
# Fix. Outcome      0.0342**   0.0258*    0.0259*    0.0092     0.0097     0.0096
                    (0.0121)   (0.0105)   (0.0105)   (0.0065)   (0.0062)   (0.0063)
# Fix. Probability             0.0233**   0.0229**              -0.0009    -0.0010
                               (0.0088)   (0.0087)              (0.0043)   (0.0043)
Constant            4.1080***  4.0101***  7.6711*    1.1242***  1.1272***  1.3342
                    (0.4936)   (0.4959)   (3.7169)   (0.1640)   (0.1693)   (1.3004)
Controls            No         No         Yes        No         No         Yes
R2                  0.0000     0.0013     0.0429     0.0010     0.0006     0.0488
Wald Test           7.93***    14.95***   23.58***   1.99       2.43       5.08
Observations        1800       1800       1800       1800       1800       1800

Standard errors in parentheses; * p<0.05, ** p<0.01, *** p<0.001.

Our design also allowed us to relate overpricing to visual attention by comparing the quantities Oi(A) to visual fixations in panel regressions. Of course, this paints an incomplete picture, since the effects of attention on elicited prices are likely to be more subtle than a direct, linear relation between number of fixations on a lottery and reported price for that lottery. However, a significant effect would serve as a further direct demonstration of the link between visual attention and overpricing.

Table 2 reports random effects panel regressions for overpricing, using the number of visual fixations on outcomes and probabilities of the corresponding lottery as regressors. The regression makes use of the individual, trial-level data for all subjects in the Price treatment, i.e., 60 different lotteries of each type ($-bets and P-bets) × 30 subjects. For simplicity, we analyzed the two types of lotteries separately, with Models 1–3 focusing on $-bets and Models 4–6 focusing on P-bets.

Model 1 regresses overpricing for $-bets on the number of fixations on outcomes. The coefficient is positive and highly significant (p=.0049), confirming the attentional effect, that is, increased attention on the high outcomes of $-bets is associated with larger overpricing. The model suggests that each fixation is associated with an increase of 3.4 Eurocents in the evaluation of $-bets, relative to the true certainty equivalent. The average number of fixations on the outcomes of $-bets in the Price treatment was 8.96, thus the results suggest that, roughly speaking, the direct effect of fixations on $-bets accounts for around 8.96 × 3.4 = 30.46 Eurocents per lottery. This is a very modest effect compared to the actual magnitude of overpricing for $-bets, suggesting that (unsurprisingly) overpricing cannot be mechanically reduced to an additive effect on the price each time that the lottery is attended to. However, the very existence of the effect serves as an additional, basic proof of concept substantiating the link between visual attention and overpricing.

Model 2 adds the number of fixations on probabilities, which is also significant. Importantly, visual attention on outcomes remains positive and significant (p=.0136). That is, any kind of visual attention on $-bets is related to overpricing. Note that recent cognitive models such as the attentional drift-diffusion model mentioned in the introduction (Krajbich et al., 2010) essentially postulate that increased attention boosts valuations, which would naturally provide a link between increased attention on (any attribute of) $-bets and their overpricing.

Model 3 shows that the effects remain positive and significant when adding controls such as age, gender, and the numerical literacy test score. Taken together, Models 1–3 demonstrate the link between visual attention and overpricing for $-bets. Models 4–6 reproduce the same analysis for P-bets. In contrast to $-bets, fixations on outcomes or probabilities have no significant effect on overpricing, independently of the addition of further controls.

In summary, our design allowed us to estimate individual certainty equivalents with an out-of-sample estimation procedure and directly show that $-bets are more overpriced than P-bets. This also enables us to confirm the link between visual attention and overpricing, and reveals that increased attention (more fixations) on $-bets leads to higher overpricing for those lotteries.

5  Conclusion

The classic preference reversal phenomenon is historically one of the most important behavioral anomalies in the study of decision making under risk. It casts doubt on fundamental assumptions that underlie the analysis of human decisions. It has accordingly received considerable attention across the disciplines. One of the most important components of explanations of the phenomenon is that, if a monetary evaluation is asked for, the focus on a monetary scale produces an overpricing when the lottery involves a large monetary amount, resulting in an incorrect evaluation of long-shot options compared to moderate ones.

This argument entails an attentional component which can now be tested directly by means of visual attention data. We conducted an experiment with two treatments, one containing a standard “pricing” evaluation which should shift attention toward outcomes, and another relying on an ordinal “ranking” evaluation which should not have such an attentional effect. The treatments correspond to standard experiments in the literature on the preference reversal phenomenon and have been shown to elicit this phenomenon and its reversal, respectively. By testing across treatments, we confirm that the monetary evaluation results in an attentional shift toward outcomes compared to probabilities, and toward long-shots compared to moderate lotteries. This provides direct evidence on the attentional foundations of preference reversals.

Although we kept the stimuli as comparable as possible, it should be remarked that implementing the standard experiments from the literature results in a layout difference (number of evaluated lotteries) across the treatments. Potentially, this could lead to confounds for the comparison of fixations across evaluation phases, and our results for this comparison should be interpreted carefully. However, we found the same effects when analyzing transitions (saccades) which start and finish on the same area of interest (outcome or probability). This alternative analysis naturally excludes additional transitions across lotteries in the Rank treatment. Overall, our results comparing treatments concur with the within-treatment analysis showing increased attention on outcomes compared to probabilities for the pricing treatment (which confirm previous results of Kim et al., 2012), and showing greater attention placed on probabilities than on outcomes for both phases of the Rank treatment. When restricted to differences in evaluations, our results are also in alignment with those of Rubaltelli et al. (2012), who showed that subjects fixated more on outcomes than on probabilities in a pricing task, but the difference vanished for an abstract attractiveness rating.

Additionally, by enriching the experiment with an independent block of lottery choices, we are able to estimate utilities and certainty equivalents out of sample, and hence quantify overpricing for each lottery and each subject in the treatment using pricing evaluations. This enables a panel-regression analysis confirming that increased visual fixations on the long-shot lotteries result in increased overpricing of those lotteries, while such an effect is absent for the valuations of moderate lotteries.

Together with previous contributions such as Kim et al. (2012) and Rubaltelli et al. (2012), our evidence suggests that attentional shifts due to evaluations employing a monetary scale (pricing) are at the root of the classic preference reversal phenomenon. More generally, our results demonstrate that the analysis of behavioral anomalies in decisions under risk can greatly benefit from explicitly taking the role of attention into account. We suggest that future research in decision making should consider attentional aspects (as well as possible bottom-up visual factors) even when relying on well-established behavioral tasks.

References

[Alós-Ferrer et al., 2020]
Alós-Ferrer, C., Buckenmaier, J., & Garagnani, M. (2020). Stochastic Choice and Preference Reversals. Working Paper, University of Zurich.
[Alós-Ferrer et al., 2016]
Alós-Ferrer, C., Granić, D.-G., Kern, J., & Wagner, A. K. (2016). Preference Reversals: Time and Again. Journal of Risk and Uncertainty, 52(1), 65–97.
[Alós-Ferrer et al., 2019b]
Alós-Ferrer, C., Jaudas, A., & Ritschel, A. (2019b). Effortful Bayesian Updating: A Pupil-dilation Study. Working Paper, University of Zurich.
[Andersen et al., 2006]
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2006). Elicitation Using Multiple Price List Formats. Experimental Economics, 9(4), 383–405.
[Atkinson, 1996]
Atkinson, A. C. (1996). The Usefulness of Optimum Experimental Designs. Journal of the Royal Statistical Society, 51(1), 59–76.
[Bateman et al., 2007]
Bateman, I., Day, B., Loomes, G., & Sugden, R. (2007). Can Ranking Techniques Elicit Robust Values? Journal of Risk and Uncertainty, 34(1), 49–66.
[Bateman et al., 2002]
Bateman, I. J., et al. (2002). Economic Valuation with Stated Preference Techniques: A Manual. Cheltenham, United Kingdom: Edward Elgar.
[Beauchamp et al., 2019]
Beauchamp, J. P., Benjamin, D. J., Laibson, D. I., & Chabris, C. F. (2019). Measuring and Controlling for the Compromise Effect when Estimating Risk Preference Parameters. Experimental Economics, 1–31.
[Bellemare et al., 2008]
Bellemare, C., Kröger, S., & van Soest, A. (2008). Measuring Inequity Aversion in a Heterogeneous Population Using Experimental Decisions and Subjective Probabilities. Econometrica, 76(4), 815–839.
[ButlerLoomes, 2007]
Butler, D. J. & Loomes, G. (2007). Imprecision as an Account of the Preference Reversal Phenomenon. American Economic Review, 97(1), 277–297.
[Casey, 1991]
Casey, J. T. (1991). Reversal of the Preference Reversal Phenomenon. Organizational Behavior and Human Decision Processes, 48(2), 224–251.
[Casey, 1994]
Casey, J. T. (1994). Buyers’ Pricing Behavior for Risky Alternatives: Encoding Processes and Preference Reversals. Management Science, 40(6), 730–749.
[Conte et al., 2011]
Conte, A., Hey, J. D., & Moffatt, P. G. (2011). Mixture Models of Choice Under Risk. Journal of Econometrics, 162(1), 79–88.
[Cubitt et al., 2004]
Cubitt, R. P., Munro, A., & Starmer, C. (2004). Testing Explanations of Preference Reversal. Economic Journal, 114(497), 709–726.
[De Los SantosWildenbeest, 2012]
De los Santos, B., Hortaçsu, A., & Wildenbeest, M. R. (2012). Testing Models of Consumer Search Using Data on Web Browsing and Purchasing Behavior. American Economic Review, 102(6), 2955–2980.
[Devetag et al., 2016]
Devetag, G., Di Guida, S., & Polonio, L. (2016). An Eye-Tracking Study of Feature-Based Choice in One-Shot Games. Experimental Economics, 19(1), 177–201.
[Fischer et al., 1999]
Fischer, G. W., Carmon, Z., Ariely, D., & Zauberman, G. (1999). Goal-Based Construction of Preferences: Task Goals and the Prominence Effect. Management Science, 45(8), 1057–1075.
[Ford et al., 1992]
Ford, I., Torsney, B., & Wu, C. J. (1992). The Use of a Canonical Form in the Construction of Locally Optimal Designs for Non-Linear Problems. Journal of the Royal Statistical Society, 54(2), 569–583.
[Glöckner et al., 2012]
Glöckner, A., Fiedler, S., Hochman, G., Ayal, S., & Hilbig, B. E. (2012). Processing Differences Between Descriptions and Experience: A Comparative Analysis Using Eye-tracking and Physiological Measures. Frontiers in Psychology, 3 (173), 1–15.
[GlöcknerHerbold, 2011]
Glöckner, A. & Herbold, A.-K. (2011). An Eye-tracking Study on Information Processing in Risky Decisions: Evidence for Compensatory Strategies Based on Automatic Processes. Journal of Behavioral Decision Making, 24(1), 71–98.
[GoldsteinEinhorn, 1987]
Goldstein, W. M. & Einhorn, H. J. (1987). Expression Theory and the Preference Reversal Phenomena. Psychological Review, 94(2), 236–254.
[Greiner, 2015]
Greiner, B. (2015). Subject Pool Recruitment Procedures: Organizing Experiments with ORSEE. Journal of the Economic Science Association, 1, 114–125.
[GretherPlott, 1979]
Grether, D. M. & Plott, C. R. (1979). Theory of Choice and the Preference Reversal Phenomenon. American Economic Review, 69(4), 623–638.
[Halton, 1960]
Halton, J. H. (1960). On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik, 2(1), 84–90.
[HarlessCamerer, 1994]
Harless, D. W. & Camerer, C. F. (1994). The Predictive Utility of Generalized Expected Utility Theories. Econometrica, 62(6), 1251–1289.
[HarrisonRutstrom, 2008]
Harrison, G. W. & Rutström, E. E. (2008). Experimental Evidence on the Existence of Hypothetical Bias in Value Elicitation Methods. In C. R. Plott & V. L. Smith (Eds.), Handbook of Experimental Economics Results, volume 1, Part 5, chapter 81, (pp. 752–767). Amsterdam: North-Holland.
[HoltLaury, 2002]
Holt, C. A. & Laury, S. K. (2002). Risk Aversion and Incentive Effects. American Economic Review, 92(5), 1644–1655.
[Huber et al., 1982]
Huber, J., Payne, J. W., & Puto, C. (1982). Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis. Journal of Consumer Research, 9(1), 90–98.
[Kim et al., 2012]
Kim, B. E., Seligman, D., & Kable, J. W. (2012). Preference Reversals in Decision Making under Risk are Accompanied by Changes in Attention to Different Attributes. Frontiers in Neuroscience, 6(109), 1–10.
[Knoepfle et al., 2009]
Knoepfle, D. T., Wang, J. T.-Y., & Camerer, C. F. (2009). Studying Learning in Games Using Eye-Tracking. Journal of the European Economic Association, 7(2–3), 388–398.
[Krajbich et al., 2010]
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual Fixations and the Computation and Comparison of Value in Simple Choice. Nature Neuroscience, 13(10), 1292–1298.
[Krajbich et al., 2012]
Krajbich, I., Lu, D., Camerer, C., & Rangel, A. (2012). The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions. Frontiers in Psychology, 3(Article 193), 1–18.
[KrajbichRangel, 2011]
Krajbich, I. & Rangel, A. (2011). Multialternative Drift-Diffusion Model Predicts the Relationship Between Visual Fixations and Choice in Value-Based Decisions. Proceedings of the National Academy of Sciences, 108(33), 13852–13857.
[Le MeurBaccino, 2013]
Le Meur, O. & Baccino, T. (2013). Methods for Comparing Scanpaths and Saliency Maps: Strengths and Weaknesses. Behavior Research Methods, 45, 251–266.
[LichtensteinSlovic, 1971]
Lichtenstein, S. & Slovic, P. (1971). Reversals of Preference Between Bids and Choices in Gambling Decisions. Journal of Experimental Psychology, 89(1), 46–55.
[Lipkus et al., 2001]
Lipkus, I. M., Samsa, G., & Rimer, B. K. (2001). General Performance on a Numeracy Scale Among Highly Educated Samples. Medical Decision Making, 21(1), 37–44.
[Luce, 1959]
Luce, R. D. (1959). Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.
[Ludwig et al., 2020]
Ludwig, J., Jaudas, A., & Achtziger, A. (2020). The Role of Motivation and Volition in Economic Decisions: Evidence from Eye Movements and Pupillometry. Journal of Behavioral Decision Making, 33(2), 180–195.
[McFadden, 2001]
McFadden, D. L. (2001). Economic Choices. American Economic Review, 91(3), 351–378.
[Moffatt, 2005]
Moffatt, P. G. (2005). Stochastic Choice and the Allocation of Cognitive Effort. Experimental Economics, 8(4), 369–388.
[Moffatt, 2015]
Moffatt, P. G. (2015). Experimetrics: Econometrics for Experimental Economics. London: Palgrave Macmillan.
[NoguchiStewart, 2014]
Noguchi, T. & Stewart, N. (2014). In the Attraction, Compromise, and Similarity Effects, Alternatives are Repeatedly Compared in Pairs on Single Dimensions. Cognition, 132(1), 44–56.
[OrquinHolmqvist, 2018]
Orquin, J. L. & Holmqvist, K. (2018). Threats to the Validity of Eye-movement Research in Psychology. Behavior Research Methods, 50, 1645–1656.
[Peirce, 2007]
Peirce, J. W. (2007). PsychoPy – Psychophysics Software in Python. Journal of Neuroscience Methods, 162(1), 8–13.
[Pettibone, 2012]
Pettibone, J. C. (2012). Testing the Effect of Time Pressure on Asymmetric Dominance and Compromise Decoys in Choice. Judgment and Decision Making, 7(4), 513–523.
[PolonioCoricelli, 2019]
Polonio, L. & Coricelli, G. (2019). Testing the Level of Consistency Between Choices and Beliefs in Games Using Eye-Tracking. Games and Economic Behavior, 113, 566–586.
[Polonio et al., 2015]
Polonio, L., Di Guida, S., & Coricelli, G. (2015). Strategic Sophistication and Attention in Games: An Eye-Tracking Study. Games and Economic Behavior, 94, 80–96.
[Ratcliff, 1978]
Ratcliff, R. (1978). A Theory of Memory Retrieval. Psychological Review, 85, 59–108.
[RatcliffRouder, 1998]
Ratcliff, R. & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. Psychological Science, 9(5), 347–356.
[Reutskaja et al., 2011]
Reutskaja, E., Nagel, R., Camerer, C. F., & Rangel, A. (2011). Search Dynamics in Consumer Choice under Time Pressure: An Eye-Tracking Study. American Economic Review, 101(2), 900–926.
[RobertsLattin, 1991]
Roberts, J. H. & Lattin, J. M. (1991). Development and Testing of a Model of Consideration Set Composition. Journal of Marketing Research, 28(4), 429–440.
[Rubaltelli et al., 2012]
Rubaltelli, E., Dickert, S., & Slovic, P. (2012). Response Mode, Compatibility, and Dual-processes in the Evaluation of Simple Gambles: An eye-tracking investigation. Judgment and Decision Making, 7(4), 427–440.
[SalvucciGoldberg, 2000]
Salvucci, D. D. & Goldberg, J. H. (2000). Identifying Fixations and Saccades in Eye-tracking Protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 71–78). New York, NY, USA: Association for Computing Machinery.
[SchmidtHey, 2004]
Schmidt, U. & Hey, J. D. (2004). Are Preference Reversals Errors? An Experimental Investigation. Journal of Risk and Uncertainty, 29(3), 207–218.
[Seidl, 2002]
Seidl, C. (2002). Preference Reversal. Journal of Economic Surveys, 16(5), 621–655.
[ShadlenKiani, 2013]
Shadlen, M. N. & Kiani, R. (2013). Decision Making as a Window on Cognition. Neuron, 80, 791–806.
[ShadlenShohamy, 2016]
Shadlen, M. N. & Shohamy, D. (2016). Decision Making and Sequential Sampling from Memory. Neuron, 90, 927–939.
[Silvey, 1980]
Silvey, S. D. (1980). Optimal Design: An Introduction to the Theory for Parameter Estimation, volume 1. New York: Chapman and Hall.
[Simonson, 1989]
Simonson, I. (1989). Choice Based on Reasons: The Case of Attraction and Compromise Effects. Journal of Consumer Research, 16(2), 158–174.
[Thurstone, 1927]
Thurstone, L. L. (1927). A Law of Comparative Judgment. Psychological Review, 34, 273–286.
[Train, 2003]
Train, K. E. (2003). Discrete Choice Methods with Simulation. New York: Cambridge University Press.
[Tversky, 1972]
Tversky, A. (1972). Elimination by Aspects: A Theory of Choice. Psychological Review, 79(4), 281–299.
[Tversky et al., 1988]
Tversky, A., Sattath, S., & Slovic, P. (1988). Contingent Weighting in Judgment and Choice. Psychological Review, 95(3), 371–384.
[Tversky et al., 1990]
Tversky, A., Slovic, P., & Kahneman, D. (1990). The Causes of Preference Reversal. American Economic Review, 80(1), 204–217.
[TverskyThaler, 1990]
Tversky, A. & Thaler, R. H. (1990). Anomalies: Preference Reversals. Journal of Economic Perspectives, 4(2), 201–211.
[UsherMcClelland, 2001]
Usher, M. & McClelland, J. L. (2001). The Time Course of Perceptual Choice: The Leaky, Competing Accumulator Model. Psychological Review, 108(3), 550–592.
[Vadillo et al., 2015]
Vadillo, M. A., Street, C. N. H., Beesley, T., & Shanks, D. R. (2015). A Simple Algorithm for the Offline Recalibration of Eye-tracking Data Through Best-fitting Linear Transformation. Behavior Research Methods, 47(4), 1365–1376.
[Von Gaudecker et al., 2011]
Von Gaudecker, H.-M., Van Soest, A., & Wengström, E. (2011). Heterogeneity in Risky Choice Behavior in a Broad Population. American Economic Review, 101(2), 664–694.
[Wang et al., 2010]
Wang, J. T.-Y., Spezio, M., & Camerer, C. F. (2010). Pinocchio’s Pupil: Using Eyetracking and Pupil Dilation to Understand Truth Telling and Deception in Sender-Receiver Games. American Economic Review, 100(3), 984–1007.

Appendix A  Description of RUM Estimation

We now describe the details of the estimation procedure used in the main text, which follows the approach described in Moffatt (2015), Chapter 13. We index the N=59 subjects by i=1,…,N, and the T=32 trials used for utility estimation by t=1,…,T. In trial t, subjects faced a binary choice between A_t = (p_t, x_t), which pays x_t with probability p_t and zero otherwise, and B_t = (q_t, y_t), which pays y_t with probability q_t and zero otherwise. We assume the constant relative risk aversion (CRRA) utility function

  u(x | r) = x^r     (1)

with r > 0. Under the assumption of Expected Utility maximization, subject i with utility function u(x | r_i) chooses A_t over B_t if the difference in expected utilities is positive, that is,

  Δ_t(r_i) := p_t u(x_t | r_i) − q_t u(y_t | r_i) = p_t x_t^{r_i} − q_t y_t^{r_i} > 0.     (2)

Following a standard Random Utility Model (RUM), we postulate normally distributed noise. That is, each subject is characterized by a fixed risk parameter r_i, but utility is perturbed by an error term ε_it ∼ N(0, σ²) with σ² > 0. Thus, A_t is chosen if

  Δ_t(r_i) + ε_it > 0.     (3)

Define the binary choice indicator for trial t as

  γ_it =  +1  if A_t was chosen by subject i,
          −1  if B_t was chosen by subject i.

Then the probability of a choice, conditional on the risk parameter r_i, is given by

  P(γ_it | r_i) = P(γ_it (Δ_t(r_i) + ε_it) > 0) = Φ(γ_it Δ_t(r_i) / σ),     (4)

where Φ is the standard normal cumulative distribution function.
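A minimal Python sketch of equations (1)–(4), assuming SciPy is available (the variable names are ours):

    from scipy.stats import norm

    def eu_difference(p, x, q, y, r):
        """Delta_t(r) = p * x**r - q * y**r, the expected-utility difference under CRRA."""
        return p * x**r - q * y**r

    def choice_probability(gamma, p, x, q, y, r, sigma):
        """P(gamma | r) = Phi(gamma * Delta_t(r) / sigma); gamma = +1 if A_t chosen, -1 if B_t chosen."""
        return norm.cdf(gamma * eu_difference(p, x, q, y, r) / sigma)

    # Example: lottery pair 1 from Table B1 for a hypothetical subject with r = 0.6
    print(choice_probability(gamma=+1, p=0.05, x=12, q=0.80, y=3, r=0.6, sigma=1.0))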

To account for individual heterogeneity, we assume that the risk parameter is distributed over the population and estimate the parameters of this distribution (e.g., Harless & Camerer, 1994; Moffatt, 2005; Moffatt, 2015; Harrison & Rutström, 2008; Bellemare et al., 2008; Von Gaudecker et al., 2011; Conte et al., 2011). This approach greatly reduces the degrees of freedom compared to individual-level estimates, avoiding possible overfitting problems (see Conte et al., 2011, for a more detailed discussion). In particular, we assume that individual risk attitudes in our subject pool are distributed log-normally according to

  log r ∼ N(µ, η²).

Hence, the log-likelihood of a sample given by the matrix Γ = (γ_it), consisting of T trials and N subjects, is

  log L = ∑_{i=1}^{N} ln ∫_{−∞}^{+∞} [ ∏_{t=1}^{T} Φ(γ_it Δ_t(r) / σ) ] f(r | µ, η) dr,     (5)

where f(r|µ,η) is the density function of the risk parameter r.

In order to evaluate the integral in (5) we use the method of maximum simulated likelihood (MSL) (see Train, 2003, for details), which approximates the integral above by an average using Halton draws (Halton, 1960; Moffatt, 2015).
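A compact Python sketch of the simulated log-likelihood in (5), using Halton draws via SciPy's quasi-Monte Carlo module (an illustration of the method described above, with our own variable names, not the estimation code used in the paper):

    import numpy as np
    from scipy.stats import norm, qmc

    def simulated_log_likelihood(params, choices, lotteries, n_draws=500):
        """
        params    : (mu, eta, sigma), parameters of the log-normal risk distribution and noise scale
        choices   : array of shape (N, T) with entries +1 (A_t chosen) or -1 (B_t chosen)
        lotteries : array of shape (T, 4) with columns (p_t, x_t, q_t, y_t)
        """
        mu, eta, sigma = params
        p, x, q, y = lotteries.T

        # Halton draws on (0, 1), transformed to draws from the log-normal risk distribution
        u = qmc.Halton(d=1, scramble=False).random(n_draws).ravel()
        u = np.clip(u, 1e-6, 1 - 1e-6)               # guard against ppf(0) = -inf
        r = np.exp(mu + eta * norm.ppf(u))           # shape (n_draws,)

        # Delta_t(r) for every draw and trial: shape (n_draws, T)
        delta = p * x[None, :] ** r[:, None] - q * y[None, :] ** r[:, None]

        loglik = 0.0
        for gamma_i in choices:                      # loop over subjects
            probs = norm.cdf(gamma_i[None, :] * delta / sigma)   # (n_draws, T)
            loglik += np.log(np.prod(probs, axis=1).mean())      # average likelihood over draws
        return loglik

Maximizing this function over (µ, η, σ), for instance by passing its negative to scipy.optimize.minimize, yields the parameter estimates described below.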

Applying maximum likelihood to the resulting approximation yields the estimates (µ̂, η̂, σ̂). Given those estimates, we compute the posterior expectation r̂_i of each subject’s risk attitude conditional on their T choices, and obtain

  û_i(x) = x^{r̂_i}

(with r̂_i > 0) as the estimated utility function of subject i.
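The posterior expectation of the individual risk parameter can be approximated with the same simulation draws, as in the following sketch (again under the assumptions and notation of the previous snippet):

    import numpy as np
    from scipy.stats import norm

    def posterior_risk_attitude(gamma_i, delta, r, sigma):
        """
        Posterior mean of r for one subject:
            E[r | choices] ~ sum_d r_d * L_i(r_d) / sum_d L_i(r_d),
        where L_i(r_d) = prod_t Phi(gamma_it * Delta_t(r_d) / sigma) and the r_d are
        the same draws from the log-normal risk distribution used for the likelihood.
        """
        probs = norm.cdf(gamma_i[None, :] * delta / sigma)   # (n_draws, T)
        lik = np.prod(probs, axis=1)                          # (n_draws,)
        return float(np.sum(r * lik) / np.sum(lik))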

Appendix B  List of Lotteries

Table B1 contains the 32 lottery pairs used for the utility estimation in the first part. Table B2 contains the 4 lottery pairs involving a dominated lottery, which were used to check for violations of dominance in the first part. Table B3 contains all 60 lottery pairs ($-bets and P-bets) used for the preference reversal experiment in the second and third parts.


Table B1. Lottery pairs used for the utility estimation, first part.
Pair   Lottery 1             Lottery 2
       Prob  Outc  EV        Prob  Outc  EV
 1     .05   12    0.6       .8     3    2.4
 2     .2    22    4.4       .8     5    4
 3     .25   17    4.25      .75    6    4.5
 4     .35   20    7         .6     8    4.8
 5     .35   17    5.95      .7     4    2.8
 6     .4    12    4.8       .7     6    4.2
 7     .4    14    5.6       .65    6    3.9
 8     .4    14    5.6       .8     3    2.4
 9     .5    11    5.5       .7     7    4.9
10     .5    15    7.5       .65    7    4.55
11     .5    20   10         .7     5    3.5
12     .55    5    2.75      .35   18    6.3
13     .55    4    2.2       .2    15    3
14     .55    4    2.2       .4    15    6
15     .55    4    2.2       .45   21    9.45
16     .6     6    3.6       .35   11    3.85
17     .6     5    3         .3    22    6.6
18     .6     8    4.8       .5    13    6.5
19     .6    14    8.4       .7     4    2.8
20     .6     4    2.4       .55    6    3.3
21     .6     3    1.8       .5    13    6.5
22     .65    3    1.95      .15   18    2.7
23     .65   17   11.05      .75    7    5.25
24     .7     4    2.8       .1    16    1.6
25     .7     7    4.9       .6    20   12
26     .7    11    7.7       .8     6    4.8
27     .7    18   12.6       .85    5    4.25
28     .75    6    4.5       .3    15    4.5
29     .75    6    4.5       .4    15    6
30     .75    4    3         .35   12    4.2
31     .75   15   11.25      .8     5    4
32     .8     3    2.4       .4    17    6.8
Prob = Probability, Outc = Outcome, EV = Expected Value.


Table B2. Lottery pairs with a dominated lottery, first part.
Pair   Dominated lottery     Dominating lottery
       Prob  Outc  EV        Prob  Outc  EV
 1     .4     9    3.6       .4    11    4.4
 2     .36   13    4.68      .42   13    5.46
 3     .65    2    1.3       .7     7    4.9
 4     .52    8    4.16      .58   10    5.8
Prob = Probability, Outc = Outcome, EV = Expected Value.


Table B3. (P,$) lottery pairs used in the evaluation (second part) and choice (third part) phases.
Pair   P-bet                 $-bet
       Prob  Outc  EV        Prob  Outc  EV
 1     .95    3    2.85      .37   10    3.7
 2     .57    5    2.85      .46   10    4.6
 3     .9     6    5.4       .3    11    3.3
 4     .8     6    4.8       .3    11    3.3
 5     .72    7    5.04      .23   11    2.53
 6     .79    2    1.58      .21   11    2.31
 7     .8     2    1.6       .4    11    4.4
 8     .64    8    5.12      .24   12    2.88
 9     .84    6    5.04      .48   12    5.76
10     .75    3    2.25      .17   12    2.04
11     .94    3    2.82      .49   12    5.88
12     .92    4    3.68      .53   12    6.36
13     .82    3    2.46      .34   12    4.08
14     .74    6    4.44      .15   13    1.95
15     .89    5    4.45      .39   13    5.07
16     .87    6    5.22      .36   13    4.68
17     .9     2    1.8       .35   13    4.55
18     .66    2    1.32      .24   13    3.12
19     .6     5    3         .45   13    5.85
20     .9     7    6.3       .51   14    7.14
21     .86    5    4.3       .16   15    2.4
22     .7    10    7         .31   15    4.65
23     .85    5    4.25      .41   15    6.15
24     .63    7    4.41      .41   15    6.15
25     .75    6    4.5       .15   15    2.25
26     .76   11    8.36      .37   16    5.92
27     .63    4    2.52      .33   16    5.28
28     .96    5    4.8       .19   17    3.23
29     .96    8    7.68      .43   17    7.31
30     .84    9    7.56      .25   18    4.5
31     .83    6    4.98      .31   18    5.58
32     .95    5    4.75      .22   18    3.96
33     .86    5    4.3       .33   18    5.94
34     .79    4    3.16      .33   18    5.94
35     .6    11    6.6       .22   19    4.18
36     .56   10    5.6       .43   19    8.17
37     .79    7    5.53      .2    20    4
38     .7     5    3.5       .17   20    3.4
39     .85   10    8.5       .3    20    6
40     .65    4    2.6       .25   20    5
41     .92    8    7.36      .23   21    4.83
42     .88   11    9.68      .35   21    7.35
43     .72    6    4.32      .29   21    6.09
44     .68    3    2.04      .23   21    4.83
45     .73    9    6.57      .21   22    4.62
46     .6     7    4.2       .3    22    6.6
47     .68   11    7.48      .23   23    5.29
48     .88    8    7.04      .4    24    9.6
49     .84    7    5.88      .35   25    8.75
50     .95    8    7.6       .31   27    8.37
51     .82   11    9.02      .24   31    7.44
52     .87    5    4.35      .13   32    4.16
53     .86    4    3.44      .55    6    3.3
54     .8     4    3.2       .45    6    2.7
55     .87    3    2.61      .5     7    3.5
56     .75    5    3.75      .55    7    3.85
57     .82    5    4.1       .47    8    3.76
58     .71    5    3.55      .22    9    1.98
59     .89    5    4.45      .55    9    4.95
60     .82    4    3.28      .36    9    3.24
Prob = Probability, Outc = Outcome, EV = Expected Value.

Appendix C  Translated Instructions

[These are the written instructions given to subjects before the experiment. The original instructions were in German. Text in brackets […] was not displayed to subjects.]

General Instructions

Welcome! In this experiment you will be asked to make a series of decisions that will determine your earnings at the end of the experiment. The total duration of the experiment is about 1 hour. If you have a question, please let us know and we will answer your question. It is important that you read the instructions carefully before you make your decisions.

We now explain the general course of the experiment. The experiment consists of three parts. In each part you have to make multiple decisions. At the end of the experiment you will be asked to answer a questionnaire.

In each part, you can earn money. How much money you earn will depend on your decisions in that part and chance. Your earnings in one part of the experiment are independent of your earnings and decisions in the other parts. Your earnings in each part will be added up and you will be paid the total amount anonymously and in cash at the end of the experiment. In addition to this amount you will receive € 4 for your participation in the experiment.

Below you will find further general information for the experiment. The specific instructions for each part will be shown on screen directly before the beginning of that part.

Instructions: Lotteries

In the three parts of the experiment you will be asked to make decisions about lotteries. Hence, we will now explain in detail what a lottery is.

A lottery consists of two potential outcomes, each of which will occur with a given probability. One of the two outcomes is always € 0 (zero). The other outcome will differ from lottery to lottery. If a lottery is played out, this means that you will receive exactly one of the two possible outcomes (in Euro).

In the experiment lotteries will be represented by tables as in the example below. The bottom cell shows the probability with which the outcome in the top cell occurs. The remaining probability, with which the outcome of € 0 occurs, is not displayed.

  10 €
  75 %

Example: The table depicted above is an example of how we present a lottery. In this example, the lottery pays € 10 with a probability of 75%. Accordingly, the lottery pays € 0 with a probability of 25%. The second outcome is always € 0 and occurs with the remaining probability. Please note that this information is not repeated numerically on screen.

If a lottery is played out, this means that it will pay exactly one of the two outcomes. In the example above, the lottery pays € 10 with a probability of 75% and € 0 with the remaining probability of 25%. Please note that the lottery shown above is only an example. The lotteries in the experiment will have different outcomes and probabilities. If you have a question, please raise your hand. If you have no further questions, you may proceed to the comprehension questions on the next page.

Comprehension questions: Lotteries

Below you see examples of two lotteries, similar to the ones you will face later on in the experiment. Please note that these lotteries are only examples.

  Lottery A        Lottery B
    10 €             8 €
    75 %            55 %

Please answer the following comprehension questions:

  1. What is the probability that Lottery A pays € 10?
  2. What is the probability that Lottery B pays € 0?
  3. Which amount does Lottery A pay with a probability of 25%?
  4. Which amount does Lottery B pay with a probability of 55%?

Once you have answered all comprehension questions, please raise your hand. An experimenter will then check your answers.

Translated onscreen instructions

[These are the instructions for each part, which were presented separately on screen, at the beginning of each part. The original instructions were in German. Text in brackets […] was not displayed to subjects.]


Welcome to this economic experiment. Thank you for supporting our research. Please note the following rules:

  1. If you have questions, please raise your hand.
  2. Please refrain from using any features of the computer that are not part of the experiment.

Instructions for part 1

Your decisions: In this part of the experiment you will be presented with a series of lottery pairs. Your task is to choose one of the two lotteries from each pair.

On the screen you will see a lottery pair (consisting of two lotteries) represented by two tables. One of the lotteries will be shown on the left and the other will be shown on the right. You choose one of the lotteries by pressing the left or right arrow key on your keyboard. These keys are marked with a yellow sticker. To choose the lottery on the left, press the left arrow key “←.” To choose the lottery on the right, press the right arrow key “→.” Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

There are no wrong or correct decisions. When you choose one of the lotteries, this simply shows that you prefer to play this lottery over the other lottery.

After you have made your decision, you will see the next lottery pair. In part 1 you will be presented with a total of 36 lottery pairs. After you have made a decision for each of the pairs, this part ends and we will start with the next part of the experiment.

Your earnings for part 1

After you have made a decision for each of the lottery pairs, the computer will randomly select one of the 36 lottery pairs. The computer then checks which of the two lotteries you have chosen for this randomly selected pair. The lottery you have chosen will be played out. The outcome of the lottery determines your earnings for part 1 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please let us know.

Instructions for part 2 [Price treatment]

Your decisions: In this part of the experiment you will be presented with a series of lotteries. When a lottery is presented to you on screen, you may simply assume that you own that lottery and are asked to sell it.

Your task is to state the lowest price at which you are still willing to sell the presented lottery instead of keeping the lottery and playing it out.

There is no wrong or correct answer when stating the lowest price at which you are still willing to sell the lottery. When you enter your selling price for the lottery, simply ask yourself “Is this really the lowest price at which I am still willing to sell the lottery instead of playing the lottery?”. Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

Please enter the lowest price at which you are still willing to sell the lottery in the form “EURO.CENTS.” Please note that you cannot enter a selling price that is larger than the highest outcome of the lottery.

After you have entered your selling price, the next lottery will be presented. In this part of the experiment you will see a total of 120 lotteries, presented in 20 rounds of 6 lotteries each. All rounds are independent. Once you have entered a selling price for each lottery in a round, the next round will start. Once you are done with all 20 rounds, you can continue with the next part of the experiment.

Your earnings for part 2 [Price treatment]

After you have entered your lowest selling price for each of the lotteries, the computer will randomly draw one of the 20 rounds. From this round, the computer will then randomly select two of the six lotteries. The computer then checks for which of the two lotteries you have entered the higher selling price (in case both prices are the same, the computer will randomly select one of the two lotteries with equal probability). This lottery will be played out and the outcome of that lottery determines your earnings for part 2 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Instructions for part 2 [Rank treatment]

Your decisions: In this part of the experiment you will be presented with a series of lotteries. When a lottery is presented to you on screen, you may simply assume that you own that lottery and may play it.

Your task is to order different lotteries according to your preference, that is, according to how much you would like to play them. In each round you will see six different lotteries on screen. Please order the lotteries as follows:

To select a lottery simply click on the button below the lottery that you want to select. As soon as you assign a rank to a lottery, the corresponding rank (from 1 to 6) will be shown below that lottery.

In case you want to change the rank of the lotteries, please press the “Reset” button. This resets the ranking. After you have ranked the lotteries from rank 1 to rank 6, please press the “Continue” button to confirm your ranking and proceed to the next round.

Please note that there is no wrong or correct ranking. When ranking the lotteries, simply ask yourself which lottery you would like to play out the most, which one you would like the second, and so on. Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

In this part of the experiment you will see a total of 120 lotteries, presented in 20 rounds of 6 lotteries each. All rounds are independent, that is, you will have to submit 20 rankings of 6 lotteries by assigning ranks from 1 to 6. Once you are done with all 20 rounds, you can continue with the next part of the experiment.

Your earnings for part 2 [Rank treatment]

After you have ranked all lotteries, the computer will randomly draw one of the 20 rounds. From this round, the computer will then randomly select two of the six lotteries. The computer will then check which of the two lotteries you have ranked higher (that is, which one you want to play out more). This lottery will be played out and the outcome of that lottery determines your earnings for part 2 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Instructions for part 3

Your decisions: In this part of the experiment you will be presented with a series of lottery pairs. Similarly to part 1, your task is to choose one of the two lotteries from each pair. Please note that the lottery pairs are different from part 1.

On the screen you will see a lottery pair (consisting of two lotteries) represented by two tables. One of the lotteries will be shown on the left and the other will be shown on the right. You can choose one of the lotteries by pressing the left or right arrow key on your keyboard. These keys are marked with a yellow sticker. To choose the lottery on the left, press the left arrow key “←.” To choose the lottery on the right, press the right arrow key “→.” Please note that your decisions will affect your earnings at the end of the experiment (a detailed description of how your earnings are determined will follow below).

There are no wrong or correct decisions. When you choose one of the lotteries, this simply shows that you prefer to play this lottery over the other lottery.

After you have made your decision, you will see the next lottery pair. In part 3 you will be presented with a total of 60 lottery pairs. After you have made a decision for each of the pairs, this part ends and you can start the questionnaire.

Your earnings for part 3

After you have made a decision for each of the lottery pairs, the computer will randomly select one of the 60 lottery pairs. The computer will then check which of the two lotteries you have chosen for this randomly selected pair. The lottery you have chosen will be played out. The outcome of the lottery determines your earnings for part 3 of the experiment.

The lottery will be played out at the end of the experiment, that is, after you have completed all three parts of the experiment. Please note that, although your earnings for this part will be determined at the end of the experiment, they will only depend on your decisions in this part of the experiment and chance.

If you have any further questions, please raise your hand and remain seated.

Appendix D  Screenshots

The following pictures depict screenshots from the different phases. The pictures also include dashed frames, which the subjects did not see; these frames were added only to represent the Areas of Interest used for classifying fixations.
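As a generic illustration of how a fixation is assigned to one of these rectangular areas of interest (the coordinates below are hypothetical and the fixation-detection step is omitted), one can test the fixation's screen coordinates against each AOI bounding box:

    from typing import Dict, Optional, Tuple

    # Hypothetical AOI bounding boxes in screen pixels: (left, top, right, bottom)
    AOIS: Dict[str, Tuple[int, int, int, int]] = {
        "outcome_left":      (300, 200, 420, 260),
        "probability_left":  (300, 300, 420, 360),
        "outcome_right":     (860, 200, 980, 260),
        "probability_right": (860, 300, 980, 360),
    }

    def classify_fixation(x: float, y: float) -> Optional[str]:
        """Return the label of the AOI containing the fixation, or None if it falls outside all AOIs."""
        for label, (left, top, right, bottom) in AOIS.items():
            if left <= x <= right and top <= y <= bottom:
                return label
        return None

    print(classify_fixation(350, 320))   # -> 'probability_left'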


Figure D.1: Example screenshot of the lottery choice phase (part 1 and 3).
Note: The dashed frames around the outcomes and probabilities are visualizations of the areas of interest and were not visible to subjects.


Figure D.2: Example screenshot of the lottery evaluation phase in the Price treatment (part 2).
Note: The dashed frames around the outcome and probability are visualizations of the areas of interest and were not visible to subjects.


Figure D.3: Example screenshot of the lottery evaluation phase in the Rank treatment (part 2).
Note: The dashed frames around the outcomes and probabilities are visualizations of the areas of interest and were not visible to subjects.


*
Zurich Center for Neuroeconomics (ZNE), Department of Economics, University of Zurich. Blümlisalpstrasse 10, 8006 Zurich, Switzerland. Email: carlos.alos-ferrer@econ.uzh.ch. https://orcid.org/0000-0002-1668-9784.
#
Department of Political and Social Sciences, Zeppelin University Friedrichshafen, Germany. https://orcid.org/0000-0002-1067-8830.
$
Zurich Center for Neuroeconomics (ZNE), Department of Economics, University of Zurich. https://orcid.org/0000-0001-5169-678X.
We thank Andreas Gloeckner and two anonymous referees for helpful comments. The authors gratefully acknowledge financial support from the German Research Foundation (DFG) under project Al-1169/4, part of the Research Unit “Psychoeconomics” (FOR 1882).

Copyright: © 2021. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

1
One additional subject had to be excluded from the analysis due to poor eye-tracking data quality. A further measurement could not be completed because the subject took extremely long for her decisions and exceeded the allocated time slot.
2
See Appendix B for a complete list of all lottery pairs used in each phase of the experiment.
3
No subject chose a strictly dominated lottery out of the pairs.
4
For the sake of clarity, we refer to the six lotteries presented simultaneously in the Rank treatment as a block also for the Price treatment, even though in the latter they were presented individually and sequentially.
5
The preference reversal phenomenon occurs independently of whether the choice phase precedes or follows the evaluation phase (e.g., Alós-Ferrer et al., 2016).
6
Following the literature, repeated fixations within the same AOI were still counted as different fixations, i.e. not merged into one fixation.
7
In the Price treatment, in 129 out of 1,800 cases subjects gave both lotteries the same WTA, indicating indifference. Excluding these observations does not change the result quantitatively.
8
The rate of predicted reversals is the proportion of pairs where the $-bet was evaluated higher than the P-bet conditional on the P-bet being chosen. The rate of unpredicted reversals is the proportion of pairs where the P-bet was evaluated higher than the $-bet, conditional on the $-bet being chosen. In the tests below, the number of observations sometimes differs as reversal rates cannot be computed if a subject never chose the corresponding type of lotteries.
9
An alternative measure of attention is the overall duration of fixations. Both fixations and overall duration are often reported in eye-tracking studies and yield similar conclusions in our case.
10
Unsurprisingly, there are no significant differences between outcome/probability ratios in the choice phases across treatments (Price treatment .85, Rank treatment .85; MWW test, N=59, z=−0.227, p=.8201).
11
The size of the AOIs used in the analyses was also always identical for all phases and treatments. However, the boxes around the actual numbers were slightly smaller in the Ranking phase. The distance between the two AOIs within a lottery was always at least 65 pixels, large enough to prevent fixation misallocation.
12
We thank an anonymous reviewer for this observation.
13
Of course, there are no significant differences between $-bet/P-bet ratios in the choice phases across treatments (Price treatment 1.00, Rank treatment 1.04; MWW test, N=59, z=−1.228, p=.2194).
14
Random effects panel probit regressions on the likelihood of (predicted) reversals revealed no significant effects of outcome/probability or $-bet/P-bet fixation ratios.
15
We chose to estimate subjects’ risk attitudes from a sequence of lottery choices instead of relying on alternatives such as the multiple price list (MPL) method (Holt & Laury, 2002) because the literature has pointed out a number of difficulties with the latter, e.g., imposing a correlation structure on the choice sequence (see, e.g., Andersen et al., 2006), or the compromise effect (Beauchamp et al., 2019).
16
An agent with a risk propensity equal to the average in our sample would have a certainty equivalent of about € 2.56 when facing a lottery paying € 10 with 50% probability and zero otherwise.
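For reference, under the CRRA specification of Appendix A this certainty equivalent solves û(CE) = 0.5 · û(10); our restatement of the footnote's arithmetic is

  CE = (0.5 · 10^r)^{1/r} ≈ 2.56   for r ≈ 0.51.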
