Judgment and Decision Making, vol. 5, no. 3, June 2010, pp. 200-206

Predicting soccer matches: A reassessment of the benefit of unconscious thinking

Claudia González-Vallejo* and Nathaniel Phillips
Ohio University

We evaluate Dijksterhuis, Bos, van der Leij, and van Baaren (2009, Psychological Science) on the benefit of unconscious thinking in predicting the outcomes of soccer matches. We conclude that the evidence that unconscious thinking helps experts make better predictions is tenuous from both theoretical and statistical perspectives.


Keywords: unconscious thinking, expertise, predictive judgments.

1  Introduction

The literature on unconscious processing is vast and there is evidence that such processes can influence judgments, memory, and behavior (e.g., Bargh, 1990; Jacoby, 1991; Nisbett & Wilson, 1977; Shiffrin & Schneider, 1977; Zajonc, 1980). An intriguing new theory, Unconscious Thought Theory (UTT; Dijksterhuis & Nordgren, 2006), holds that the unconscious is a highly sophisticated, rational system that can make better decisions in complex situations than conscious thought (Dijksterhuis, 2004; Dijksterhuis, Bos, Nordgren, & van Baaren, 2006; Dijksterhuis & van Olden, 2006). Furthermore, according to a recent publication, experts who think unconsciously can make better use of diagnostic information and arrive at better predictions than non-experts, or than experts who think consciously (Dijksterhuis, Bos, van der Leij, & van Baaren, 2009). In the present article, we evaluate this claim and conclude that the hypothesis of superior performance by unconscious thinkers in a predictive judgment task is not conclusively substantiated, statistically or theoretically. (For a more general and detailed critique of Unconscious Thought Theory, see González-Vallejo, Lassiter, Bellezza, & Lindberg, 2008.)

2  Summary of Dijksterhuis et al.’s (2009) methodology

In two studies (Dijksterhuis et al., 2009), participants predicted the results of upcoming soccer matches (n = 352 and n = 116 in Experiments 1 and 2, respectively). The experimental methodology was very similar in the two studies. First, the researchers assessed participants’ expertise using a 1-to-9 self-rating scale. Next, they presented participants with four upcoming soccer matches from the highest Dutch league (“Eredivisie”) and asked them to predict the result of each one (home-team win, away-team win, or draw). In the Immediate condition, participants were presented with the team names and asked to make a prediction within 20 seconds. In the Conscious and Unconscious conditions, participants were shown the teams for 20 seconds and then told that they would be making predictions later on. Conscious-thought participants were then given an additional 2 minutes to think about the matches, while Unconscious-thought participants were told they would do something else and performed a 2-minute “two-back” task designed to occupy conscious processing. The procedure for Experiment 2 was basically the same as that of Experiment 1, with two differences: participants predicted five soccer matches from the World Cup, and, after completing the other procedures, they were asked to estimate the rank of each country in the World Ranking List (WRL). Dijksterhuis et al. (2009) claimed that participants who were distracted prior to providing their predictions (the unconscious group) and who scored higher on a self-assessed measure of soccer expertise outperformed participants who provided their predictions either immediately or after being asked to think carefully about each prediction. In this critique we perform alternative statistical analyses and derive different conclusions.

3  Statistical issues1

The primary test carried out by Dijksterhuis et al. (2009) was an ANOVA on accuracy, measured as the proportion of correct predictions, with Condition (Immediate, Conscious, and Unconscious) and Expertise (Low versus High) as between-subjects factors. The Expertise factor was constructed from a median split of the self-assessments of expertise. The main result from the two studies is a Condition by Expertise interaction showing that, for unconscious participants, higher expertise was associated with higher accuracy.

Our statistical reanalysis begins at the descriptive level, because it provides a clear view of the distributional characteristics of accuracy as a function of the independent variables in question. In addition, we challenge the authors’ use of an ANOVA based on a median split of self-rated expertise. The perils of median splits have been well documented by several prominent methodological researchers (Maxwell & Delaney, 1993; Vargha, Rudas, Delaney, & Maxwell, 1996; MacCallum, Zhang, Preacher, & Rucker, 2002). Irwin and McClelland (2001) and Fitzsimons (2008) have made a direct call to researchers to stop dichotomizing variables because of the potential for unwarranted conclusions. Thus, we present alternative analyses that do not dichotomize self-rated expertise.
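The information loss that motivates these warnings is easy to demonstrate with simulated data (a sketch, not the authors’ data; the sample size and effect size below are arbitrary): correlate a continuous predictor with an outcome, then repeat the computation after a median split of the predictor.

```python
import random
import statistics

random.seed(1)

# Simulated continuous predictor (e.g., a 1-9 expertise rating) that is
# linearly related to an outcome, plus noise. All values are arbitrary.
n = 300
x = [random.uniform(1, 9) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 2) for xi in x]

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = statistics.fmean([(ai - ma) * (bi - mb) for ai, bi in zip(a, b)])
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

r_continuous = pearson(x, y)

# Median split: replace the continuous rating with a Low/High indicator.
med = statistics.median(x)
x_split = [1.0 if xi > med else 0.0 for xi in x]
r_split = pearson(x_split, y)

# Dichotomizing discards all within-group variation, so the split
# predictor explains less of the outcome variance than the original.
print(r_continuous**2, r_split**2)
```

In expectation the median split shrinks the correlation by a factor of about 2/π, mirroring the power loss documented by MacCallum et al. (2002).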


Table 1: Mean, median, and quartiles of proportion correct as a function of condition (C = Conscious, U = Unconscious) and self-rated expertise in Experiments 1 and 2.

Experiment 1

Self-rated expertise   Condition   Mean   Median   Q1    Q3
1                      C           .40    .50      .25   .50
1                      U           .36    .50      .00   .50
2                      C           .41    .50      .25   .50
2                      U           .44    .50      .25   .50
3                      C           .51    .50      .50   .75
3                      U           .48    .50      .25   .50
4                      C           .53    .50      .25   .75
4                      U           .68    .75      .50   .75
5                      C           .43    .50      .25   .50
5                      U           .50    .25      .25   .75
6                      C           .52    .50      .50   .50
6                      U           .59    .50      .50   .75
7                      C           .50    .50      .25   .50
7                      U           .58    .50      .50   .75
8                      C           .63    .50      .50   .75
8                      U           .59    .75      .25   .75
9                      C           .50    .50      .25   .75
9                      U           .45    .25      .25   .50

Experiment 2

Self-rated expertise   Condition   Mean   Median   Q1    Q3
1                      C           .67    .60      .40   1
1                      U           .60    .40      .40   .80
2                      C           .64    .60      .60   .80
2                      U           .51    .60      .40   .60
3                      C           .54    .60      .40   .60
3                      U           .45    .40      .20   .60
4                      C           .77    .80      .60   1
4                      U           .80    .80*     —     —
5                      C           .60    .60*     —     —
5                      U           .87    .80      .80   1
6                      C           .80    .80      .60   1
6                      U           .75    .80      .60   .80
7                      C           .60    .40      .40   1
7                      U           .85    .80      .80   .80
8                      C           .60    .40      .40   .80
8                      U           **     —        —     —
9                      C           .73    .60      .60   1
9                      U           .40    .40*     —     —

* This cell has only one observation. ** No observations at self-rated expertise level 8.

Table 1 contains the means and quartiles of proportion correct as a function of Self-rated expertise and Condition (Conscious and Unconscious groups) in Experiments 1 and 2. For ease of presentation the Immediate group is omitted, but its distribution is very similar to that of the other two groups. Clearly the middle fifty percent of the distributions for the groups overlap at all levels of expertise and no greater increase in mean accuracy is observed for the unconscious group as a function of expertise. In addition, the number of times that the unconscious participants produce higher means than the conscious group is not greater than what would be predicted by chance alone (binomial test p > .05).2
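The binomial (sign) test reported above can be reproduced directly from the Experiment 1 means in Table 1; the sketch below uses an exact two-sided test of whether the Unconscious means exceed the Conscious means at more expertise levels than chance would predict.

```python
from math import comb

# Experiment 1 means from Table 1, by self-rated expertise level 1-9.
conscious   = [.40, .41, .51, .53, .43, .52, .50, .63, .50]
unconscious = [.36, .44, .48, .68, .50, .59, .58, .59, .45]

# Count expertise levels at which the Unconscious mean is higher.
wins = sum(u > c for u, c in zip(unconscious, conscious))
n = len(conscious)  # no ties in these means, so all 9 levels count

# Exact two-sided binomial (sign) test against p = 0.5: sum the
# probabilities of all outcomes no more likely than the observed one.
def binom_two_sided(k, n):
    p_k = comb(n, k) / 2**n
    return sum(comb(n, i) / 2**n for i in range(n + 1)
               if comb(n, i) / 2**n <= p_k + 1e-12)

p = binom_two_sided(wins, n)
print(wins, p)  # 5 wins out of 9 levels; p = 1.0, nowhere near .05
```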

Using the dichotomization of Dijksterhuis et al. (2009), we replicated the significant ANOVA interaction between thought Condition and Self-rated expertise. The means and 95% confidence intervals for each experiment and condition are shown in Table 2.


Table 2: Means and 95% confidence intervals for the Immediate, Conscious, and Unconscious groups as a function of self-rated expertise in Experiments 1 and 2.

                                                95% Confidence Interval
Self-rated expertise   Condition     Mean (%)   Lower Bound   Upper Bound

Experiment 1
Low                    Immediate     48.8       43.2          54.5
                       Conscious     44.3       38.5          50.1
                       Unconscious   42.3       36.2          48.4
High                   Immediate     46.4       40.3          52.5
                       Conscious     49.6       44.0          55.2
                       Unconscious   58.2       51.9          64.4

Experiment 2
Low                    Immediate     65.6       55.6          75.5
                       Conscious     61.1       51.4          70.7
                       Unconscious   52.0       41.1          62.9
High                   Immediate     66.5       58.9          74.0
                       Conscious     70.0       60.6          79.4
                       Unconscious   78.5       66.7          90.2

As seen in Table 2, conditional on expertise level, the 95 percent confidence intervals around the means overlap across the Immediate, Conscious, and Unconscious conditions in both experiments. We note that the medians of Self-rated expertise are rather low (3 and 4 in Experiments 1 and 2, respectively). The values used by Dijksterhuis et al. to split the groups differ from these values. As noted in Dijksterhuis et al.’s footnotes, they departed from the medians in order to obtain more even groups of participants (for example, 52 and 64 individuals in the low and high self-rated expertise groups in Experiment 2, respectively). We remark, however, that splitting the groups at the median of 4 yields exactly 58 participants in each group in Experiment 2, and the number of participants at each level of Condition is more even than with the split the authors used. In addition, the Self-rated expertise by Condition interaction in Experiment 2 occurs when Self-rated expertise is dichotomized at the value 3, and disappears when the dichotomization occurs at the actual median of 4. Another important aspect of these data is that the effect sizes found are quite small (partial eta squared < .03).

The descriptive statistics in Tables 1 and 2 tell two different stories. Without dichotomization, accuracy does not increase more sharply as a function of expertise for the Unconscious group; but with dichotomization, the mean differences (ignoring the confidence intervals) are greater between low and high Self-rated expertise for the Unconscious group. Thus, in order to test the generality of the interaction found with the median-split ANOVA, we performed several splits of the expertise ratings,3 five in each experiment for a total of ten tests, and found that no split criterion besides the one used by Dijksterhuis et al. (2009) replicated their ANOVA interaction results.

Dichotomization has the problem that some splits result in more uneven sample sizes for the different groups, so it is desirable to test the interaction hypothesis in another manner. As stated earlier, UTT predicts that higher mean accuracy should be evident for experts in the unconscious condition relative to experts in the other groups and to non-experts. This implies two things: 1) that accuracy increases with expertise, and 2) that the increase is more pronounced for the unconscious group. Using the general linear model approach advocated by many researchers (e.g., Fitzsimons, 2008), we can test this interaction in a regression framework. Results demonstrated no significant Condition by Self-rated expertise interactions in the two studies: F(2, 346) = 1.98, p = .14, in Experiment 1; and F(2, 110) = 1.56, p = .215, in Experiment 2. Because Experiment 2 also had a measure of objective expertise (that is, knowledge of the world soccer rankings of the teams, WRL), we performed the same test using WRL as the independent variable. The Condition by WRL (objective-expertise) interaction was not significant either, F(2, 110) = 2.17, p = .12. Thus, we do not find support for the hypothesis that accuracy is differentially affected by thought condition and level of expertise (either objective or self-rated) when using the general linear model approach. We thus conclude that the results observed with the median-split analysis are spurious: the work of Maxwell and Delaney (1993), Vargha et al. (1996), and MacCallum et al. (2002) demonstrated that spurious significant interactions can appear in analyses that dichotomize the independent variables, in part due to non-linearity between the independent and dependent variables. As seen in Table 1, accuracy does not follow a clear monotonic trajectory from low to high expertise, and the trend of the means shows a small peak in the middle of the scale for the unconscious participants.
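The regression-framework interaction test can be sketched as follows. The data below are simulated stand-ins (we cannot reproduce the authors' raw data here, and the generating values are arbitrary); the F statistic compares a full model containing Condition × Expertise product terms against a main-effects-only model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: three conditions, a 1-9 expertise rating, and
# accuracy built with main effects only (i.e., no interaction, as under
# the null hypothesis).
n = 350
cond = rng.integers(0, 3, n)          # 0 = Immediate, 1 = Conscious, 2 = Unconscious
expertise = rng.integers(1, 10, n).astype(float)
accuracy = 0.40 + 0.02 * expertise + rng.normal(0, 0.2, n)

# Dummy-code condition (Immediate as reference) and build design matrices.
d1, d2 = (cond == 1).astype(float), (cond == 2).astype(float)
X_reduced = np.column_stack([np.ones(n), d1, d2, expertise])
X_full = np.column_stack([X_reduced, d1 * expertise, d2 * expertise])

def rss(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

rss_r, rss_f = rss(X_reduced, accuracy), rss(X_full, accuracy)
df_num = X_full.shape[1] - X_reduced.shape[1]   # 2 interaction terms
df_den = n - X_full.shape[1]
F = ((rss_r - rss_f) / df_num) / (rss_f / df_den)

# Compare against the .05 critical value of F(2, ~340), roughly 3.02;
# scipy.stats.f.sf(F, df_num, df_den) would give the exact p-value.
print(F, F > 3.02)
```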

Next, we performed a more direct test of the mean differences between the two key groups, Unconscious and Conscious participants, on a contrast that captured the expected accuracy increases when going from low to high expertise. Again, the Unconscious and Conscious groups were not significantly different on this linear contrast: t(349) = −.20, p = .42 in Experiment 1, and t(113) = −.19, p = .42 in Experiment 2 (one-tailed tests). That is, the changes in accuracy as a function of expertise did not differ between the Unconscious and Conscious participants.

Finally, we explored additive models and checked more generally whether the variability in accuracy is better explained by adding Condition as a variable once Self-rated expertise is controlled for. R²s remained unchanged up to two decimal places when the Condition independent variable was added to the model. In Experiment 1, the full and reduced models both yield R² = .14; in Experiment 2, R² = .01. In each experiment, the linear model containing only Self-rated expertise is significant at the .05 level, but the relation is small (R² ≤ .14). Using WRL (objective expertise) as a predictor (with or without Condition in Experiment 2) yields R² = .09. Objective expertise is significant (at the .05 level) and, not surprisingly, a stronger predictor of accuracy.
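The incremental-R² check can be sketched in the same framework (again with simulated stand-in data, constructed so that Condition contributes nothing): fit accuracy on self-rated expertise alone, then add the Condition dummies and compare the two R² values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in data: accuracy depends (weakly) on expertise only;
# the condition labels are pure noise by construction.
n = 350
expertise = rng.integers(1, 10, n).astype(float)
cond = rng.integers(0, 3, n)
accuracy = 0.40 + 0.02 * expertise + rng.normal(0, 0.2, n)

def r_squared(X, y):
    """R-squared of an OLS fit with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

dummies = np.column_stack([(cond == 1), (cond == 2)]).astype(float)
r2_reduced = r_squared(expertise.reshape(-1, 1), accuracy)
r2_full = r_squared(np.column_stack([expertise, dummies]), accuracy)

# Adding Condition barely moves R-squared when it has no real effect.
print(round(r2_reduced, 2), round(r2_full, 2))
```

Note that R² can never decrease when predictors are added to a nested OLS model, so the relevant question is whether the increase is more than trivial.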

4  Conclusions

The notion that experts can make better predictions when thinking unconsciously is in part traced to the assumption that unconscious thought weights the importance of attributes appropriately, whereas conscious thought disturbs the natural process and produces suboptimal weighting of cues (Dijksterhuis et al., 2009). This is the weighting principle of UTT (Dijksterhuis & Nordgren, 2006). An earlier study (Dijksterhuis, 2004, Experiment 3) attempted to find evidence for this principle by correlating people’s importance judgments of the dimensions that defined the stimuli with the participants’ overall preferences for the stimuli. As stated by the authors, no significant differences were found on this correlation measure between the groups that thought consciously or unconsciously (see page 2 of Dijksterhuis et al., 2009; page 100 of Dijksterhuis & Nordgren, 2006).

From another perspective, Dijksterhuis et al. (2009) refer to the work of Halberstadt and Levine (1999) to emphasize the shortcomings of conscious thinking in a predictive judgment task. In that study, participants predicted basketball games either after thinking about and listing the reasons for their choices (at least three reasons), or without doing so (control group). Participants also provided self-rated expertise judgments. Those who were asked to list reasons had worse accuracy scores (measured with three dependent variables) than those who did not, replicating and extending the work of Wilson and Schooler (1991) on the effects of listing reasons. With regard to self-rated expertise, the results showed only a marginal (thus non-significant) negative correlation between self-rated expertise and one of the three accuracy measures used in the study. Hence, we believe that the Halberstadt and Levine study cannot be linked directly to the hypothesis that unconscious thinking should aid experts (or more precisely, self-rated experts) when making predictions. We also believe that this research does not directly relate to the conscious condition employed by Dijksterhuis et al. and therefore has little to say about the possible lower performance of individuals who are asked to think consciously about their predictions. The differences in procedures could be significant (i.e., listing reasons versus just thinking about a problem). For example, a good technique for reducing the overconfidence bias (i.e., confidence judgments higher than warranted by accuracy) is to list con reasons for a chosen response rather than pro reasons (Koriat, Lichtenstein, & Fischhoff, 1980). What this means is that even within different types of conscious directives, performance can vary.

From yet another angle, the superiority of unconscious experts is linked to Fuzzy-trace theory (Reyna & Brainerd, 1991, 1995a, 1995b). Dijksterhuis et al. (2009) state that experts will benefit more from unconscious thinking than non-experts because experts rely on “gist” rather than “verbatim” memory to form judgments, and that gist memory is unconscious. But Fuzzy-trace theory acknowledges that consciousness is multidimensional, and there is nothing in Fuzzy-trace theory that would prevent gist from being used when one is prompted to think carefully. Some of Fuzzy-trace theory’s key principles are: 1) cognitive flexibility results from encoding both gist and verbatim representations; 2) reasoning operates at the least precise level of gist as expertise increases; and 3) qualitative processing becomes the default mode of reasoning and is not a result of computational complexity. The first principle assumes parallel processing of both gist and verbatim information, and the second and third principles assume a greater reliance on gist as expertise increases, with reasoning being qualitative more than quantitative. Taking these principles together, the only expectation with regard to making predictive judgments is that experts will be more likely than non-experts to use their gist memory. It is unclear how unconscious experts would derive further benefits from distraction.

A final point concerning the weighting principle is that unconscious thought is simultaneously expected to weight information optimally yet be unable to use numerical information (page 2, Dijksterhuis et al., 2009). Payne, Samper, Bettman, and Luce (2008) showed that in a gambling task conscious thinkers were better at weighting than unconscious thinkers (i.e., a contradiction of UTT), but these results were dismissed by Dijksterhuis et al. under the premise that the unconscious does not use numbers. Thus we are left with a conundrum: the unconscious can make better judgments and decisions in complex environments, but it cannot process numerical information. A thought experiment quickly reveals that much of the world’s complexity comes in numerical form (e.g., comparing insurance policies, making retirement decisions, making travel plans with differing costs and schedules), and therefore the non-numerical character of unconscious thinking seems at odds with its supposed ability to excel at complex problems.

From a broad theoretical perspective, we (researchers in judgment and decision making) are surprised that a vast literature on predictive and diagnostic judgments was largely ignored by a paper that attempts to advise experts on how best to make predictions. For example, there is an extensive literature on clinical and probability judgment that has focused on describing the shortcomings of expertise and the robustness of linear models in many domains (Dawes, 1979, 2005; Dawes, Faust, & Meehl, 1989; Meehl, 1954). Studies have also examined the factors that influence beliefs in expertise (the illusion of validity; Einhorn & Hogarth, 1978) and the conditions under which experts differ from one another in the way they weight and combine information (Einhorn, 1974). In a different realm, the calibration literature has demonstrated, among other things, that accuracy is a complex concept and that different measures address different psychological processes (e.g., discrimination versus calibration; see Yates, 1990, for a comprehensive review of calibration and for performance differences between experts and lay people in many domains). In addition, experts vary in their levels of accuracy as a function of task (Yates, 1990); for example, weather forecasters made accurate probabilistic forecasts of rain (Murphy & Winkler, 1977), but physicians diagnosing pneumonia did not perform well (Christensen-Szalanski & Bushyhead, 1981). Furthermore, researchers in the cue probability learning and lens model traditions have studied predictive judgments extensively and proposed mechanisms for how individuals combine and weight cues and how feedback and task properties can affect these processes as well as performance (Hammond, Summers, & Deane, 1973; Hogarth, Gibbs, McKenzie, & Marquis, 1991; Klayman, 1988; Stewart & Lusk, 1994).
The list of references we present is by no means exhaustive, but it sheds light on the richness of studies and methods that researchers have employed to understand the judgments of novices and experts. We believe that a theory like UTT would benefit from making the relevant theoretical connections to this research when attempting to explain and predict how judges make forecasts. In particular, Hammond’s Cognitive Continuum Theory (1996) is a clear candidate for analyzing the conditions under which different modes of thought may lead to different judgment strategies and outcomes across the deliberation-intuition continuum.

In sum, because the mechanisms underlying UTT have yet to be clearly defined, and because several researchers have not been able to replicate the basic finding of superior performance by unconscious thinkers (Acker, 2008; Calvillo & Penaloza, 2009; Newell, Yao Wong, Cheung, & Rakow, 2009; Waroquier, Marchiori, Klein, & Cleeremans, 2009), we conclude that it is premature to recommend that individuals “let their unconscious do the work” for important decisions. We also warn against the recommendation that experts should think unconsciously when making forecasts.

References

Acker, F. (2008). New findings on unconscious versus conscious thought in decision making: additional empirical data and meta-analysis. Judgment and Decision Making, 3, 292–303.

Bargh, J. A. (1990). Auto-motives: Preconscious determinants of social interaction. In E. T. Higgins & R. M. Sorrentino (Eds.) Handbook of motivation and cognition (Vol. 2, pp. 93–130). New York: Guilford Press.

Calvillo, D. P. & Penaloza, A. (2009). Are complex decisions better left to the unconscious? Further failed replications of the deliberation-without-attention effect. Judgment and Decision Making, 4, 509–517.

Christensen-Szalanski, J. J. J. & Bushyhead, J. B. (1981). Physicians’ use of probabilistic information in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance, 7, 928–935.

Dawes, R. M. (1979). The robust beauty of improper linear models. American Psychologist, 34, 571–582.

Dawes, R. M., Faust, D., and Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668–1674.

Dawes, R. M. (2005). The ethical implications of Paul Meehl’s work on comparing clinical versus actuarial prediction methods. Journal of Clinical Psychology, 61, 1245–1255.

Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality and Social Psychology, 87, 586–598.

Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On making the right choice: The deliberation-without-attention effect. Science, 311, 1005–1007.

Dijksterhuis, A., Bos, M. W., van der Leij, A., & van Baaren, R. B. (2009). Predicting soccer matches after unconscious and conscious thought as a function of expertise. Psychological Science, 20, 1381–1387.

Dijksterhuis, A., & Nordgren, L. F. (2006). A theory of unconscious thought. Perspectives on Psychological Science, 1, 95–109.

Dijksterhuis, A., & van Olden, Z. (2006). On the benefits of thinking unconsciously: Unconscious thought can increase post-choice satisfaction. Journal of Experimental Social Psychology, 42, 627–631.

Einhorn, H. J. (1974). Expert judgment: Some necessary conditions and an example. Journal of Applied Psychology, 59, 562–571.

Einhorn, H. J. & Hogarth, R. M. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85, 395–416.

Fitzsimons, G. J. (2008). Death to dichotomizing. Journal of Consumer Research, 35, 9.

González-Vallejo, C., Lassiter, G. D., Bellezza, F. S., & Lindberg, M. J. (2008). “Save angels perhaps”: A critical examination of unconscious thought theory and the deliberation-without-attention effect. Review of General Psychology, 12, 282–296.

Halberstadt, J. B., & Levine, G. (1999). Effects of reasons analysis on the accuracy of predicting basketball games. Journal of Applied Social Psychology, 29, 517–530.

Hammond, K.R. (1996). Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable injustice. New York: Oxford University Press.

Hammond, K. R., Summers, D. A., & Deane, D. H. (1973). Negative effects of outcome-feedback in multiple-cue probability learning. Organizational Behavior and Human Performance, 9, 30–34.

Hogarth, R. M., Gibbs, B. J., McKenzie, C. R., & Marquis, M. A. (1991). Learning from feedback: exactingness and incentives. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 734–752.

Irwin, J. R., & McClelland, G. H. (2001). Misleading heuristics and moderated multiple regression models. Journal of Marketing Research, 38, 100–109.

Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513–531.

Klayman, J. (1988). On the how and why (not) of learning from outcomes. In B. Brehmer & C. R. B. Joyce (Eds.), Human Judgment: The SJT View. Oxford, England: North-Holland.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the Practice of Dichotomization of Quantitative Variables. Psychological Methods, 7, 19–40.

Maxwell, S. E., & Delaney, H. D. (1993). Bivariate median splits and spurious statistical significance. Psychological Bulletin, 113, 181–190.

Meehl, P. E. (1954). Clinical versus Statistical Predictions: A Theoretical Analysis and Revision of the Literature. Minneapolis: University of Minnesota Press.

Murphy, A. H. & Winkler, R. L. (1977). Can weather forecasters formulate reliable probability forecasts of precipitation and temperature? National Weather Digest, 2, 2–9.

Newell, B. R., Yao Wong, K., Cheung, J., & Rakow, T. (2009). Think, blink or sleep on it? The impact of modes of thought on complex decision making. The Quarterly Journal of Experimental Psychology, 62, 707–732.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259.

Payne, J., Samper, A., Bettman, J. R., & Luce, M. F. (2008). Boundary conditions on unconscious thought in complex decision making. Psychological Science, 19, 1118–1123.

Reyna, V. F., & Brainerd, C. J. (1991). Fuzzy-trace theory and children’s acquisition of mathematical and scientific concepts: An interim synthesis. Learning and Individual Differences, 3, 27–58.

Reyna, V. F., & Brainerd, C. J. (1995a). Fuzzy-trace theory: An interim synthesis. Learning and Individual Differences, 7, 1–75.

Reyna, V. F., & Brainerd, C. J. (1995b). Fuzzy-trace theory: Some foundational issues. Learning and Individual Differences, 7, 145–162.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. General theory. Psychological Review, 84, 127–190.

Stewart, T. R. & Lusk, C. M. (1994). Seven components of judgmental forecasting skill: Implications for research and the improvement of forecasts. Journal of Forecasting, 13, 579–599.

Vargha, A., Rudas, T., Delaney, H. D., & Maxwell, S. E. (1996). Dichotomization, partial correlation, and conditional independence. Journal of Educational and Behavioral Statistics, 21, 264–282.

Waroquier, L., Marchiori, D., Klein, O. & Cleeremans, A. (2009). Methodological pitfalls of the unconscious thought paradigm. Judgment and Decision Making, 4, 601–610.

Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology, 60, 181–192.

Yates, J. F. (1990). Judgment and decision making. Englewood Cliffs, NJ: Prentice Hall.

Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151–175.


* We thank Drs. Jonathan Baron, Frank Bellezza, David Budescu, Jeff Vancouver, and Mr. Jason Harman for comments on earlier drafts, and Dr. Scott Maxwell for statistical assistance. Correspondence: Claudia González Vallejo, gonzalez@ohiou.edu.
1. We thank Dr. Dijksterhuis for providing us with the data for reanalysis.
2. We compared the distributions of the Conscious and Unconscious groups via the Kolmogorov-Smirnov test and found no significant differences (Experiment 1, K-S = .65, p = .78; Experiment 2, K-S = .4, p = .99).
3. We did not perform all possible splits of the data because the ordering of expertise matters for its presumed relationship to accuracy. We also did not include splits that would result in groups with fewer than 5 observations. We note that performing more tests on more groupings would only increase the Type I error rate, and the alpha correction would make our conclusions even stronger.
