Judgment and Decision Making, Vol. 17, No. 5, September 2022, pp. 937–961

Failing to ignore the ignorant: Mistaking ignorance for error

André Vaz*  André Mata#

Abstract: Expertise is a reliable cue for accuracy – experts are often correct in their judgments and opinions. However, the opposite is not necessarily the case – ignorant judges are not guaranteed to err. Specifically, for a question with two response options, an ignorant responder has a 50% chance of being correct. In five studies, we show that people fail to appreciate this, and that they overgeneralize a sound heuristic (expertise signals accuracy) to cases where it does not apply (lack of expertise does not imply error). These studies show that people 1) tend to think that the responses of an ignorant person to dichotomous-response questions are more likely to be incorrect than correct, and 2) tend to give the opposite of the ignorant person's response. This research also shows that this bias is at least partially intuitive in nature, as it manifests more clearly in quick gut responses than in slow, careful responses. Still, it is not completely corrected upon careful deliberation. Implications are discussed for rationality and epistemic vigilance.


Keywords: ignorance, error, heuristics, expertise, advice taking, social influence

1 Introduction

Expertise is a powerful heuristic cue for persuasion and accuracy. People are often persuaded by experts into taking their advice and adopting their opinions. This work asks whether people overgeneralize this heuristic, such that they think that, just as expertise is a positive cue for accuracy and truth (i.e., people think the expert is probably right and thus are likely to choose the same response that the expert gave), ignorance might have a negative influence (i.e., people think the ignorant is wrong and thus choose the opposite response).

There is a great deal of work on the positive effect of expertise on persuasion and advice taking (DeBono & Harnish, 1988; Harvey & Fischer, 1997; Hovland & Weiss, 1951; Petty et al., 1981; Reimer et al., 2004). In comparison, and to our knowledge, much less is known about the influence of ignorance (which can be conceived of as negative expertise) on people’s judgments and decisions. The key questions in the present research are: 1) Just as expertise is a positive cue for credibility and accuracy, whereby people think the expert is right, is ignorance a negative cue, such that people think the ignorant is wrong? And 2) do people ignore the ignorant’s opinion, or, because they think that opinion is wrong, do they go against it?

From a normative perspective, people should realize that, in a dichotomous choice or judgment task, complete ignorance still gives the ignorant a 50% chance of success (likewise, in a judgment task with N > 2 options, a random guess has a 100%/N chance of being correct). The ignorant's response should therefore carry no weight (positive or negative) in one's own response. Failing to ignore the ignorant is irrational: it treats the ignorant person as having negative knowledge (i.e., systematically incorrect knowledge), such that choosing the opposite response would be a winning strategy. In the worst case, the ignorant has at least a 50% chance of being correct, and even minimal knowledge would raise that chance above 50%. Moreover, even people with absolutely no knowledge about a specific fact might extrapolate from other facts they know. Thus, a case could be made that the baseline against which to judge the rationality of the perceived likelihood of the ignorant's response being correct should be set higher than 50% (though in the present research we use the conservative 50% benchmark). In short, people should ignore the ignorant. However, we suspect that people might overgeneralize the expertise=correct heuristic and expect that ignorance=error.
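To make the 50% benchmark concrete, the short simulation below (our illustration, not part of the reported studies) estimates the accuracy of a completely ignorant responder who picks uniformly at random among the options of each question:

    import random

    def random_guesser_accuracy(n_options: int, n_trials: int = 100_000) -> float:
        """Accuracy of a guesser who picks uniformly at random among n_options."""
        # Without loss of generality, let option 0 be the correct one on every trial.
        hits = sum(random.randrange(n_options) == 0 for _ in range(n_trials))
        return hits / n_trials

    print(random_guesser_accuracy(2))  # ~0.50 for dichotomous questions
    print(random_guesser_accuracy(4))  # ~0.25 for four response options

With two options the simulated accuracy converges on .50 (and on .25 with four), which is the benchmark used throughout this paper.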

Our hypothesis is in line with research on heuristics and biases, showing that people sometimes overgeneralize the use of reliable cues and sound judgment strategies to domains where they are no longer valid or useful (Arkes & Ayton, 1999; Haws et al., 2017; Hsee et al., 2019; Hsee et al., 2015; Peysakhovich & Rand, 2016; Zhu et al., 2019). This hypothesis also follows from research in different domains of psychology showing that people have difficulty in ignoring information – for instance, research on the perseverance effect (McFarland et al., 2007; Ross et al., 1975), repetition effects (Unkelbach et al., 2007; Weaver et al., 2007), and redundancy (Alves & Mata, 2019). Here too we suggest that people should ignore the input coming from an ignorant person, but that they might have a hard time doing so.

Note that the effect of interest here is a form of influence – a negative influence – as the person's response is affected by the ignorant's response, and the responder ends up giving a different response than the one they might have given had they not learned the ignorant's response. For example, if a person is given a movie recommendation by someone they perceive to be ignorant about movies, this might actually lower the chances that the person will choose to watch the movie. Previous research in social psychology has shown similar cases of negative social influence, where people do the opposite of others (e.g., Ariely & Levav, 2000; Berger & Heath, 2007; Chan et al., 2012). However, in that research, people choose differently from others in order to signal non-conformity. In our research, we suggest that people might do the same but for a different reason: they infer that the ignorant is wrong, generalizing the rational heuristic that associates expertise with accuracy to the less defensible belief that ignorance implies inaccuracy.

Data and histograms for all studies can be accessed at: https://osf.io/fnbua/.

2 Study 1

Study 1 tests whether people fall prey to this ignorance=error heuristic by assessing how likely they are to choose the same responses as other people who are described as experts or as ignorant about some domain. In line with research on the effects of expertise on advice-taking and persuasion, we predict that people will give the same response as experts. But in line with the hypothesis presented here, we predict that people are more likely to reject than to follow the ignorant's response.

2.1 Method

2.1.1 Participants

One hundred participants were targeted for recruitment through the Prolific online platform; 101 ended up participating (65 Female, 27 Male, 9 who chose not to disclose). Participants were aged 18–74 (M = 33.43, SD = 12.88).

2.1.2 Materials

Twenty 2-choice history questions were collected from several internet quizzes (see Appendix I). Questions were selected with the criterion that they be difficult, so that the correct answers would not be known to most (non-expert) participants (this was assessed independently by the authors, and the final set comprised the questions on which there was agreement).

2.1.3 Procedure

Participants were told they would be asked to answer 20 history questions and that, before answering each one, they would be shown the answer of a previous participant (target: Susan or David). One of the targets was described as an expert in history (see Appendix II for detailed instructions), and the other as completely ignorant about history; which of them (Susan or David) was described as ignorant or expert was randomized per participant.

Half the questions were randomly assigned to the expert, and half to the ignorant. The questions were presented sequentially, but always in two blocks: participants saw all 10 questions answered by one of the targets before seeing the 10 questions answered by the other target; whether participants saw the ignorant's or the expert's questions first was counterbalanced.

Additionally, the study was coded so that each of the targets answered correctly only half the time, though participants presumably could not tell whether the answers were correct. Thus, each participant saw five correct and five incorrect answers from the supposedly ignorant person, and five correct and five incorrect answers from the supposed expert.

Finally, participants were debriefed and provided with the correct answers to all questions, along with the answers they had given.

2.2 Results

For each participant, we computed the number of times they gave the same answer as the other person, separately for the expert and the ignorant. Unsurprisingly, a one-sample t-test revealed that participants followed the expert's answer on most trials (M = 8.07 out of 10, SD = 1.75, t(100) = 17.62, p < .001, d = 1.75). More importantly, participants chose the same option as the ignorant person less than 50% of the time (M = 4.41, SD = 1.90, t(100) = –3.09, p = .003, d = 0.31).
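As an aside for readers who wish to reproduce this kind of test, a minimal sketch follows (in Python; the agreement counts below are made up for illustration, whereas the actual data are available at the OSF link above):

    import numpy as np
    from scipy import stats

    # Hypothetical per-participant counts (out of 10) of agreeing with the
    # ignorant target; the real data are at https://osf.io/fnbua/.
    agree_ignorant = np.array([4, 5, 3, 6, 4, 2, 5, 4, 3, 5])

    chance = 5  # expected agreement if the ignorant's answer carries no weight
    t, p = stats.ttest_1samp(agree_ignorant, popmean=chance)
    d = (agree_ignorant.mean() - chance) / agree_ignorant.std(ddof=1)  # Cohen's d
    print(f"t({agree_ignorant.size - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")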

3 Study 2

Study 2 seeks to illuminate the mechanism underlying this bias, whereby people mistake ignorance for error. First, it tests whether people think that an ignorant person has a chance of success below random levels, which would explain why participants reject their response. Moreover, Study 2 tests the intuitive vs. deliberative nature of this heuristic. Participants made two judgments about the likely accuracy of the target person’s response: an initial fast judgment, and a slower and more careful judgment (Bago & De Neys, 2019; Thompson et al., 2011; Vega et al., 2021). If this heuristic results from faulty intuition, then it should manifest more clearly in time-constrained responses, and then be corrected when participants are given the chance to think more carefully about the problems. If, however, it is an explicit belief, then the manipulation of cognitive resources should not affect its expression.

3.1 Method

3.1.1 Participants

Ninety-five undergraduate students (82 Female, 1 Nonbinary, and 1 who did not disclose) were recruited for participation in return for course credit. Participants were aged 19-65 (M = 21.89, SD = 7.36).

3.1.2 Materials

The same twenty 2-choice history questions from Study 1 were translated into Portuguese.

3.1.3 Procedure

The procedure was similar to that of Study 1, with two key changes. First, for each question, participants were provided with both options, as well as the alleged previous participant's answer. However, rather than choosing their own answer, participants judged how right or wrong the other person's answer was, on a 9-point scale (1 – Definitely wrong; 9 – Definitely right).

Second, following the 2-response paradigm that is commonly used in the study of intuition versus deliberation (e.g., Bago & De Neys, 2019; Thompson et al., 2011; Vega et al., 2021), participants answered each question twice: first as quickly as they could, to gauge their intuitive answer, and then a second time with no time constraint, giving them the chance to revise their initial answer (see Appendix III for the full instructions).

As in Study 1, half the questions were presented with the answer of the ignorant person (5 with the correct and 5 with the incorrect answer) and the other half with the answer of the expert (again, 5 with the correct and 5 with the incorrect answer). But unlike in Study 1, all the questions were presented in random order, rather than in expert/ignorant blocks.

3.2 Results

One-sample t-tests show that judgments differed significantly from the midpoint of the scale, for both targets and for both the fast and slow judgments. Specifically, the expert was judged as likely to be right at both time 1 and time 2, and the ignorant as likely to be wrong at both time 1 and time 2. To account for item variance, we also tested the judgments at the item level and obtained the same pattern of results (Table 1).


Table 1: Average responses for fast and slow judgments, per target, analyzed at subject- and item-level (t tests refer to differences from the midpoint of the scale; Study 2).
Target     Judgment   M      SD     t(n – 1)   p        d
Subject-level (n = 95)
Expert     fast       6.70   0.94   17.66      < .001   1.81
           slow       6.40   0.91   14.91      < .001   1.53
Ignorant   fast       4.54   1.02   –4.43      < .001   0.46
           slow       4.64   0.93   –3.73      < .001   0.38
Item-level (n = 20)
Expert     fast       6.68   0.25   29.55      < .001   6.61
           slow       6.39   0.24   25.67      < .001   5.74
Ignorant   fast       4.54   0.38   –5.41      < .001   1.21
           slow       4.65   0.32   –4.80      < .001   1.07

In order to test whether participants significantly revised their answers, we ran a repeated-measures ANOVA comparing fast versus slow judgments. There was a main effect of time (F(1, 94) = 7.27, p = .008), with slow judgments being lower overall, and a main effect of expertise (F(1, 94) = 169.60, p < .001), with experts being judged as more likely to be right than ignorants. More importantly, there was a time × expertise interaction (F(1, 94) = 13.64, p < .001): paired-samples t-tests show that, whereas judgments for experts decreased from the fast to the slow response (t(94) = 4.42, p < .001, d = 0.45), judgments for ignorants were, as expected, revised upwards (t(94) = 1.69, p = .095; one-tailed: p = .047; d = 0.11). Once again, testing at the item level yielded the same pattern of results: main effects of time (F(1, 19) = 16.41, p < .001) and expertise (F(1, 19) = 602.98, p < .001), as well as a time × expertise interaction (F(1, 19) = 119.10, p < .001). Paired-samples t-tests show that judgments for experts decreased from the fast to the slow response (t(19) = 11.13, p < .001, d = 2.49) and judgments for ignorants increased (t(19) = 3.29, p = .004, d = 0.74).[1]
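For illustration, the paired follow-up contrasts can be computed as sketched below (hypothetical rating arrays stand in for the actual data, which are available on OSF):

    import numpy as np
    from scipy import stats

    # Hypothetical mean ratings per participant (1-9 scale), one array per cell
    # of the time (fast/slow) x expertise (expert/ignorant) design.
    expert_fast = np.array([6.9, 6.5, 7.1, 6.2, 6.8])
    expert_slow = np.array([6.4, 6.3, 6.7, 6.0, 6.5])
    ignorant_fast = np.array([4.4, 4.7, 4.2, 4.8, 4.5])
    ignorant_slow = np.array([4.6, 4.8, 4.5, 4.9, 4.6])

    # Did ratings for each target change from the fast to the slow response?
    print(stats.ttest_rel(expert_fast, expert_slow))      # downward revision
    print(stats.ttest_rel(ignorant_fast, ignorant_slow))  # upward revision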

4 Study 3

This study has five goals: 1) to generalize the effect to a new knowledge domain (arts and culture); 2) to test the effect with a new measure (a summary estimate of the number of correct responses provided by the other person); 3) to assess the explicit heuristic belief that ignorance implies error; 4) to test the relation between this belief and the estimated number of correct responses provided by the ignorant person; and 5) to test the effect in a between-subjects design. Whereas Studies 1-2 manipulated the expertise-vs.-ignorance cue within subjects, a between-subjects design prevents explicit comparison between the expert and ignorant conditions. It is well documented in the literature on judgment and decision making that within- vs. between-subjects designs can modulate effects, making them larger or smaller, or even reversing them (e.g., Erlandsson, 2021; González-Vallejo & Moran, 2001; Hsee, 1996; Krüger et al., 2014), and the same might apply to the present effect. The direction of any such difference is not easy to anticipate, however. The effect might emerge only in within-subjects designs, which might encourage participants to overgeneralize the sound heuristic that expertise signals accuracy to the irrational belief that lack of expertise signals inaccuracy. Or, on the contrary, the juxtaposition of expertise and ignorance could make it easier to realize the validity of one cue and the invalidity of the other.

4.1 Method

4.1.1 Participants

One hundred fifty-seven undergraduate students (127 Female, 5 who did not disclose) participated voluntarily. Participants were aged 17-46 (M = 19.44, SD = 3.55).

4.1.2 Materials

Ten 2-choice questions about Arts & Culture were collected from a game of Trivial Pursuit (see Appendix IV). The questions were selected with the criterion that none of the authors knew the correct answer.

4.1.3 Procedure

The study was carried out in pen-and-paper format. Participants first answered 10 2-choice Arts & Culture questions by circling the response option they believed to be correct. On the following page, participants were asked to imagine a hypothetical person, about whom they read a short description, and to predict how many of the previous questions this person would answer correctly. Finally, participants rated their agreement with three statements meant to gauge their heuristic beliefs (e.g., “Just like an expert on a subject is probably correct when answering questions about that subject, a person who is ignorant about that subject is probably incorrect when answering questions about that subject”).

For half the participants, the hypothetical person was described as an expert on arts and culture, and for the other half as ignorant about arts and culture. Additionally, the gender of this target person was counterbalanced (see Appendix IV for full instructions).

4.2 Results

One-sample t-tests show that the estimated number of correct responses differed significantly from 5, both in the expert condition (t(76) = 15.07, p < .001, d = 1.71) and in the ignorant condition (t(74) = –10.53, p < .001, d = 1.22), with participants expecting mostly correct responses for experts (M = 8.00 out of 10, SD = 1.75) and mostly errors for ignorant responders (M = 2.75 out of 10, SD = 1.85).

The three items pertaining to the heuristic belief that ignorance implies error were aggregated (α = .72). Participants in general endorsed this heuristic belief (M = 6.84, SD = 1.45). Moreover, this belief correlated with estimates in the critical ignorant condition, such that the more participants endorsed the heuristic, the fewer correct responses they estimated for the ignorant (r = –.43, p < .001;[2] this correlation holds for each of the three items: r ≥ –.24, p ≤ .036).
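For illustration, the scale aggregation and the correlation reported here can be computed as sketched below (Cronbach's alpha from its standard formula; all numbers are hypothetical, not the study data):

    import numpy as np
    from scipy import stats

    def cronbach_alpha(items: np.ndarray) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical ratings on the three heuristic-belief items (1-9 scale)
    beliefs = np.array([[7, 8, 6], [6, 7, 7], [8, 8, 7], [5, 6, 4], [7, 6, 8]])
    estimates = np.array([2, 3, 1, 5, 3])  # estimated correct answers (out of 10)

    print(cronbach_alpha(beliefs))
    print(stats.pearsonr(beliefs.mean(axis=1), estimates))  # expect a negative r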

Lending further support to the hypothesis that estimates that the ignorant person would give more incorrect than correct responses are grounded in the heuristic belief tested here, participants who estimated that the ignorant would give exactly 5 correct responses (chance level) scored lower on the heuristic belief items (M = 6.56, SD = 1.34) than those who estimated fewer than 5 correct responses (M = 7.30, SD = 1.15; t(69) = 2.04, p = .045, d = 0.63). In a less conservative test comparing those who estimated fewer than 5 correct responses against those who estimated 5 or more,[3] this difference is clearer still (M = 7.30, SD = 1.15 vs. M = 6.47, SD = 1.32; t(73) = 2.55, p = .013, d = 0.70).

5 Study 4a

In Study 4 we asked participants to estimate the performance of an ignorant person on a set of knowledge questions with dichotomous response options (as in Study 3), but also on a set of questions where it should be clearer that success is at chance level: in Study 4a, this involved guessing whether coin tosses would come up heads or tails; in Study 4b, it involved choosing blindly between two response options. The goals were to 1) replicate the results of the previous studies, and 2) perform a sanity check on our paradigm, making sure that participants understand that random responses to dichotomous questions have a 50% chance of being correct. Finally, 3) we manipulated the order in which the two estimates (knowledge and coin tosses) were requested, to test whether first making estimates about obviously random events (i.e., coin tosses) would debias estimates about events that might otherwise be perceived as less random (i.e., knowledge).

5.1 Method

5.1.1 Participants

Two hundred participants were recruited through the Prolific online platform. Of these, 9 failed the attention check, and therefore the final sample size was 191 (113 Female, 59 Male, and 20 who chose another category or did not disclose their gender). Participants were aged 18–71 (M = 37.23, SD = 12.86).

5.1.2 Procedure

Participants were told to consider two games in which someone (referred to as X) would be asked to answer 10 questions, and that their task would be to estimate how many of those questions X would answer correctly. In one game (coin), they were told X would go through 10 coin tosses and, each time, try to guess which way the coin would land. In the other game (knowledge), they were told X would answer 10 two-choice Art History questions, and they were shown an example (“Who painted Woman Listening to Music?”: a) “Miró” or b) “Matisse”). Participants were informed that X knew nothing about either the coin (and thus had no idea what the outcomes of the coin tosses would be) or Art History (and thus had no idea what the correct answers were). After reading about each game, participants estimated how many of the 10 coin tosses/questions X would answer correctly. All participants made estimates for both games, in counterbalanced order. Unlike in Study 3, participants did not read all the questions; this was done so that participants’ inferences regarding the specific questions would not contaminate their judgments.

We included a captcha at the beginning of the study, and a multiple-choice attention check at the end: “In this study, you were asked to estimate how many questions of a certain topic a person would answer correctly. What was the topic of those questions?”. Participants who failed to select “Art History” were excluded from the analysis.

5.2 Results

One-sample t-tests against five show that people underestimated the number of correct answers for both the coin tosses (t(190) = 6.67, p < .001, d = 0.48) and the knowledge task (t(190) = 8.37, p < .001, d = 0.61). Still, a mixed ANOVA with estimated number of correct responses for knowledge versus coin-toss guessing as a within-subjects factor and order of presentation of those domains as a between-subjects factor revealed a main effect of domain (F(1, 189) = 22.54, p < .001), such that estimated performance was higher and closer to 50% for guessing coin tosses (M = 4.55, SD = 0.93) than for knowledge (M = 4.03, SD = 1.60).

There was also a main effect of order (F(1, 189) = 4.29, p = .040), such that estimated performance was higher when knowledge came before coin tosses than in the reverse order (M = 4.44, SD = 0.89 vs. M = 4.13, SD = 1.07). Admittedly, we expected this order effect to be either non-significant or to go in the opposite direction (i.e., going through the coin-toss task first should improve estimates for the knowledge task). Still, the largest effect was the predicted difference in estimated performance across tasks, which did not differ across order conditions (for the interaction, F < 1).
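As a rough illustration of this analysis, the sketch below approximates the two main effects with t-tests (a paired test for the within-subjects domain factor and an independent test for the between-subjects order factor) rather than the full mixed ANOVA; all numbers are hypothetical:

    import numpy as np
    from scipy import stats

    # Hypothetical estimates (out of 10), four participants per order condition.
    coin_first = {"coin": np.array([5, 4, 5, 4]), "knowledge": np.array([4, 4, 3, 4])}
    know_first = {"coin": np.array([5, 5, 4, 5]), "knowledge": np.array([4, 5, 4, 4])}

    # Within-subjects effect of domain: coin-toss vs. knowledge estimates.
    coin = np.concatenate([coin_first["coin"], know_first["coin"]])
    knowledge = np.concatenate([coin_first["knowledge"], know_first["knowledge"]])
    print(stats.ttest_rel(coin, knowledge))

    # Between-subjects effect of order: each participant's average estimate.
    print(stats.ttest_ind((coin_first["coin"] + coin_first["knowledge"]) / 2,
                          (know_first["coin"] + know_first["knowledge"]) / 2))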

In conclusion, participants discriminated between a domain where randomness was clear (guessing the outcome of coin tosses) and one where it was less obvious (answering knowledge questions). However, going through the obviously random domain first did not improve estimates for the less obviously random domain.

6 Study 4b

6.1 Method

6.1.1 Participants

Two hundred participants were recruited through the Prolific online platform. Of these, 2 failed the attention check (the same check used in Study 4a), and therefore the final sample size was 198 (124 Female, 66 Male, and 8 who chose another category or did not disclose their gender). Participants were aged 19–80 (M = 38.03, SD = 13.43).

6.1.2 Procedure

Study 4b was similar to Study 4a, with some differences. First, rather than with a coin toss, the knowledge condition was contrasted with a condition where the target also had to answer Art History questions, but did so blindly. Specifically, the target was described as having the answers to each question sealed inside closed envelopes and thus having to pick one of the two answers without knowing which option they were picking. In this way, we wanted to make it clear to participants that the answers were completely random.

Second, to avoid consistency or transfer effects across task conditions, we presented a different target person in each task (we referred to target X in the blind answer condition, and Y in the ignorant answer condition).

6.2 Results

One-sample t-tests against five showed that people once again underestimated the number of correct answers for both the blind responder (t(197) = 6.21, p < .001, d = 0.44) and the ignorant responder (t(197) = 6.88, p < .001, d = 0.49). Once again, a mixed ANOVA with estimated number of correct responses for knowledge versus blind choice as a within-subjects factor and order of presentation of those domains as a between-subjects factor revealed a main effect of domain (F(1, 196) = 7.13, p = .008), such that estimated performance was higher and closer to 50% for blind choices (M = 4.57, SD = 0.97) than for knowledge (M = 4.30, SD = 1.42).

There was also a main effect of order (F(1, 196) = 11.53, p = .001), such that estimated performance was higher when blind choice came before knowledge than in the reverse order (M = 4.67, SD = 0.84 vs. M = 4.21, SD = 1.09). This effect went in the opposite direction to that observed in Study 4a, but it is more in line with the predicted effect (going through the blind-choice task first should improve estimates for the knowledge task).

Finally, and as in Study 4a, the interaction effect was not significant (F < 1). Thus, participants again discriminated between a domain where randomness was clear (blind choices) and one where it was less obvious (answering knowledge questions), but going through the obviously random domain first did not improve estimates for the less obviously random domain: ignorance.

7 General Discussion

The present research (using different samples from different countries, online and in the lab) revealed a bias whereby people mistake ignorance for error. Participants in the present studies were shown the responses of other people who were described as either knowledgeable or ignorant about a certain topic. Participants in general believed that the knowledgeable person was likely to be correct (Studies 2–3), and therefore they were likely to follow this person’s response (Study 1). This makes perfect normative sense, as expertise is naturally associated with accuracy. However, the results also showed that participants relied on ignorance as a cue for accuracy – in this case, lack of accuracy. This is, we argue, normatively inappropriate, as the chance of a completely ignorant person being correct when answering questions with dichotomous response options can never be lower than 50%. And, as we argued in the introduction, even the slightest bit of knowledge, or the ability to extrapolate from related knowledge, would improve those odds.

Thus, the present results reflect the overgeneralization of a sound heuristic (expertise = accuracy) to cases where the heuristic fails (the absence of expertise does not imply inaccuracy). Nevertheless, a more nuanced reading of these results is that participants seemed somewhat aware of this bias, as the positive effect of expertise was larger than the negative effect of ignorance. This pattern, which was consistent across studies, enables a more generous interpretation of the present results.

Relatedly, one may ask whether this bias results from an intuitive heuristic, which one can correct upon further deliberation, or from an explicit belief. On the one hand, the results of Study 2 show that participants did revise their intuitive response, showing less bias when they could think more carefully about the problems. This suggests that the bias stems from faulty intuition, and that people can correct for it (yet another point in favor of a more generous reading of this effect).[4] On the other hand, the bias manifested in both the intuitive and the deliberate responses, and in Study 3 participants explicitly endorsed the heuristic that ignorance is synonymous with error, which suggests either that people realize it is a bias but correct for it insufficiently, or that they do not realize it is a bias to begin with.

Study 4 shows that the bias is not a consequence of a basic failure to understand that random dichotomous events have a 50% chance. Indeed, participants’ estimates were better (i.e., closer to 50%) when guessing how the ignorant person would perform in tasks where the outcome was more obviously random (coin tosses in Study 4a, or blindly choosing between the response options in Study 4b). Still, thinking about the obviously random task first did not debias estimates for the domain where randomness was less obvious (knowledge). We were unsure whether the order manipulation (i.e., going through the obviously random task before or after the ignorance task) would eliminate the bias, but it seemed to us a reasonable test of whether realizing that random = 50% chance of success would prompt better judgment in the focal ignorance task. The fact that it did not suggests that this is a robust and powerful bias, and that beliefs about randomness might be domain-specific: responding blindly is apparently credited with a better chance of success than choosing under ignorance.

One might ask whether it could sometimes be rational to follow this ignorance=error heuristic and give the opposite response to the one given by an ignorant person. Particularly for tricky problems involving counter-intuitive reasoning and judgment (e.g., the typical tasks used in the heuristics-and-biases research program), it might be a good strategy to choose the opposite of what an ignorant/illogical person responded. For such tasks, correct responders are often aware that there is an intuitive but incorrect response option that is quite appealing (as they too might have thought of giving that intuitive response at first; Mata, Ferreira, et al., 2013; Mata, 2019a, 2019b, 2020). However, as they are aware of the trick, they do not become less confident but rather grow more confident in their less consensual (but more logical) response. Indeed, research shows that correct responders in these tricky tasks are often quite good at using cues about how other people respond (e.g., quickly or slowly) to infer their responses (i.e., the incorrect/intuitive response or the correct/deliberative response; Mata & Almeida, 2014). Future research might explore whether the perceived ignorance of the other person can also serve as a valid cue when deciding on one’s own response and how confident to be about it. Consensus is a powerful cue for confidence judgments (i.e., the more people favor a certain opinion, the more correct it is perceived to be), but the perceived expertise versus ignorance of the opinion-givers might moderate this effect, mitigating or reversing it (i.e., confidently choosing the opposite of what an ignorant majority defends).

Another question concerns the size of the ignorance=error effect. This question might be answered by examining the effect size for aggregate results, or by analyzing individual differences. Concerning the former, and taking the conventional cut-offs whereby a Cohen’s d of 0.2 corresponds to a small effect, d ≥ 0.5 to a medium effect, and d ≥ 0.8 to a large effect, we observed a small effect in Studies 1, 2, and 4b, a medium effect in Study 4a, and a large effect in Study 3. (The small effect in Study 1 is not surprising: that study did not directly test the ignorance=error bias, but rather a consequence of it, whereby people choose to respond differently from the ignorant person, presumably because they think that person is incorrect, which makes for a stronger, or at least less direct, test of the hypothesis.)
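For reference, the one-sample effect sizes reported throughout follow the standard formula d = |M – μ0| / SD; for example, for the ignorant condition of Study 1:

    d = |4.41 – 5| / 1.90 ≈ 0.31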

In a further individual-differences analysis, we assessed whether the bias holds for most responders, or whether it results from a minority of responders manifesting the bias to extreme degrees (i.e., a large bias in a small subsample). This analysis compares the percentage of biased vs. non-biased responders. In Study 1, we compared those who gave the same response as the ignorant person on fewer than 50% of the trials vs. those who did so on exactly 50% of the trials: 48.5% vs. 25.7%. In Study 2, because participants gave several ratings, we instead compared the percentage of slow/deliberate ratings for the ignorant target that fell below the midpoint of the scale vs. ratings at the midpoint: 48.1% vs. 16.5%. In Studies 3 and 4, we compared responders who estimated that the ignorant person would give fewer than 50% correct responses vs. those who estimated exactly 50%: 77.3% vs. 17.3% in Study 3; 41.5% vs. 53.5% in Study 4a; and 36% vs. 58% in Study 4b. Thus, the proportion of biased responders is at least non-negligible, or even the majority in some of the studies. In any case, the average effect should not be attributed to a small minority displaying a large bias; the effect seems more generalized. These two analyses speak to the magnitude of the bias, both in terms of how large it is for the average responder and how pervasive it is across individuals; in either analysis, it is non-negligible. See Appendix VIII for histograms of responses in Studies 4a and 4b, which illustrate the sorts of distributions found in all studies.

Yet another question is how impactful the bias is in everyday judgment and decision making. Of course, the 50% odds mentioned earlier apply only to dichotomous response tasks. For scenarios with more than two options, it can be rational to assume that any given answer is more likely incorrect than correct. However, we do not propose that the bias consists simply in estimating a lower-than-50% chance of being correct. Rather, we suspect that for tasks with more response options (e.g., 4 instead of 2), people would still underestimate the probability of the ignorant’s response being correct (i.e., estimate less than 25%). Future research might test whether the present hypothesis holds for questions with more than two response options.

Many heuristics and biases are explained as cases of attribute substitution (Kahneman & Frederick, 2002), whereby one takes a cue (e.g., fluency or ease of thought) as an indicator of the dimension that one is actually trying to estimate (e.g., frequency or probability), thus replacing a hard question with a much easier computation. In the case of the present bias, the problem is not so much one of relying on more or less valid cues to make inferences about a related attribute, but rather one of overgeneralizing a sound rule to cases where it no longer holds, due to a logical fallacy of reverse inference. Such fallacies have been shown to underlie other cases of overgeneralization that give rise to heuristics and biases.[5] The rule may even be appropriate in some situations, such as politics: if we are uncertain about whether to support a proposal, but know that members of the opposing party support it, that alone may give us good reason to oppose it (though it can also lead to biased judgments; Cohen, 2003).
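In schematic form (our rendering of the logic spelled out in footnote 5, where X stands for “is an expert” and Y for “answers correctly”):

    valid heuristic:          X → Y (experts tend to be correct)
    affirming the consequent: from X → Y, inferring Y → X (invalid)
    denying the antecedent:   from X → Y, inferring not-X → not-Y (invalid)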

That said, there are many cases where real-life decisions are two-sided and where such a bias may operate. Indeed, we would argue that this bias is not limited to trivia games but applies to a wide range of domains. Several examples come to mind: whether to watch a certain movie; whether to vacation at a certain destination; whether to buy or sell a stock or currency; which of two graduate programs to choose; whether to vote for or against a policy in a referendum; which route or means of transportation to take to a certain destination; whether to take a safe bet or a risky gamble; whether to hire a job candidate; whether to try a certain health regime or exercise practice; whether to undergo a certain medical procedure; or, of greater relevance recently, whether to get vaccinated. The same bias might even hold when thinking about one’s own competence in a domain: for instance, true/false or multiple-choice exams are sometimes graded with a penalty for guessing, which forces people to choose whether to respond at all. If test-takers think they are more likely to be wrong than chance would dictate, they may choose not to guess even when they know enough to do better than chance. These are but a few of the many domains where life asks us to choose between door A and door B, and where the perceived expertise (or lack thereof) of another responder may sway us in one direction or the other: either prompting us to choose in line with their suggestion, or against it, if we believe them to be ignorant about the topic (as in Study 1).

Indeed, these studies revealed a curious case of social influence in which a person wishes to avoid the response of another person and ironically ends up being influenced by it. If social influence is defined as the influence that other people exert on one’s thoughts and behavior, then negative influence (i.e., choosing the opposite of what others chose) is just as much a case of social influence as positive influence (i.e., choosing the same as others). Moreover, we hope to have argued convincingly that this is a bias. The argumentative theory of reasoning and epistemic vigilance (Mercier & Sperber, 2011; Sperber et al., 2010) holds that rationality is well adapted to the social context in which people navigate, and that people are sensitive to the credibility of others and to the possibility of being wronged by them (because others might be deceitful or incompetent sources). We fully agree that epistemic vigilance and sensitivity to social context are adaptive. However, the strategies that one uses upon perceiving low credibility in others vary in their adaptiveness: they might be reasonable, as when people invest further deliberation in scrutinizing the responses of others whom they perceive to be biased (Janssen et al., 2021; Mata, Fiedler, et al., 2013; Trouche et al., 2016); but they might also be less reasonable, as when people uncritically give the opposite response, as in the present studies.

References

Alves, H., & Mata, A. (2019). The redundancy in cumulative information and how it biases impressions. Journal of Personality and Social Psychology, 117(6), 1035–1060.

Alves, H., Uğurlar, P., & Unkelbach, C. (2022). Typical is trustworthy: Evidence for a generalized heuristic. Social Psychological and Personality Science, 13(2), 446–455.

Ariely, D., & Levav, J. (2000). Sequential choice in group settings: Taking the road less traveled and less enjoyed. Journal of Consumer Research, 27(3), 279–290.

Arkes, H. R., & Ayton, P. (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125(5), 591–600.

Bago, B., & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801.

Berger, J., & Heath, C. (2007). Where consumers diverge from others: Identity signaling and product domains. Journal of Consumer Research, 34(2), 121–134.

Chan, C., Berger, J., & Van Boven, L. (2012). Identifiable but not identical: Combining social identity and uniqueness motives in choice. Journal of Consumer Research, 39(3), 561–573.

Cohen, G. L. (2003). Party over policy: The dominating impact of group influence on political beliefs. Journal of Personality and Social Psychology, 85(5), 808–822.

DeBono, K. G., & Harnish, R. J. (1988). Source expertise, source attractiveness, and the processing of persuasive information: A functional approach. Journal of Personality and Social Psychology, 55(4), 541–546.

De Neys, W. (2012). Bias and conflict: A case for logical intuitions. Perspectives on Psychological Science, 7(1), 28–38.

Erlandsson, A. (2021). Seven (weak and strong) helping effects systematically tested in separate evaluation, joint evaluation and forced choice. Judgment and Decision Making, 16(5), 1113–1154.

González-Vallejo, C., & Moran, E. (2001). The evaluability hypothesis revisited: Joint and separate evaluation preference reversal as a function of attribute importance. Organizational Behavior and Human Decision Processes, 86(2), 216–233.

Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70(2), 117–133.

Haws, K. L., Reczek, R. W., & Sample, K. L. (2017). Healthy diets make empty wallets: The healthy=expensive intuition. Journal of Consumer Research, 43(6), 992–1007.

Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15(4), 635–650.

Hsee, C. K. (1996). The evaluability hypothesis: An explanation for preference reversals between joint and separate evaluations of alternatives. Organizational Behavior and Human Decision Processes, 67(3), 247–257.

Hsee, C. K., Yang, Y., & Li, X. (2019). Relevance insensitivity: A new look at some old biases. Organizational Behavior and Human Decision Processes, 153, 13–26.

Hsee, C. K., Yang, Y., & Ruan, B. (2015). The mere-reaction effect: Even nonpositive and noninformative reactions can reinforce actions. Journal of Consumer Research, 42(3), 420–434.

Janssen, E. M., Velinga, S. B., De Neys, W., & van Gog, T. (2021). Recognizing biased reasoning: Conflict detection during decision-making and decision-evaluation. Acta Psychologica, 217, 103322.

Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49–81). Cambridge University Press.

Krüger, T., Mata, A., & Ihmels, M. (2014). The presenter’s paradox revisited: An evaluation mode account. Journal of Consumer Research, 41(4), 1127–1136.

Labroo, A. A., & Kim, S. (2009). The “instrumentality” heuristic: Why metacognitive difficulty is desirable during goal pursuit. Psychological Science, 20(1), 127–134.

Mata, A. (2019a). Further tests of the metacognitive advantage model: Counterfactuals, confidence and affect. Psychological Topics (Special Issue on Meta-Reasoning), 28, 115–124.

Mata, A. (2019b). Social metacognition in moral judgment: Decisional conflict promotes perspective taking. Journal of Personality and Social Psychology, 117, 1061–1082.

Mata, A. (2020). Metacognition and social perception: Bringing meta-reasoning and social cognition together. Thinking and Reasoning, 26, 140–149.

Mata, A., & Almeida, T. (2014). Using metacognitive cues to infer others’ thinking. Judgment and Decision Making, 9, 349–359.

Mata, A., Ferreira, M. B., & Sherman, S. J. (2013). The metacognitive advantage of deliberative thinkers: A dual-process perspective on overconfidence. Journal of Personality and Social Psychology, 105, 353–373.

Mata, A., Fiedler, K., Ferreira, M. B., & Almeida, T. (2013). Reasoning about others’ reasoning. Journal of Experimental Social Psychology, 49(3), 486–491.

McFarland, C., Cheam, A., & Buehler, R. (2007). The perseverance effect in the debriefing paradigm: Replication and extension. Journal of Experimental Social Psychology, 43, 233–240.

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.

Petty, R. E., Cacioppo, J. T., & Goldman, R. (1981). Personal involvement as a determinant of argument-based persuasion. Journal of Personality and Social Psychology, 41, 847–855.

Peysakhovich, A., & Rand, D. G. (2016). Habits of virtue: Creating norms of cooperation and defection in the laboratory. Management Science, 62(3), 631–647.

Reimer, T., Mata, R., & Stoecklin, M. (2004). The use of heuristics in persuasion: Deriving cues on source expertise from argument quality. Current Research in Social Psychology, 10(6), 69–84.

Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32, 880–892.

Sela, A., & Berger, J. (2012). Decision quicksand: How trivial choices suck us in. Journal of Consumer Research, 39(2), 360–370.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359–393.

Thompson, V. A., Turner, J. A. P., & Pennycook, G. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63(3), 107–140.

Trouche, E., Johansson, P., Hall, L., & Mercier, H. (2016). The selective laziness of reasoning. Cognitive Science, 40(8), 2122–2136.

Unkelbach, C., Fiedler, K., & Freytag, P. (2007). Information repetition in evaluative judgments: Easy to monitor, hard to control. Organizational Behavior and Human Decision Processes, 103, 37–52.

Vega, S., Mata, A., Ferreira, M. B., & Vaz, A. R. (2021). Metacognition in moral decisions: Judgment extremity and feeling of rightness in moral intuitions. Thinking & Reasoning, 27(1), 124–141.

Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of Personality and Social Psychology, 92(5), 821–833.

Zhu, M., Bagchi, R., & Hock, S. J. (2019). The mere deadline effect: Why more time might sabotage goal pursuit. Journal of Consumer Research, 45(5), 1068–1084.



Appendix

Appendix I. History Questions

(Correct answers in bold)

In which year was Rome founded?

753 B.C. vs. 743 B.C.

Which U.S. President signed the bill limiting the driving speed to 55 mph in order to conserve fuel?

Richard Nixon vs. Lyndon B. Johnson

What was the name of Charles Darwin’s famous grandfather?

Erasmus Darwin vs. Alexander Darwin

Romans used the plant silphium so much it went extinct. What were they using it for?

As a natural contraceptive vs. To treat aches and pains

The ‘Conversion of Saint Paul’ was painted by which artist?

Caravaggio vs. Botticelli

Besides Anne Boleyn, which other wife of Henry VIII was executed?

Catherine Howard vs. Jane Seymour

In which year was William Shakespeare born?

1564 vs. 1554

Roughly how much per acre did the United States pay Russia for the land that is now Alaska?

2 cents vs. 7 cents

Who was in the command module while Neil Armstrong and Buzz Aldrin were on the moon?

Michael Collins vs. Eugene Cernan

What was the first dynasty in China?

Xia dynasty vs. Qin dynasty

How many republics made up the former Soviet Union?

15 vs. 12

In which year did the French Revolution begin?

1789 vs. 1785

In 1781, William Herschel discovered which planet?

Uranus vs. Neptune

The ‘Great Northern War’ was primarily a contest between which two countries?

Russia and Sweden vs. Norway and Denmark

Who was the architect who rebuilt London after the Great Fire of 1666?

Sir Christopher Wren vs. Sir Christopher Robin

Which of these battles involved Huns?

Chalons vs. Cannae

What is the name of the first of the ancient Roman roads?

Via Appia vs. Via Augusta

How many manned moon landings have there been?

6 vs. 9

Which of these tanks was designed and operated by the United Kingdom?

Tog II vs. Leopard 2

Which King of England was faced with the Peasants’ Revolt in 1381?

Richard II vs. Henry IV

Appendix II. Study 1 Instructions

In this study, you will read a series of multiple-choice questions about History.

For each question you will be given two response options. You must indicate the one you believe to be correct.

For each question, before you indicate your response, you will also be shown the response of one of two previous participants. These participants’ names are: Susan [David] or David [Susan] (the names used here are fictitious).

This is what you need to know about them:

David [Susan] is an expert with a Doctorate in a subfield of history – he [she] has always loved history and focused all his [her] career choices on fully developing this interest.

On the contrary, Susan [David] is completely ignorant about history, and has always failed at it throughout her [his] school years. Because she [he] was always so bad at it, she [he] decided to graduate in a completely different area, in order to avoid having history classes, which she [he] hated.

To summarize, for each question you will see the response of the previous participant (this can be David [Susan]’s response or Susan [David]’s response), and then you will provide your response (this can be the same as the response of the previous participant, or it can be a different response).

As explained above, the key task in this study is about people’s knowledge of history.

In order for the study to work (and for science to be able to learn from it), it is essential that you do not look up the responses online (or ask anyone else). Our goal is to get your responses.

If you’re not sure about what the correct response is, make your best guess. You won’t be penalized for incorrect responses. Besides, it is perfectly understandable that you don’t know the responses to some of the questions. So, just indicate what you think might be the correct response.

(…)

[Question]

This is the response that Susan [David] gave (Susan [David] is the person who loves history and knows a great deal about it [hates history and knows nothing about it]): [Answer]

What do you think is the correct response?

Appendix III. Study 2 Instructions

In this study, you will read a series of questions about History.

For each question you will be given two response options. You will also be presented with the answer of one of two participants from a previous study. The names of these participants are Susan [David] and David [Susan] (the names used here are fictitious).

This is what you need to know about them:

Susan [David] is an expert in History, with a Doctorate in that field – she [he] has always loved history and focused all her [his] academic choices on developing that interest.

On the contrary, David [Susan] is completely ignorant about history and has always failed at that subject, in his [her] school years. Because he [she] was always so bad at it, he [she] decided to get a degree in a completely different field, in order to avoid having history classes, which he [she] hated.

Thus, for each question you will see both response options, as well as the response from one of the previous participants (this can be David [Susan]’s response or Susan [David]’s response), and you will have to judge what you think is the likelihood that the response given by the participant is correct.

More specifically, for each question we will ask you to provide a quick answer, the first that comes to mind. Next, you will have the opportunity to answer again, this time more calmly.

The focus of this study is on intuitive knowledge that people have about History. In order for this study to work (and for it to be able to contribute to science), it is essential that you do not look up the answers online nor ask someone else. Our goal is to get your responses, so it is important that you answer on your own, both in the first moment when you should give a quick answer, and in the second one when you can reflect further.

If you do not know the answer, try to guess. There will be no penalty for incorrect responses. Besides, it is perfectly understandable that you don’t know the responses to some of the questions.

(…)

Susan [David] saw the following question:

[Question]

The options were […]

The answer that Susan [David] gave was: […]

Please answer as fast as you can!

Do you think Susan [David]’s answer (the person who loves history and knows a lot [hates history and knows nothing] about the subject) is wrong or right?

[1 – Definitely wrong; 5 – Neither wrong nor right; 9 – Definitely right]

(…)

Susan [David] saw the following question:

[Question]

The options were […]

The answer that Susan [David] gave was: […]

Now take as long as you need!

Do you think Susan [David]’s answer (the person who loves history and knows a lot [hates history and knows nothing] about the subject) is wrong or right?

[1 – Definitely wrong; 5 – Neither wrong nor right; 9 – Definitely right]

Appendix IV. Study 3 Instructions

Note: The correct responses below are in bold, but were not so when participants saw them.

On the next page, you will find 10 general knowledge questions about Arts & Culture from a game of Trivial Pursuit. Your task will be to answer those 10 questions and, afterwards, to answer 4 additional questions. We ask that you answer every question in order, as you read them.

Please answer the following questions. If you do not know the answer, try to guess. Your answers are completely anonymous.

Who painted Woman Listening to Music?

a. Miró

b. Matisse

Which author ‘birthed’ The Fifth Child?

a. Patricia Highsmith

b. Doris Lessing

In which artistic field does the Portuguese Daniel Blaufuks excel?

a. Painting

b. Photography

Jorge Peixinho was a Portuguese known for:

a. Composing

b. Painting

Which of these writers was born first?

a. Edgar Allan Poe

b. Honoré de Balzac

Who composed, in 1827, Winter Journey?

a. Schubert

b. Beethoven

Mário Eloy was a Portuguese known for:

a. Painting

b. Writing

What is the nationality of composer Anton Bruckner?

a. Austrian

b. German

Who painted The Red Armchair?

a. Miró

b. Pablo Picasso

Who composed, in 1734, Christmas Oratorio?

a. Verdi

b. Bach

(…)

Thank you for your answers!

Now we ask that you imagine the following person:

Susan [David] is an expert in Art. She [He] got a degree in a specific field of art, and has always been interested in art in general, being profoundly knowledgeable in questions of literature, sculpture, painting, architecture, classical music, etc. She [He] has always been like that: even in school, her [his] best grades were in subjects related to Art and Culture.

/

Susan [David] is ignorant about Art and Culture. She [He] has always liked the clarity and objectivity of engineering, the subject in which she [he] graduated, and hates Art, being completely ignorant in questions of literature, sculpture, painting, architecture, classical music, etc. She [He] has always been like that: even in school, her [his] worst grades were in subjects related to Art and Culture.

If Susan [David] were to answer the 10 previous questions, how many do you think Susan [David] would answer correctly (0-10)?

__________

Finally, rate your agreement with the 3 statements below, on a scale from 1- Totally disagree to 9- Totally agree:

Just like an expert in a subject is probably correct when answering questions about that subject, a person who is ignorant about that subject is probably wrong when answering questions about that subject.

A person who knows a lot about the subject will get more questions right than wrong, just like a person who knows nothing about the subject will get more questions wrong than right.

The more someone knows about the subject, the closer they will be to getting 100% of the questions right, just like the less someone knows about the subject, the closer they will be to getting 0% of the questions right.

Appendix V. Study 4a Instructions

In this study, you will consider some scenarios where one might be asked to answer questions.

Your task will be to forecast how well someone might do in those scenarios.

More specifically, you will consider a person, called X.

X will be playing two games where they have to answer 10 questions in each. However, X is very ignorant about the topic of the questions – X basically knows nothing about it.

(…)

Consider X is playing a game in which the objective is to guess the result of a coin toss. The coin toss has two possible results — heads or tails. There are 10 coin tosses and, each time, X tries to guess which way the coin is going to land.

However, X knows nothing about this coin and, therefore, has no idea about what the outcomes of the coin tosses will be.

If X was faced with 10 such coin tosses, how many of those do you think X would guess correctly?

(…)

Consider X is playing a game in which the objective is to answer Art History questions — for example, “Who painted Woman Listening to Music”? Each question comes with two possible answers — for example, “Miró” and “Matisse”. There are 10 questions and, each time, X tries to select the correct option.

However, X knows nothing about this topic and, therefore, has no idea about what the correct answers will be.

If X was faced with 10 such questions, how many of those questions do you think X would answer correctly?

(…)

In this study, you were asked to estimate how many questions of a certain topic a person would answer correctly. What was the topic of those questions?

a. Art History

b. Engineering

c. World Politics

d. South American cuisine

e. Horoscope compatibility

Appendix VI. Study 4b Instructions

In this study, you will consider some scenarios where one might be asked to answer questions.

Your task will be to forecast how well someone might do in those scenarios.

(…)

Consider a person (named X) who is playing a game in which the objective is to answer questions about Art History.

For example, one question could be: “Who painted Woman Listening to Music?”

Each question comes with two possible answers that the person (X) has to choose from.

For example, for the question above, the options might be “Miró” and “Matisse”.

However, there is a problem:

X can’t see what the options are, as they are sealed inside closed envelopes (which cannot be opened). The envelopes only say “option A” and “option B”, and X doesn’t know which answers are inside them, so X has to choose blindly between the two options by picking one of the envelopes.

If X was faced with 10 such questions, how many of those questions do you think X would answer correctly?

(…)

Consider a person (named Y) who is playing a game in which the objective is to answer questions about Art History.

For example, one question could be: “Who painted Woman Listening to Music?”

Each question comes with two possible answers that the person (Y) has to choose from.

For example, for the question above, the options might be “Miró” and “Matisse”.

However, there is a problem:

Y knows nothing about this topic (Art History) and, therefore, has no idea about what the correct answers to these questions are.

If Y was faced with 10 such questions, how many of those questions do you think Y would answer correctly?

(…)

In this study, you were asked to estimate how many questions of a certain topic a person would answer correctly. What was the topic of those questions?

a. Art History

b. Engineering

c. World Politics

d. South American cuisine

e. Horoscope compatibility

Appendix VII. Study 2 Response Revision


[Figure: Response revision by target.]

Appendix VIII. Study 4a and 4b Histograms


Study 4a: [histograms]

Study 4b: [histograms]


*
CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal. https://orcid.org/0000-0003-3352-5455
#
Corresponding author. CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal. Email: andremata@psicologia.ulisboa.pt. https://orcid.org/0000-0001-5087-4919

We are grateful to Francisco Cruz, Joana Dias, and Paulo Moreira for their comments.

Copyright: © 2022. The authors license this article under the terms of the Creative Commons Attribution 4.0 License.

[1] We also ran a mixed model analysis in SPSS, which included random intercepts for participant and question. We found the same pattern of results, with a time × expertise interaction (F(1, 1898) = 48.91, p < .001) such that judgments for the ignorant target increased towards the midpoint of the scale (B = 0.11, t(1898) = 2.60, p = .009) and judgments for the expert decreased towards the midpoint of the scale (B = –0.30, t(1898) = –7.29, p < .001). An additional mixed model analysis with response times for the slow judgments as a predictor of response revision showed that the longer the response time, the greater the revision between fast and slow trials (B = 0.02, t(1866.50) = 6.93, p < .001).
[2] In the expert condition, scores on these items did not predict estimates of correct responses (r = .06, p = .603), which makes sense as 1) the items are framed syllogistically with the conclusion focused on the ignorant and not the expert, and 2) estimates in the expert condition could be understood as a no-conflict problem in the dual-process sense (De Neys, 2012) whereby both heuristic and logical reasoning converge on the same notion that expertise is a reliable predictor of good performance.
[3] We abstain from making normative claims about whether 5 is the only correct estimate for an ignorant person, or whether a slightly higher estimate is also defensible, as even the slightest knowledge about the subject, or extrapolation from knowledge about related subjects, might give an ignorant responder a chance of success above 50%.
[4] Anecdotally, it was also interesting to see that, in Study 3, where responses were hand-written in a questionnaire (and not typed on a computer), participants sometimes corrected their first response (which they wrote down and then crossed out). When this was the case in the ignorant condition, the initial estimate was always more biased (i.e., estimating that the ignorant had more incorrect responses) than the final/revised estimate.
[5] In most cases that we found in the literature, the logical fallacy involves affirming the consequent: inferring that if X → Y, then Y → X (e.g., “People learn that trustworthiness is typical and may form the reversed, overgeneralized inference that typical-is-trustworthy”, Alves et al., 2022, p. 448; “People generally associate important decisions with difficulty. Consequently, if a decision feels unexpectedly difficult, due to even incidental reasons, people may draw the reverse inference that it is also important”, Sela & Berger, 2012, p. 360; for yet another example, see the work of Labroo & Kim, 2009, on the instrumentality heuristic). In the case of believing that ignorance=error, the fallacy is also one of reverse inference, but the error is what is known in logic as denying the antecedent: inferring that if X → Y, then notX → notY. This is not a reversal of the causal arrow (as in the case of affirming the consequent), but rather a reversal of the states of X and Y: if X (expert) → Y (correct), then notX (ignorant) → notY (incorrect). And again, the response format is critical in determining when this mode of reasoning produces sound inferences (e.g., for questions with open-ended response formats, or multiple choice questions with more than two options) and when it leads to error (as in the present case of questions with dichotomous response options).
