Judgment and Decision Making, Vol. 16, No. 5, September 2021, pp. 1234-1266

Judging fast and slow: The truth effect does not increase under time-pressure conditions

Lena Nadarevic *  Martin Schnuerch #  Marlena J. Stegemann $

Abstract:

Due to the information overload in today’s digital age, people may sometimes feel pressured to process and judge information especially fast. In three experiments, we examined whether time pressure increases the repetition-based truth effect — the tendency to judge repeatedly encountered statements as “true” more often than novel statements. Based on the Heuristic-Systematic Model, a dual-process model in the field of persuasion research, we expected that time pressure would boost the truth effect by increasing reliance on processing fluency as a presumably heuristic cue for truth, and by decreasing knowledge retrieval as a presumably slow and systematic process that determines truth judgments. However, contrary to our expectation, time pressure did not moderate the truth effect. Importantly, this was the case for difficult statements, for which most people lack prior knowledge, as well as for easy statements, for which most people hold relevant knowledge. Overall, the findings clearly speak against the conception of fast, fluency-based truth judgments versus slow, knowledge-based truth judgments. Instead, the results are compatible with a referential theory of the truth effect that does not distinguish between different types of truth judgments. This theory assumes that truth judgments rely on the coherence of localized networks in people’s semantic memory, formed by both repetition and prior knowledge.


Keywords: truth effect, time pressure, dual-process theory, fluency, referential theory

1 Introduction

In his bestseller Thinking, fast and slow, Nobel laureate Daniel Kahneman (2011) describes two different ways of thinking: The first is fast, effortless, and intuitive; the second is slow, effortful, and deliberative. Other authors use attributes such as heuristic vs. systematic (Chaiken, 1980), experiential vs. rational (Epstein, 1994), or associative vs. rule-based (Sloman, 1996) to characterize two different types of cognitive processing (see Evans, 2008, for a review). In the following, we will stick to the heuristic/systematic distinction of the Heuristic-Systematic Model (HSM, Chaiken et al., 1989), a well-established dual-process model in the field of persuasion research.

According to the HSM, people engaging in systematic processing “access and scrutinize all informational input for its relevance and importance to their judgment task, and integrate all useful information in forming their judgments” (Chaiken et al., 1989, p. 212). For example, processing a particular statement systematically would involve retrieving relevant knowledge from memory in order to judge the statement’s truth or plausibility. In addition, further available judgment cues such as the credibility and the expertise of the communicator would also be considered for the judgment at hand. In contrast, heuristic processing means that “people focus on that subset of available information that enables them to use simple inferential rules, schemata, or cognitive heuristics to formulate their judgments and decisions” (Chaiken et al., 1989, p. 213). For example, instead of spending time and effort deliberating on a statement’s plausibility or truth, a heuristic statement evaluation would focus only on a few easily accessible cues such as the expertise or the likeability of the communicator (Chaiken, 1980). In the HSM, heuristic processing and systematic processing are conceptualized as two independent processing modes. Importantly, the two modes do not operate in an all-or-none fashion; rather, their prevalence is assumed to vary along a processing continuum. The lower the cognitive capacity or motivation to process information, the higher the prevalence of heuristic processing. With increasing levels of capacity and motivation, in contrast, more systematic processing is expected to come into play.

Thus, according to the HSM, the probability of judgments based solely on heuristic processing should be largest when people lack the time, the cognitive resources, or the motivation to engage in systematic processing. Based on this idea, the goal of the present work was to investigate whether time pressure would boost people’s reliance on repetition as a heuristic cue in judgments of truth.

1.1 Truth judgments and the truth effect

As described above, people may rely on their knowledge and/or on characteristics of the information source when assessing a statement’s truth. However, according to the feelings-as-information theory (Schwarz, 2012), subjective feelings can also serve as a judgment basis. One feeling that presumably informs truth judgments is processing fluency, which is “the subjective experience of ease with which a stimulus is processed” (Reber & Unkelbach, 2010, p. 564). Indeed, several studies demonstrated that different fluency manipulations (i.e., ease-of-processing manipulations) affect truth judgments. For instance, truth judgments are typically higher for statements presented in high color contrasts compared to low color contrasts (Reber & Schwarz, 1999; Unkelbach, 2007), for statements that are written in concrete language compared to abstract language (Hansen & Wänke, 2010), and for rhyming compared to non-rhyming statements (McGlone & Tofighbakhsh, 2000). However, the strongest and most robust truth effects are typically observed when processing fluency is induced by means of statement repetition (Parks & Toth, 2006; Silva et al., 2016; Vogel et al., 2020).

The first demonstration of the repetition-based truth effect was published by Hasher et al. (1977). The authors found that participants provided higher truth judgments for statements that had already been presented in two previous experimental sessions than for novel statements. Since then, the effect has been replicated many times with different statement types and under different contextual conditions (see Dechêne et al., 2010, for a meta-analysis). One of the few boundary conditions of the truth effect proposed in the literature is that the effect occurs only for statements for which people lack prior knowledge (Dechêne et al., 2010; Unkelbach & Stahl, 2009). Fazio et al. (2015) demonstrated, however, that this claim does not hold. The authors found comparable truth effects for statements they had classified as difficult and for statements they had classified as easy, based on general knowledge norms or on a post-experimental knowledge check. Fazio and colleagues concluded from this finding that inferring truth from fluency is often “an accurate and cognitively inexpensive strategy, making it reasonable that people sometimes apply this heuristic without searching for knowledge” (p. 1000). This interpretation suggests that fluency-based truth judgments are particularly likely under heuristic processing, whereas knowledge-based truth judgments require more systematic processing.

In contrast to this dual-processing assumption, the referential theory of the repetition-based truth effect proposed by Unkelbach and Rom (2017) does not distinguish between fluency-based and knowledge-based truth judgments. The theory is inspired by associative network models of semantic memory and proposes that truth judgments depend on (a) the number of activated references in the associative network, and (b) the coherence of these references. The more references a statement activates and the higher their coherence, the higher the probability that the statement will be judged as true. Moreover, the theory assumes that statement processing activates and links references in the associative network. As a consequence, repeated statements should have more coherently linked associative references than novel statements and thus should be more likely to be judged as true. The referential theory also predicts that the activation of many coherent references in the associative network elicits a fluency experience. Importantly, however, fluency is considered an outcome variable rather than a mediator between statement repetition and truth. To sum up, the referential theory assumes that truth judgments are informed by associative processes that depend on both statement repetition and prior knowledge. Hence, the same truth judgments should result irrespective of whether context conditions favor heuristic processing or systematic processing, at least if no further external cues are available.

1.2 The truth effect under different processing conditions

Garcia-Marques et al. (2016, Experiment 1) tested whether cognitive resources and motivation would moderate the repetition-based truth effect. Based on the fluency account and the dual-processing assumption, the authors hypothesized that the effect would be larger when judging the truth of statements under conditions favoring heuristic processing (e.g., low motivation or low cognitive capacity) compared to systematic processing (e.g., high motivation and high cognitive capacity). For this reason, the authors instructed participants either to make careful and accurate truth judgments (high-motivation condition) or to provide intuitive truth judgments (low-motivation condition). In addition, they asked participants either to memorize a string of eight letters over the course of the judgment phase (low-capacity condition) or to write down the string before the judgment phase started (high-capacity condition). In line with their hypothesis, the authors found that the truth effect was smallest in the high-motivation, high-capacity group compared with the groups that lacked motivation, capacity, or both. On the descriptive level, however, the difference in truth ratings (1 = sure it’s false to 7 = sure it’s true) between the high-motivation, high-capacity group (repeated: M = 5.5, new: M = 4.4) and the low-motivation, low-capacity group (repeated: M = 5.7, new: M = 4.3) was very small.

Likewise, in an unpublished study, Nadarevic and Rinnewitz (2011) tested whether the truth effect was larger when the context prompted heuristic processing than when it prompted systematic processing. Similar to Garcia-Marques et al. (2016), participants were either instructed to provide intuitive truth judgments or to make careful evaluations. However, contrary to predictions, these instructions did not moderate the truth effect. On the one hand, this finding might indicate that the truth effect is not moderated by conditions that favor heuristic processing or systematic processing. On the other hand, it could also be the case that participants did not comply with the task instructions. For instance, it is possible that participants in the intuitive group reasoned about the truth of the presented statements instead of providing intuitive responses. Thus, the aim of the present work was to test the presumed effect of processing mode (heuristic vs. systematic) on the truth effect using a presumably stronger experimental manipulation: time pressure. Moreover, studying the truth effect under time pressure is also of practical importance, as people may sometimes be in a rush when processing and judging information in everyday life.

According to the HSM, time pressure should increase the probability of heuristic processing and decrease the probability of systematic processing. Indeed, there is empirical evidence that response-time limits impair systematic processing in deductive reasoning (Evans & Curtis-Holmes, 2005; Schroyens et al., 2003). Moreover, Hilbig et al. (2012) found that time pressure fostered reliance on the recognition heuristic — the simple judgment rule that recognized objects score higher on a criterion than unrecognized objects in a comparative judgment task (e.g., Which of two cities is more populous?). More precisely, participants used the recognition heuristic more often in a time-pressure group, in which they had to provide their judgment within 2000 ms, than in a control group without a time limit. In contrast, the use of further knowledge beyond recognition was lower in the time-pressure group compared to the control group. But does time pressure also increase the truth effect? If the truth effect is based on fluency-truth attributions (as suggested by the fluency account), it is plausible that time pressure moderates the truth effect by enhancing reliance on fluency as a presumably intuitive cue for truth and by reducing the likelihood of knowledge retrieval as a presumably slow and deliberative process that determines truth judgments. If, however, truth judgments depend on the number of coherently linked references in people’s semantic network (as suggested by the referential theory), time pressure should not moderate the truth effect. This is because truth judgments should be based on a single, associative process that depends on both repetition and knowledge.

In order to test the two proposed mechanisms and gain a better understanding of the cognitive processes underlying the truth effect, we conducted three experiments that tested whether the effect would increase under time-pressure conditions. As the referential theory had not yet been published when we started this project, our research hypothesis was derived from the fluency account and the dual-processing assumption of the HSM. That is, we tested the hypothesis that time pressure boosts the truth effect. The materials and the data of all three experiments are publicly available via the Open Science Framework (OSF, https://osf.io/687bn).

2 Experiment 1

Experiment 1 compared the truth effect between two groups. Participants in a time-pressure group were informed that they would have very little time to provide their judgments and thus should respond as fast as possible. The response deadline for each truth judgment in this group was only 650 ms. Following the reasoning of Białek and De Neys (2017), this extremely short deadline was intended to rule out any systematic processing. In the no-pressure group, in contrast, participants did not have a time constraint for providing their truth judgments. Based on the assumptions of the HSM, truth judgments in this group were thus expected to be influenced by heuristic processes as well as systematic processes. To further accentuate processing differences between groups (if present), we decided to foster systematic processes in the no-pressure group by instructing participants to take their time to evaluate each statement and to think carefully about their judgment before providing a response.

2.1 Method

2.1.1 Power analysis

To determine the required sample size, we ran a power analysis with G*Power (Faul et al., 2007) using the following parameters. We set the error probabilities to α = β = .05 and the estimated effect size for the time pressure by statement repetition interaction to f = .25. This effect size, which is equivalent to ηp2 = .059, is considered a medium effect size (Cohen, 1988). Furthermore, we assumed a repeated-measures correlation of ρ = .50 among truth judgments for repeated and new statements. Based on these parameters, the required minimum sample size was N = 54.
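For readers without access to G*Power, the following Python sketch approximates G*Power’s “repeated measures, within-between interaction” procedure under the parameters above. It is our own illustration, not the authors’ code; the noncentrality formula λ = f²·N·m/(1 − ρ) is the one G*Power uses by default for this test, and the same computation with f = .15 corresponds to the power analysis of Experiment 2.

    from scipy import stats

    def power_within_between(n_total, f, n_groups=2, n_meas=2, rho=0.5, alpha=0.05):
        """Power of the within-between interaction in a mixed ANOVA, using
        G*Power's default noncentrality lambda = f^2 * N * m / (1 - rho)."""
        lam = f ** 2 * n_total * n_meas / (1 - rho)
        df1 = (n_groups - 1) * (n_meas - 1)
        df2 = (n_total - n_groups) * (n_meas - 1)
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        return stats.ncf.sf(f_crit, df1, df2, lam)   # power = P(F' > F_crit)

    f = 0.25                                # eta_p^2 = f^2 / (1 + f^2), about .059
    n = 4
    while power_within_between(n, f) < 0.95:
        n += 1
    print(n)                                # should reproduce the reported N = 54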

2.1.2 Participants

Seventy participants (55 female, 15 male), all of whom had been recruited at the University of Mannheim, took part in the experiment. The mean age of our sample was M = 22.0 years (SD = 4.4). Participants gave written informed consent prior to the experiment and received course credit for their participation.

2.1.3 Material

Three hundred true statements were selected from a trivia book (Ebert, 2009). For half of these statements, a false version was created. In a pretest, twenty participants rated the truth of the 300 trivia statements (150 in their original version, 150 in a false version) on a seven-point rating scale ranging from definitely false (1) to definitely true (7). Based on this pretest, we selected 120 statements with truth ratings ranging between M = 3.4 and M = 4.9 and standard deviations smaller than SD = 2.0. Descriptively, mean pretested truth ratings were slightly higher for the false statements (M = 4.2) compared to the true ones (M = 4.1). Thus, as is typical in truth-effect studies, the selected statements (e.g., It takes 14 days for a wombat to digest a meal) were maximally ambiguous with respect to their factual truth status. Finally, we divided the 120 statements into three statement sets of 20 true and 20 false statements each. All sets were comparable with respect to the statements’ pretested truth ratings (M = 4.1) and word counts (Set A: M = 8.3; Set B: M = 8.4; Set C: M = 8.4) and were counterbalanced across experimental phases.
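As an illustration of this selection step, the sketch below filters a toy pretest table by the two criteria named above (mean rating between 3.4 and 4.9, SD below 2.0). The data frame, column names, and values are made up for illustration and are not the authors’ pretest file.

    import pandas as pd

    # Toy pretest table; column names and values are hypothetical
    pretest = pd.DataFrame({
        "statement":  ["s1", "s2", "s3", "s4"],
        "mean_truth": [4.1, 5.3, 3.6, 2.9],
        "sd_truth":   [1.6, 1.2, 2.3, 1.5],
    })

    # Keep only maximally ambiguous statements: mean rating between 3.4 and 4.9
    # and standard deviation below 2.0
    ambiguous = pretest[pretest["mean_truth"].between(3.4, 4.9)
                        & (pretest["sd_truth"] < 2.0)]
    print(ambiguous)  # only s1 meets both criteria in this toy example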

2.1.4 Procedure

Upon arrival, participants were randomly assigned to the time-pressure group (n = 36) or the no-pressure group (n = 34). Participants then performed the experiment on standard PCs running E-Prime software.

The first phase of the experiment — the exposure phase — aimed at familiarizing participants with a list of statements. We told participants that we had asked several men and women to come up with true and false statements. Moreover, we instructed participants that their task was to decide for each statement whether it had been generated by a man or a woman. We used this cover story to make participants process the statements without focusing on the statements’ veracity. After a short practice block of ten statements, the computer presented 80 statements in random order. The statements were drawn from two statement sets. Each trial started with the presentation of a fixation cross which was displayed in the middle of the screen for 1500 ms. Subsequently, the fixation cross was replaced by a statement. After 3000 ms, the two response options man and woman appeared below the statement until participants provided their response by keypress (“d” for man and “k” for woman), which took on average M = 1443 ms (SD = 787). After the exposure phase, participants worked for five minutes on a nonverbal filler task.

In the second phase of the experiment — the judgment phase — we instructed participants to decide whether presented statements were true or false. Importantly, however, in the time-pressure group participants had to provide their judgment within 650 ms, whereas there was no response deadline in the no-pressure group. After a practice block of ten statements, the computer presented 80 statements in random order. The statements were drawn from two statement sets. One set had already been presented in the exposure phase whereas the other set was new. As in the exposure phase, each trial started with a fixation cross in the middle of the screen displayed for 1500 ms, which was then replaced by a statement. After 3000 ms, the two response options true and false appeared below the statement until participants provided their response by keypress (“d” for true and “k” for false). Participants in the time-pressure group were instructed to respond as fast as possible; if someone failed to respond within the 650 ms deadline, the prompt Please respond faster! was displayed for 2000 ms. In contrast, participants in the no-pressure group were instructed to take their time to evaluate a statement and to think carefully about their judgment before providing a response. As there was no time limit in this group, participants could take as much time as needed for their judgments.

2.1.5 Design

The research design comprised the within-subject factors statement repetition (repeated vs. new) and truth status (true vs. false) and the between-subject factor group (time pressure vs. no pressure).

2.2 Results

In the time-pressure group, participants had missed the response deadline in 7.9% of the trials. We excluded these trials from all analyses.

2.2.1 Response times

To check whether the 650 ms response deadline had indeed induced sufficient time pressure, we compared participants’ mean response times (RTs) between groups. In this and the following experiments, statistical tests are based on log-transformed RTs, whereas descriptive statistics refer to untransformed RTs in ms. A Welch’s t-test confirmed that RTs were significantly longer in the no-pressure group compared to the time-pressure group (no ‍pressure: M = 2327, SD = 1557; time ‍pressure: M = 346, SD = 40; t(35.97) = 12.67, p < .001, d = 3.07, CI = [2.22, 3.91]).1 We also investigated possible effects of statement repetition (repeated vs. new) and truth status (true vs. false) on participants’ mean RTs. Due to unequal variances between groups, we ran two separate 2 × 2 repeated measures ANOVAs. Figure ‍1 displays the descriptive results.
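A manipulation check of this kind can be reproduced with standard tools. The sketch below runs Welch’s t-test on log-transformed per-participant mean RTs; the data are simulated stand-ins, not the observed values.

    import numpy as np
    from scipy import stats

    # Simulated per-participant mean RTs in ms (values are illustrative only)
    rng = np.random.default_rng(0)
    rt_no_pressure = rng.normal(2327, 1557, 34).clip(min=300)
    rt_pressure = rng.normal(346, 40, 36)

    # Welch's t-test (unequal variances) on log-transformed RTs
    t, p = stats.ttest_ind(np.log(rt_no_pressure), np.log(rt_pressure),
                           equal_var=False)
    print(f"Welch's t = {t:.2f}, p = {p:.3g}")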


Figure 1: Mean RTs (in ms) in Experiment 1 as a function of group, statement repetition, and truth status. Error bars represent standard errors of the means.

In the time-pressure group, participants made faster truth judgments for repeated statements compared to new statements (repeated: M = 324, SD = 40; new: M = 369, SD = 45; F(1, 35) = 76.94, p < .001, ηp2 = .69, CI = [.54, .78]). Likewise, participants in the no-pressure group responded faster to repeated statements than to new statements (repeated: M = 2030, SD = 1634; new: M = 2624, SD = 1545; F(1, 33) = 65.61, p < .001, ηp2 = .67, CI = [.50, .77]). There were no further significant main effects or interactions (ps ‍> ‍.05).

2.2.2 Truth judgments

In order to investigate the effect of time pressure on the truth effect, we ran a 2 (statement repetition: repeated vs. new) × 2 (truth status: true vs. false) × 2 (group: time pressure vs. no pressure) ANOVA with the mean proportion of true judgments (PTJs) as the dependent variable. Figure ‍2 displays the descriptive results.


Figure 2: Mean proportion of true judgments (PTJs) in Experiment 1 as a function of group, statement repetition, and truth status. Error bars represent standard errors of the means.

As predicted, PTJs were higher for repeated statements compared to new statements (repeated: M = .61, SD = .18; new: M = .51, SD = .13; F(1, 68) = 18.14, p < .001, ηp2 = .21, CI = [.08, .35]). That is, we replicated the truth effect. Importantly, however, we did not find a main effect of time pressure on PTJs (F(1, 68) = 0.13, p = .716, ηp2 < .01, CI = [.00, .05]), nor did time pressure interact with statement repetition (F(1, 68) = 0.04, p = .843, ηp2 < .01, CI = [.00, .03]). Hence, time pressure did not moderate the truth effect. PTJs were higher for false statements than for true statements (false: M = .61, SD = .12; true: M = .52, SD = .15; F(1, 68) = 38.74, p < .001, ηp2 = .36, CI = [.22, .49]). This finding confirms that participants had no knowledge about the actual truth status of the presented statements and even considered the false statements to be more plausible than the true ones.2 There were no further significant main effects or interactions (ps ‍> ‍.05).
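Because the critical repetition by group interaction in a 2 × 2 mixed design amounts to comparing per-participant truth-effect scores (repeated minus new PTJs) between groups, it can be checked with two simple t-tests, as in the sketch below. All data and group sizes are simulated for illustration; for a 2 × 2 mixed design, the interaction F equals the squared t from the between-group comparison of difference scores.

    import numpy as np
    from scipy import stats

    # Simulated per-participant mean PTJs (all values are illustrative only)
    rng = np.random.default_rng(1)
    n_tp, n_np = 36, 34
    ptj_rep = np.concatenate([rng.normal(.62, .18, n_tp), rng.normal(.61, .18, n_np)])
    ptj_new = np.concatenate([rng.normal(.51, .13, n_tp), rng.normal(.51, .13, n_np)])
    group = np.array(["pressure"] * n_tp + ["no pressure"] * n_np)

    truth_effect = ptj_rep - ptj_new                 # per-participant truth effect

    # Overall truth effect: one-sample t-test on the difference scores
    t_rep, p_rep = stats.ttest_1samp(truth_effect, 0.0)

    # Repetition x group interaction: between-group t-test on the difference scores
    t_int, p_int = stats.ttest_ind(truth_effect[group == "pressure"],
                                   truth_effect[group == "no pressure"])
    print(f"truth effect:       t = {t_rep:.2f}, p = {p_rep:.3g}")
    print(f"repetition x group: t = {t_int:.2f}, p = {p_int:.3g}")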

2.3 Discussion

The finding that the truth effect did not increase under time-pressure conditions speaks against the hypothesis that time pressure increases fluency-based truth judgments. However, it is problematic to draw conclusions based on a single null result, as this result could have various causes. For instance, the choice of statements might not have been ideal, because Experiment 1 involved only difficult statements. Possibly, participants in the no-pressure group held no relevant knowledge to process the statements systematically (e.g., to generate reasons for and against the truth of a statement) and thus may often have relied on fluency as well. Note, however, that this post-hoc explanation cannot account for the fact that time pressure did not increase reliance on fluency any further. The mean proportion of repeated statements judged as “true” was clearly below ceiling (time-pressure group: 62.2%, no-pressure group: 60.7%). Thus, in principle, there would have been room for an increase in fluency-based truth judgments under time pressure.

Another critical point is that all statements were presented for 3000 ms before the response options appeared on the screen. Pretest participants had perceived 3000 ms as sufficiently long for reading a statement but as too short to additionally deliberate on the statement. Nevertheless, a fixed presentation time is of course problematic because it does not account for individual differences in reading times. It is thus possible that slow readers in the time-pressure group were unable to read all statements whereas fast readers already formed a truth judgment before the response options appeared. Thus, individual differences in reading times might have contributed to the null effect by adding noise to the data. Even more critically, participants in the time-pressure group might have increased their reading speed in order to save time for their judgment and reduce time pressure. Experiment 2 addressed these concerns.

3 Experiment 2

In Experiment 2, we conceptually replicated Experiment 1 but increased the sample size to ensure sufficient statistical power even for a relatively small effect of time pressure on the truth effect. Furthermore, this time we presented not only difficult statements but also easy ones. Because participants should possess relevant knowledge for these easy statements, we expected them to retrieve and use this knowledge when they had sufficient time to do so. Hence, time pressure should moderate the truth effect at least for easy statements. To prevent individual differences in reading times or strategic shifts in reading speed from interfering with the time-pressure manipulation, all statements were presented auditorily. Because the truth effect does not depend on a statement’s presentation modality (Dechêne et al., 2010), this procedural change should not affect the overall truth effect. Finally, in order to reduce missing responses in the time-pressure group, we increased the response deadline to 1000 ms. In contrast, similar to other studies that investigated time-pressure effects on judgment and decision making (Rand et al., 2012; Suter & Hertwig, 2011), we implemented a response delay in the no-pressure group in order to prevent hasty responses. Participants in this group had to wait for 1000 ms after statement offset until they could provide their truth judgment. This change aimed to strengthen the presumed processing differences between groups.

3.1 Method

3.1.1 Power analysis

Similar to Experiment 1, we ran a power analysis using G*Power, setting α = β = .05. Again, we assumed a correlation of ρ = .50 among truth judgments for repeated and new statements. However, in contrast to Experiment 1, we chose a much more conservative effect-size estimate of f = .15 (equivalent to ηp2 = .022) for the expected effect of time pressure on the truth effect. This power analysis yielded a minimum sample size of N = 148.

3.1.2 Participants

One hundred and fifty participants were recruited at the University of Mannheim. One participant had to be excluded due to a program crash during the experiment. The remaining 149 participants (106 female, 43 male) had a mean age of M = 22.9 years (SD = 4.5). All participants gave written informed consent prior to the experiment and received course credit or six euros for their participation.

3.1.3 Material

We selected 120 pretested trivia statements, most of which had previously been used in other studies (Hilbig, 2012; Nadarevic et al., 2018; Newman et al., 2012; Unkelbach, 2007), and divided them into two statement sets. In each set, half of the statements were easy ones (more than 80% correct true/false classifications in a pretest) whereas the other half were difficult ones (less than 60% correct true/false classifications in a pretest). Statement sets were also balanced in terms of factual truth. That is, each set contained 30 true statements (15 easy and 15 difficult ones) and 30 false statements (15 easy and 15 difficult ones). All statements were audio recorded by a female speaker at normal speech rate, and which set appeared in the exposure phase was counterbalanced across participants.

3.1.4 Procedure

Upon arrival, participants were randomly assigned to the time-pressure group (n = 74) or the no-pressure group (n = 75). Participants were then instructed to put on headphones and to adjust the volume to their preferred level. The remaining procedure was similar to Experiment 1, except for the following changes.

In the exposure phase, participants listened to 60 statements of one statement set in random order. Each trial started with the presentation of a fixation cross which appeared for 1000 ms in the middle of the screen. Subsequently, a statement was presented via headphones, which took on average M = 4044 (SD = 947) ms. Participants’ task was to assign the statement to one of three semantic categories based on its content (history, science, or other). The response options appeared on the screen immediately after statement offset and stayed there until participants selected one of the three categories by mouse click. On average, participants needed M = 1561 (SD = 547) ms to provide their response. The exposure phase was followed by a five-minute nonverbal filler task.

Next, participants again listened to several statements, but this time had to judge whether a statement was true or false. This judgment phase started with a practice block of eight statements. Then, the 120 statements of the two statement sets (one repeated and one new) were presented in random order. As in the exposure phase, each trial started with a fixation cross displayed for 1000 ms followed by a statement presented via headphones. In the time-pressure group, the response options (true vs. false) appeared on the screen immediately after statement offset. Participants in this group had only 1000 ms to provide their judgment by keypress (“d” for true and “k” for false, or vice versa) and thus were instructed to respond as fast as possible. If participants failed to provide their response in time, the computer displayed a prompt to respond more quickly for 2000 ms. In the no-pressure group, participants were instructed to think carefully about each answer and to try to judge each statement as accurately as possible. To ensure this, response options were displayed with a 1000 ms delay after statement offset so that participants had to wait at least 1000 ms until they could enter their judgment. As there was no time limit in this group, participants could take as much time as needed for their judgments. At the end of the experiment, participants were asked various control questions.

3.1.5 Design

The research design was identical to Experiment 1 except that difficulty (easy vs. difficult) was included as an additional within-subject factor. Moreover, in contrast to Experiment 1, we counterbalanced response-key assignments (left key: true, right key: false vs. left key: false, right key: true) between participants.

3.2 Results

Participants in the time-pressure group had missed the response deadline in 5.6% of the trials. We excluded these trials from all analyses.

3.2.1 Response times

As in Experiment 1, statistical analyses are based on log-transformed RTs whereas descriptive statistics refer to the untransformed RTs in ms. In the no-pressure group, the 1000 ms response delay was not included in the RTs for the following analyses. Still, participants’ mean RTs were significantly longer in the no-pressure group compared to the time-pressure group, as indicated by a Welch’s t-test (no pressure: M = 1370, SD = 785; time pressure: M = 424, SD = 76; t(103.98) = 11.40, p < .001, d = 1.86, CI = [1.45, 2.27]). Similar to Experiment 1, we also analyzed participants’ RTs by means of a 2 (statement repetition: repeated vs. new) × 2 (truth status: true vs. false) × 2 (difficulty: easy vs. difficult) repeated measures ANOVA, which we conducted separately for each group. Figure 3 displays the descriptive results.

In the time-pressure group, RTs were again faster for repeated statements compared to new ones (repeated: M = 397, SD = 81; new: M = 452, SD = 79; F(1, 73) = 106.34, p < .001, ηp2 = .59, CI = [.47, .68]). This time, however, the magnitude of this repetition effect varied as a function of truth status (F(1, 73) = 5.15, p = .026, ηp2 = .07, CI = [.00, .18]). Moreover, RTs were faster for true statements compared to false ones (true: M = 406, SD = 77; false: M = 443, SD = 80; F(1, 73) = 65.37, p < .001, ηp2 = .47, CI = [.34, .58]), and for easy statements compared to difficult ones (easy: M = 419, SD = 79; difficult: M = 430, SD = 79; F(1, 73) = 7.42, p = .008, ηp2 = .09, CI = [.01, .21]). In the no-pressure group, RTs were also faster for repeated statements compared to new ones (repeated: M = 1306, SD = 773; new: M = 1433, SD = 838; F(1, 74) = 42.88, p < .001, ηp2 = .37, CI = [.23, .49]), and for true statements compared to false ones (true: M = 1302, SD = 786; false: M = 1438, SD = 808; F(1, 74) = 25.61, p < .001, ηp2 = .26, CI = [.13, .39]). However, as illustrated in Figure 3, the latter effect was qualified by statement difficulty (F(1, 74) = 27.02, p < .001, ηp2 = .27, CI = [.13, .40]). There were no further significant main effects or interactions (ps ‍> ‍.05).


Figure 3: Mean RTs (in ms) in Experiment 2 as a function of group, statement repetition, truth status, and difficulty. Error bars represent standard errors of the means.

3.2.2 Truth judgments

We conducted a 2 (statement repetition: repeated vs. new) × 2 (truth status: true vs. false) × 2 (difficulty: easy vs. difficult) × 2 (group: time pressure vs. no pressure) ANOVA with mean PTJs as the dependent variable. Figure ‍4 displays the descriptive results.


Figure 4: Mean proportion of true judgments (PTJs) in Experiment 2 as a function of group, statement repetition, truth status, and difficulty. Error bars represent standard errors of the means.

PTJs were higher for repeated statements than for new statements (repeated: M = .57, SD = .12; new: M = .52, SD = .10; F(1, 147) = 22.20, p < .001, ηp2 = .13, CI = [.06, .22]). That is, we again replicated the truth effect. Unlike in Experiment 1, PTJs were affected by time pressure with overall higher PTJs in the time-pressure group compared to the no-pressure group (time ‍pressure: M = .56, SD = .09; no ‍pressure: M = .52, SD = .09; F(1, 147) = 7.37, p = .007, ηp2 = .05, CI = [.01, .12]). Importantly, however, there was no repetition by time pressure interaction (F(1, 147) = 0.48, p = .490, ηp2 < .01, CI = [.00, .04]). Hence, as in Experiment 1, the truth effect was not moderated by time pressure. In contrast, the truth effect was moderated by the statements’ difficulty, as indicated by a repetition by difficulty interaction (F(1, 147) = 6.21, p = .014, ηp2 = .04, CI = [.00, .10]). Simple main effect analyses showed that this interaction was due to a larger truth effect for difficult statements (F(1, 147) = 23.20, p < .001, ηp2 = .14, CI = [.06, .22]) compared to easy statements (F(1, 147) = 8.48, p = .004, ηp2 = .06, CI = [.01, .12]). Importantly, however, the truth effect was present for both statement types and time pressure did not moderate the truth effect for either statement type. Otherwise, we should have observed a three-way interaction of repetition, difficulty, and time pressure, which was not the case (F(1, 147) = 1.40, p = .239, ηp2 < .01, CI = [.00, .05]). Overall, PTJs were higher for true statements compared to false statements (true: M = .67, SD = .10; false: M = .42, SD = .12; F(1, 147) = 582.06, p < .001, ηp2 = .80, CI = [.75, .83]). Not surprisingly, this main effect of truth status was qualified by the statements’ difficulty (F(1, 147) = 980.01, p < .001, ηp2 = .87, CI = [.84, .89]). The factual truth status of a statement only affected PTJs for easy statements (F(1, 147) = 1222.19, p < .001, ηp2 = .89, CI = [.87, .91]), but not for difficult statements (F(1, 147) = 2.04, p = .155, ηp2 = .01, CI = [.00, .06]). There were no further significant main effects or interactions (ps ‍> ‍.05).

3.3 Discussion

We again replicated the truth effect, for difficult as well as for easy statements. This is in line with Fazio et al.’s (2015) observation that knowledge does not protect against illusory truth. Still, it is worth mentioning that we observed a smaller truth effect for easy statements. Most importantly, however, time pressure did not moderate the truth effect — neither for difficult nor for easy statements. Thus, the findings of Experiment 2 are consistent with those of Experiment 1. Yet, before rejecting the hypothesis that the truth effect increases under time-pressure conditions, it is reasonable to address the following concerns. First, one may argue that participants in the no-pressure group lacked motivation to engage in systematic processing. Note, however, that RTs in the no-pressure group were still considerably longer than RTs in the time-pressure group. Second, one may again criticize that we implemented time pressure by means of a fixed response deadline. Because people differ in their speed of information processing, this deadline might have been too short for some participants to judge the truth of the statements, whereas it might have been too long to even induce time pressure for others. Finally, the verbatim repetition of statements might also have been problematic in this regard, as recognizing a repeated statement after hearing its first word(s) may have prompted truth judgments ahead of time. We addressed these remaining concerns in Experiment 3.

4 Experiment 3

In Experiment 3, all participants received a written motivational speech and individual performance feedback in order to increase their motivation to provide accurate judgments. Additionally, the experiment included eight very simple control statements that served to identify inattentive or unmotivated participants. Unlike in the previous studies, repeated statements appeared in paraphrased form to prevent participants from recognizing and judging these statements immediately upon hearing the first word(s). To further strengthen the need to listen carefully to the entire statement before making a judgment, we additionally presented contradictory repeated statements as distractor statements. Specifically, these contradictory distractors aimed at prompting participants to pay attention to each word of a statement, even if the statement’s topic was familiar from the exposure phase, and thus to prevent premature judgments. Moreover, importantly, Experiment 3 included an adaptive response deadline to account for individual differences in response speed.

4.1 Method

4.1.1 Preregistration and power analysis

We preregistered Experiment 3 on September 19, 2019 at https://osf.io/gd8hw. In line with Experiment 2, we originally aimed for a sample size of N = 148 participants. However, due to the Covid-19 pandemic and the resulting shutdown of the laboratories at the University of Mannheim in spring 2020, we were not able to reach this goal. We therefore ran a power analysis with G*Power to evaluate the sensitivity of the critical time pressure by statement repetition interaction test based on our final sample size (N = 98). In line with our preregistered a-priori power analysis, we used the following input parameters: α = β = .05 and ρ = .50 (correlation among truth judgments for repeated and new statements). The analysis indicated that our test was sensitive enough to detect an effect equal to or larger than f = 0.18 (equivalent to ηp2 = .031), which is still a small effect according to Cohen’s (1988) conventions.
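This sensitivity analysis can be approximated with the same noncentral-F logic as the Experiment 1 power sketch above; here the sample size is fixed at N = 98 and we search for the smallest detectable effect size f. Again, this is an approximation of G*Power’s procedure for illustration, not the authors’ code.

    from scipy import stats

    def power_within_between(n_total, f, n_groups=2, n_meas=2, rho=0.5, alpha=0.05):
        # Same G*Power-style approximation as in the Experiment 1 sketch
        lam = f ** 2 * n_total * n_meas / (1 - rho)
        df1 = (n_groups - 1) * (n_meas - 1)
        df2 = (n_total - n_groups) * (n_meas - 1)
        return stats.ncf.sf(stats.f.ppf(1 - alpha, df1, df2), df1, df2, lam)

    # Sensitivity: smallest effect size f detectable with power .95 given N = 98
    f = 0.01
    while power_within_between(98, f) < 0.95:
        f += 0.001
    print(round(f, 3))  # should land near the reported f = .18 (eta_p^2 about .031)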

4.1.2 Participants

One hundred participants, all of whom had been recruited at the University of Mannheim, took part in the experiment. One participant had already participated in Experiment 2 and was thus excluded from all analyses. We also excluded another participant due to a large number of missing responses (72%) in the truth judgment phase. The final sample thus comprised 98 participants (75 female, 23 male) with a mean age of M = 22.1 years (SD = 4.9). All participants gave written informed consent prior to the experiment and received course credit or six euros for their participation.

4.1.3 Materials

We selected 88 statements of Experiment 2 which we grouped into two target sets with 36 statements each and a distractor set with 16 statements. Half of the statements in each set were easy ones (more than 80% correct true/false-classifications in a pretest), the other half were difficult ones (less than 60% correct true/false-classifications in a pretest). Half of the easy and difficult statements within each set were true and half were false. We also selected eight very simple statements (e.g., A day has 24 hours) from a previous truth-effect study (Nadarevic et al., 2018, Experiment 2), which served to identify inattentive or unmotivated participants. Again, half of these control statements were true and the other half were false. The 96 statements that served as stimuli for the judgment phase were audio recorded by a female speaker at normal speech rate.

For the exposure phase, we created the following materials. For each statement of the two target sets, we created a paraphrased version by changing the sentence structure and replacing several words with synonyms (Silva et al., 2017). Importantly, however, the meaning of the statements was retained (e.g., 20% of 85-year-olds suffer from Alzheimer’s; paraphrased statement: Among 85-year-olds, 20% are affected by Alzheimer’s). As in Experiment 2, we counterbalanced which of the two target sets of paraphrased statements appeared in the exposure phase. For the 16 statements of the distractor set, we created paraphrased, contradicting versions (Silva et al., 2017). That is, in addition to the paraphrasing, we changed a detail of the statement to create a contrasting meaning to the original statement (e.g., Galileo discovered gravity; paraphrased contradiction: Gravity was discovered by Isaac Newton). All 16 paraphrased contradictions were presented in the exposure phase. All materials of the exposure phase appeared in written format and thus were not audio recorded.

4.1.4 Procedure

Upon arrival, participants were randomly assigned to the time-pressure group (n = 48) or the no-pressure group (n = 50). The procedure was similar to Experiment 2 except for the following changes.

In the exposure phase, 52 statements appeared on the screen in random order. Of these, 36 statements were paraphrases of statements belonging to one of the two target sets. The remaining 16 statements were contradictory paraphrases of the distractor set. Each trial started with the presentation of a fixation cross for 1000 ms. Subsequently, a statement appeared on the screen accompanied by a set of semantic categories displayed below (biology, geography, physics, history, society, and other). Participants’ task was to assign each statement to one of these categories by clicking on the respective category. On average, it took participants M = 5646 (SD = 1938) ms to read a statement and provide their response. Following a five-minute nonverbal filler task, a motivational speech was displayed on the screen to increase participants’ engagement in the upcoming judgment phase. This speech was adapted from Thompson et al. (1994) and highlighted the personal relevance of the task (see also, Darke et al., 1998) as well as the importance of accurate judgments for the study (see also, Cronley et al., 2010). The text read as follows (translated from German):

“One of the issues that we are concerned with in this research is the ability of people to judge the truth of statements. This ability is of considerable importance in everyday social interactions, also against the background of the increasing problem of fake news. It is often necessary to make accurate judgments about the truth of statements, even when (time and) information is limited. Therefore, it is very important for this study that you judge the following statements as accurately as possible. You will receive individual feedback on your performance at the end of the study.”

Importantly, both groups received the same text, except for the words in parentheses. These words did not appear in the no-pressure group and were displayed without parentheses in the time-pressure group. The judgment phase, which directly followed the motivational speech, consisted of several blocks. In the first block, the calibration block, all participants provided binary true/false judgments for 20 statements without any time constraints in order to measure their individual response speed. For each participant in the time-pressure group, the computer then computed the 20th percentile of his or her RTs in the calibration block, which served as that participant’s response deadline in the following blocks of the judgment phase. This means that in the subsequent blocks, participants in the time-pressure group had to respond faster than they had in 80% of the calibration trials. The calibration block was followed by another short version of the motivational speech, a practice block consisting of 12 statements, and two test blocks comprising 32 statements each. The reason for implementing two test blocks was to explore whether an effect of time pressure would attenuate over the course of the experiment.
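Computing the individual deadline amounts to taking the 20th percentile of a participant’s calibration RTs, as in the short sketch below (the calibration values are made up for illustration).

    import numpy as np

    # Made-up calibration RTs (in ms) for one time-pressure participant
    calibration_rts = np.array([812, 945, 1230, 760, 1010, 880, 1340, 990, 705, 1120,
                                860, 930, 1500, 840, 970, 1060, 790, 1180, 900, 1020])

    # Individual deadline = 20th percentile of the calibration RTs, i.e., the
    # participant must now respond faster than in 80% of the calibration trials
    deadline = np.percentile(calibration_rts, 20)
    print(f"individual response deadline: {deadline:.0f} ms")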

The statements presented in each block of the judgment phase (i.e., calibration block, practice block, test block 1, and test block 2) were randomly drawn from the two target sets and the distractor set with the restriction that the statements were balanced across the within-subject factors (i.e., repetition, truth status, and difficulty) in each block and that each statement only appeared once in the judgment phase. Moreover, four of the statements in each test block were very easy control statements (two true, two false) that served to identify inattentive or unmotivated participants. As in Experiment 2, statements were presented auditorily in the judgment phase. The course of a single trial was also the same as in Experiment 2 except that the response options were displayed directly after statement offset in both groups. Moreover, importantly, in the time-pressure group there was no unitary response deadline in the practice block and the test blocks. Instead, the response deadline was set to the 20th percentile of each participant’s RTs in the calibration block.

At the end of the experiment, participants were asked various control questions, including the question whether they had made an effort in providing accurate truth judgments. Finally, they received feedback about the proportion of statements for which they had provided correct truth judgments.

4.1.5 Design

The research design was identical to Experiment 2.

4.2 Results

The following analyses refer to the data of the test blocks only. That is, data from the calibration block and the practice block were not included. Moreover, we excluded all responses to distractor statements and control statements. Participants in the time-pressure group had missed the RT deadline in 9.8% of the trials of the test blocks. These trials were also excluded from all analyses. The mean response deadline in the time-pressure group was M = 783 (SD = 448) ms.

4.2.1 Response times

In line with the previous experiments we compared participants’ mean RTs between groups.3 A Welch’s t-test confirmed that RTs were significantly longer in the no-pressure group compared to the time-pressure group (no ‍pressure: M = 2206, SD = 1097; time ‍pressure: M = 337, SD = 206; t(96) = 13.92, p < .001, d = 2.81, CI = [2.25, 3.37]). We also analyzed participants’ mean RTs by means of a 2 (statement repetition: repeated vs. new) × 2 (truth status: true vs. false) × 2 (difficulty: easy vs. difficult) repeated-measures ANOVA, which we conducted separately for each group. Figure ‍5 displays the descriptive results.


Figure 5: Mean RTs (in ms) in Experiment 3 as a function of group, statement repetition, truth status, and difficulty. Error bars represent standard errors of the means.

In the time-pressure group, RTs were faster for repeated statements compared to new ones (repeated: M = 366, SD = 198; new: M = 390, SD = 217; F(1, 47) = 7.65, p = .008, ηp2 = .14, CI = [.02, .30]), and for easy statements compared to difficult ones (easy: M = 363, SD = 195; difficult: M = 393, SD = 220, F(1, 47) = 15.74, p < .001, ηp2 = .25, CI = [.09, .41]). Likewise, in the no-pressure group there were also significant main effects of repetition and difficulty with faster RTs for repeated statements compared to new ones (repeated: M = 2050, SD = 1129; new: M = 2362, SD = 1177; F(1, 49) = 27.70, p < .001, ηp2 = .36, CI = [.19, .51]) and for easy statements compared to difficult ones (easy: M = 2147, SD = 1164; difficult: M = 2264, SD = 1145; F(1, 49) = 13.66, p < .001, ηp2 = .22, CI = [.07, .38]). Moreover, a main effect of truth status indicated that RTs were significantly faster for true statements compared to false ones in this group (true: M = 2142, SD = 1113; false: M = 2270, SD = 1134; F(1, 49) = 5.06, p = .029, ηp2 = .09, CI = [.01, .24]). There were no further significant main effects or interactions (ps ‍> ‍.05).

4.2.2 Truth judgments

As preregistered, we ran a 2 (statement repetition: repeated vs. new) × 2 (truth status: true vs. false) × 2 (difficulty: easy vs. difficult) × 2 (group: time pressure vs. no pressure) ANOVA with mean PTJs as the dependent variable. Figure ‍6 displays the descriptive results.


Figure 6: Mean proportion of true judgments (PTJs) in Experiment 3 as a function of group, statement repetition, truth status, and difficulty. Error bars represent standard errors of the means.

We replicated the truth effect, i.e., PTJs were higher for repeated statements than for new statements (repeated: M = .59, SD = .15; new: M = .48, SD = .13; F(1, 96) = 53.08, p < .001, ηp2 = .36, CI = [.23, .46]). However, once again, the effect was not moderated by time pressure (F(1, 96) = 1.89, p = .173, ηp2 = .02, CI = [.00, .09]). In contrast, the truth effect was moderated by the statements’ difficulty (F(1, 96) = 4.64, p = .034, ηp2 = .05, CI = [.00, .13]). Simple main effect analyses showed that, as in Experiment 2, the truth effect was larger for difficult statements (F(1, 96) = 42.25, p < .001, ηp2 = .31, CI = [.19, .42]) compared to easy statements (F(1, 96) = 19.04, p < .001, ηp2 = .17, CI = [.07, .28]). Importantly, however, the truth effect was present for both statement types and time pressure did not moderate the effect for either statement type. That is, as in Experiment 2, we did not observe a three-way interaction of repetition, difficulty, and time pressure (F(1, 96) = 0.79, p = .378, ηp2 < .01, CI = [.00, .06]). Overall, PTJs were higher for true statements compared to false statements (true: M = .65, SD = .12; false: M = .43, SD = .14; F(1, 96) = 238.04, p < .001, ηp2 = .71, CI = [.64, .77]). Not surprisingly, this main effect of truth status was again qualified by the statements’ difficulty (F(1, 96) = 267.73, p < .001, ηp2 = .74, CI = [.66, .79]). The factual truth status of a statement only affected PTJs for easy statements (F(1, 96) = 403.07, p < .001, ηp2 = .81, CI = [.75, .85]), but not for difficult statements (F(1, 96) = 1.36, p = .246, ηp2 = .01, CI = [.00, .08]). In addition, there was also a three-way interaction between repetition, truth status, and difficulty (F(1, 96) = 6.21, p = .014, ηp2 = .06, CI = [.01, .15]), indicating that the magnitude of the truth status by difficulty interaction varied between repeated and new statements. There were no further significant main effects or interactions (ps > .05).

4.2.3 Additional analyses

In our preregistration, we had specified additional analyses for the case that we would not find an effect of time pressure on the truth effect. The aim of these analyses was to explore possible reasons for the null effect. First, we examined whether the pattern of results changed over the course of the experiment. We did so by including test block (test block 1 vs. test block 2) as a further factor in the analysis of participants’ RTs and PTJs. Two participants of the time-pressure group had to be excluded from these analyses because of missing data in one cell of the design. However, including test block did not affect the main pattern of results reported above. Moreover, mean PTJs did not differ between test blocks and there were no significant interactions between test block and any of the other variables (ps > .05). Second, we checked whether the findings remained the same when excluding presumably “unmotivated” participants. Even though all participants stated that they had made an effort to provide accurate truth judgments, eight of them (time-pressure group: n = 6, no-pressure group: n = 2) answered fewer than seven of the eight control statements correctly. In line with our preregistration, we repeated all analyses without these participants. However, the main results were essentially the same as the ones reported for all 98 participants. A detailed description of the additional analyses is available as an online supplement at https://osf.io/pue5n/.

Finally, although not preregistered, we also analyzed truth judgments for the contradicting distractor statements. In contrast to the repeated paraphrases, however, we did not find a truth effect for these statements. That is, PTJs did not differ between the contradicting distractor statements and the new statements (contradicting: M = .46, SD = .16; new: M = .48, SD = .13; t(97) = 0.65, p = .517, dz = 0.07, CI = [-0.13, 0.26]). This finding suggests that participants had listened carefully to the content of the statements, as they showed increased PTJs only for statements whose meaning was repeated, but not for statements that resembled previously presented statements yet contradicted them in meaning.
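This comparison corresponds to a paired t-test on per-participant PTJs with Cohen’s dz as the effect size. A minimal sketch with simulated (not observed) data:

    import numpy as np
    from scipy import stats

    # Simulated per-participant mean PTJs (values are illustrative only)
    rng = np.random.default_rng(7)
    ptj_contradicting = rng.normal(0.46, 0.16, 98)
    ptj_new = rng.normal(0.48, 0.13, 98)

    t, p = stats.ttest_rel(ptj_contradicting, ptj_new)  # paired t-test
    diff = ptj_contradicting - ptj_new
    dz = diff.mean() / diff.std(ddof=1)                  # Cohen's d_z for paired data
    print(f"t(97) = {t:.2f}, p = {p:.3f}, dz = {dz:.2f}")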

4.3 Discussion

As in the previous experiments, we replicated the truth effect. However, once again the effect did not increase under time pressure, neither for difficult nor for easy statements. Importantly, this was the case even though we had taken great care to provide ideal conditions to find an effect of time pressure on the truth effect if present.

5 Bayesian data analysis

Because none of the three experiments displayed a repetition by time pressure interaction, our findings speak against the hypothesis that the truth effect increases under time-pressure conditions. Yet, the null-hypothesis significance testing approach that we have pursued so far does not allow us to interpret the findings as evidence in support of the null hypothesis. For this reason, we complemented our analyses with a Bayesian analysis. More precisely, we ran Bayesian t-tests with the statistics software JASP in order to directly compare the degree of support for the null hypothesis (H0: time pressure does not affect the truth effect) and the alternative hypothesis (H1: time pressure increases the truth effect). For the Bayesian t-test, we used the default prior distribution implemented in JASP, that is, a Cauchy distribution centered on zero with a scale parameter of r = √2/2 (Wagenmakers et al., 2018). Because H1 is a directed hypothesis, the prior distribution was truncated to put the prior mass exclusively on positive effect sizes. Group (time pressure vs. no pressure) served as the independent variable and the size of the truth effect (i.e., the difference in PTJs between repeated and new statements) as the dependent variable.
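A directed Bayesian t-test of this kind can also be sketched outside JASP by integrating the t likelihood over a half-Cauchy prior on the standardized effect size, in the spirit of the default test implemented in JASP (Wagenmakers et al., 2018). The function below is our own illustration, not the JASP code, and the example input is hypothetical.

    import numpy as np
    from scipy import stats, integrate

    def bf01_directed(t_obs, n1, n2, r=np.sqrt(2) / 2):
        """BF01 for an independent-samples t-test with a half-Cauchy(0, r) prior
        on the standardized effect size delta under H1: delta > 0."""
        nu = n1 + n2 - 2                       # degrees of freedom
        n_eff = n1 * n2 / (n1 + n2)            # effective sample size

        def integrand(delta):
            # likelihood of t_obs given delta, weighted by the truncated Cauchy prior
            return (stats.nct.pdf(t_obs, nu, delta * np.sqrt(n_eff))
                    * 2 * stats.cauchy.pdf(delta, loc=0, scale=r))

        m1, _ = integrate.quad(integrand, 0, np.inf)  # marginal likelihood under H1
        m0 = stats.t.pdf(t_obs, nu)                   # likelihood under H0 (delta = 0)
        return m0 / m1

    # Hypothetical t value for the group difference in truth-effect scores
    print(bf01_directed(t_obs=0.20, n1=36, n2=34))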

Across all three experiments, the Bayes factor clearly favored H0 over H1. The data of Experiment 1 were about 3.5 times more likely under H0 than under H1 (BF01 = 3.51). The observed data in the other experiments were even 9 times (Experiment 2: BF01 = 9.00) and 11 times (Experiment 3: BF01 = 11.07) more likely under H0. Taken together, our three experiments provided strong evidence against the assumption that time pressure boosts the truth effect (BF01 = 16.23 for the aggregated data). Importantly, this support for H0 is robust across different settings of the scale parameter r of the Cauchy prior distribution (see Figure 7).


Figure 7: Bayes factor robustness plot for the aggregated data of all three experiments. The plot, generated with JASP, shows BF01 for the user-specified prior (r = √2/2), a wide prior (r = 1), and an ultrawide prior (r = √2).

6 General discussion

6.1 Summary and interpretation of the findings

Based on the fluency account and the heuristic/systematic processing distinction of the HSM, we had set up the prediction that time pressure would boost the truth effect. However, our three experiments did not support this hypothesis. Although we found a truth effect in all three experiments, time pressure did not moderate the effect. In view of studies that have found that time pressure increases judgment biases such as the belief bias (e.g., Evans & Curtis-Holmes, 2005) and the use of simple heuristics (e.g., Hilbig et al., 2012), our results seem somewhat surprising. However, our findings are in line with the results of Nadarevic and Rinnewitz (2011), who did not find significant differences in the truth effect depending on whether participants were instructed to provide intuitive or deliberative truth judgments. Moreover, our findings align well with the observation that participants with a low need for cognition (NFC), that is, a weak dispositional motivation to engage in effortful processing, do not show larger truth effects than participants with a high NFC (Arkes et al., 1991; Boehm, 1994; Newman et al., 2020). Likewise, De Keersmaecker et al. (2020) did not find any differences in the truth effect between participants with low and high cognitive ability, cognitive reflection, need for cognitive closure, or preference for intuition and deliberation.

Although the observed findings are inconsistent with our initial predictions that we inferred from the fluency account and the HSM, the null effect of time pressure does not come as a surprise from the perspective of the referential theory (Unkelbach & Rom, 2017). According to this theory, truth judgments primarily depend on the coherence of localized networks in people’s semantic memory that a statement activates. Because such localized networks should depend on repeated statement exposure and prior knowledge, the same truth judgments should result irrespective of whether context conditions favor heuristic processing or systematic processing, at least if no further external cues are available. Thus, although not explicitly stated, the referential theory conceptualizes knowledge retrieval as an associative, effortless process, a conceptualization supported by studies reporting quick and spontaneous knowledge retrieval processes (Richter et al., 2009; Wiswede et al., 2013). Similarly, our data indicate that knowledge retrieval does not require much time and effort. Otherwise, participants in the time-pressure groups of Experiments 2 and 3 should have been worse at discerning true from false statements than participants in the no-pressure groups. However, this was not the case. This result coincides well with other research in the judgment and decision-making field showing that various processes that have been claimed to require cognitive capacity (e.g., taking a utilitarian perspective, considering base rates) can in fact operate in an automatic fashion (e.g., Białek & De Neys, 2017; Pennycook et al., 2014).

What is more, judgment times in Experiment 3 even decreased for easy statements, that is, when background knowledge was likely. Once again, this finding speaks against the assumption that knowledge-based truth judgments are slow and effortful. In contrast, associative network models in the field of judgment and decision making, such as the referential theory or the Parallel Constraint Satisfaction (PCS) model of Glöckner and Betsch (2008), can account for this finding. These models predict that RTs depend on the degree of coherence of information in a network. Hence, more information or knowledge should lead to faster responses as long as it increases coherence, a prediction that has been confirmed in the context of standard decision-making tasks (Glöckner & Betsch, 2012; Heck & Erdfelder, 2017).

One point that requires clarification is that our results are not incompatible with dual-process models per se, but only with the predictions that we derived from the HSM.4 In fact, the above-mentioned PCS model of Glöckner and Betsch (2008) belongs to the class of dual-process models. More precisely, it is a default-interventionist dual-process model. Intuitive processing, which is assumed to be the default, is conceptualized in the form of a connectionist network that drives information integration and output formation. Information search, information production, and voluntary changes of the mental representation of information, on the other hand, are assumed to be based on deliberative processes that come into play only if required. Putting the referential theory into a dual-process framework, similar to that of the PCS model, could lead to new predictions with respect to contextual moderators of the truth effect. Further ideas for truth-effect studies will be outlined below.

6.2 Limitations and perspectives for future research

Although our experiments provide strong evidence for the robustness of the truth effect against time pressure, future studies should test the generalizability of this finding. One limitation of our studies, for example, is that we implemented time pressure only in a trial-wise fashion with a very short response deadline per judgment (Exp. 1: 650 ms, Exp. 2: 1000 ms, Exp. 3: adaptive deadline with M = 783 ms). Therefore, it remains an open question whether studies that use other manipulations to induce time pressure will lead to the same results. Furthermore, a more systematic variation of the response deadline would certainly be insightful in this regard. Note, however, that the range of feasible response deadlines is quite limited: the deadline must not be so short that participants cannot form a judgment at all, but also not so long that there is no time pressure. Future studies may also attempt to more clearly delineate statement processing and judgment. Experiment 1 in particular was critical in this regard, as participants might have increased their reading speed under time pressure. Although we addressed this caveat in Experiments 2 and 3 by presenting statements auditorily, our experiments do not provide insights into the temporal contiguity of statement processing and judgment. In principle, it might thus be possible that statement processing and judgment do not operate sequentially but in parallel. Importantly, however, even if this were the case, it would argue neither against the assumptions of the referential theory nor against the robustness of the truth effect to time pressure.

Future studies could also investigate the generalizability of our findings to other statement types (e.g., opinion statements instead of knowledge statements) or different experimental settings (e.g., multiple statement repetitions in the exposure phase). Furthermore, it remains to be tested whether time pressure moderates the truth effect in more complex contexts that include additional information (e.g., source information, Nadarevic et al., 2020). According to Unkelbach and Greifeneder (2013), different cues jointly inform truth judgments, for example, subjective feelings, source information, or other people’s advice. In fact, studies show that people integrate these cues additively when forming truth judgments (Nadarevic et al., 2020; Unkelbach & Greifeneder, 2018). What is unclear, however, is how time pressure affects cue utilization in truth judgments. In light of studies that have investigated time-pressure effects on the use of simple versus complex decision-making strategies (e.g., Rieskamp & Hoffrage, 2008), it seems plausible to assume that time pressure causes people to focus on fewer cues or even a single judgment cue for truth. However, according to Betsch and Glöckner (2010), this should be the case only if the cues are associated with search costs. In contrast, if external cues (e.g., source information, advice) are openly accessible and salient, time pressure should not affect cue utilization according to their PCS model. Thus, it remains to be seen if, and under which conditions, time pressure moderates the truth effect in multiple-cue contexts.

Another prospect for future studies is to investigate the role of time pressure in the exposure phase of a truth-effect experiment. Several studies show that the way in which the statements are processed in the exposure phase has a strong impact on the truth effect. For example, Unkelbach and Rom (2017) found that deep semantic processing (“Describe how the statement refers to you!”) produced a larger truth effect than shallow processing (“Indicate the side of the screen on which the statement is located!”). The authors argued that this is because deep processing activates more references in people’s semantic networks. Other researchers, in contrast, reported an elimination of the truth effect when participants provided truth judgments in the exposure phase, at least in combination with short retention intervals (e.g., Brashier et al., 2020; Nadarevic & Erdfelder, 2014). Although the cognitive mechanisms for this elimination are not yet fully understood, the findings suggest that an initial accuracy focus in the exposure phase reduces participants’ susceptibility to the truth effect. Whether this still holds when initial truth judgments are provided under time pressure seems to be an interesting topic for future studies. Such studies could also implement a two-response paradigm in which participants provide their initial judgment under time pressure (and/or cognitive load) and then get the opportunity to change their judgment (e.g., Bago et al., 2020; Bago & De Neys, 2019).

Finally, it seems worthwhile to directly compare the effects of time pressure and cognitive load in future truth-effect studies. Because the experiment of Garcia-Marques et al. (2016) and our experiments differed with regard to several procedural variables (e.g., instructions, stimulus material, response format), it is difficult to tell why Garcia-Marques et al. found a small effect of cognitive load on the truth effect while we did not find an effect of time pressure. It is even conceivable that cognitive load and time pressure induce different cognitive processes, although they are often treated as similar manipulations. Based on the referential theory, for example, it seems plausible that cognitive load may impair the activation of semantic references and thus associative processing, because working memory is already occupied with the concurrent task. In contrast, time pressure should not affect spreading activation in people’s semantic networks. Furthermore, we observed that time pressure not only sped up truth judgments but also reduced RT differences between repeated and new statements in our experiments. This suggests that differences in processing fluency between repeated and new statements diminish under time pressure. Note that although we have interpreted our findings in terms of the referential theory of Unkelbach and Rom (2017), by no means do we imply that fluency differences cannot contribute to the truth effect (see Unkelbach et al., 2019, for an integrative model of the truth effect). In fact, a lower distinctiveness of the fluency signal under time pressure might explain why the truth effect even tended to be descriptively smaller in the time-pressure groups than in the no-pressure groups (see the Appendix for a single-paper meta-analysis).

7 Conclusion

The reported experiments provide convincing evidence that the truth effect is robust against time pressure. Although we had initially predicted that time pressure would boost the truth effect, we share the position of De Keersmaecker et al. (2020, p. 213) that “understanding which plausible variables do not affect the illusory truth effect, might be as informative as knowing which variables do influence the effect.” Overall, our findings speak against a distinction between fast, fluency-based versus slow, knowledge-based truth judgments. In contrast, they support the idea of the referential theory that truth judgments rely on a holistic, associative process that does not depend on time constraints, at least when judging isolated statements. However, because people rarely deal with isolated statements in everyday life, we are cautious about drawing real-world conclusions based on our findings. To this end, further research is needed, for which our work provides a good foundation and valuable ideas.

References

Arkes, H. R., Boehm, L. E. & Xu, G. (1991). Determinants of judged validity. Journal of Experimental Social Psychology, 27(6), 576–605. https://doi.org/10.1016/0022-1031(91)90026-3.

Bago, B. & De Neys, W. (2019). The intuitive greater good: Testing the corrective dual process model of moral cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. https://doi.org/10.1037/xge0000533.

Bago, B., Rand, D. G. & Pennycook, G. (2020). Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. Journal of Experimental Psychology: General, 149(8), 1608–1613. https://doi.org/10.1037/xge0000729.

Ben-Shachar, M. S., Lüdecke, D. & Makowski, D. (2020). effectsize: Estimation of effect size indices and standardized parameters. Journal of Open Source Software, 5(56), 2815. https://doi.org/10.21105/joss.02815.

Betsch, T. & Glöckner, A. (2010). Intuition in judgment and decision making: Extensive thinking without effort. Psychological Inquiry, 21(4), 279–294. https://www.jstor.org/stable/25767202.

Białek, M. & De Neys, W. (2017). Dual processes and moral conflict: Evidence for deontological reasoners’ intuitive utilitarian sensitivity. Judgment and Decision Making, 12(2), 148–167. http://journal.sjdm.org/17/17224/jdm17224.pdf.

Boehm, L. E. (1994). The validity effect: A search for mediating variables. Personality and Social Psychology Bulletin, 20(3), 285–293. https://doi.org/10.1177/0146167294203006.

Brashier, N. M., Eliseev, E. D. & Marsh, E. J. (2020). An initial accuracy focus prevents illusory truth. Cognition, 194, 104054. https://doi.org/10.1016/j.cognition.2019.104054.

Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752–766. https://doi.org/10.1037//0022-3514.39.5.752.

Chaiken, S., Liberman, A. & Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252). Guilford Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). L. Erlbaum Associates.

Cronley, M. L., Mantel, S. P. & Kardes, F. R. (2010). Effects of accuracy motivation and need to evaluate on mode of attitude formation and attitude–behavior consistency. Journal of Consumer Psychology, 20(3), 274–281. https://doi.org/10.1016/j.jcps.2010.06.003.

Darke, P. R., Chaiken, S., Bohner, G., Einwiller, S., Erb, H.‑P. & Hazlewood, J. D. (1998). Accuracy motivation, consensus information, and the law of large numbers: Effects on attitude judgment in the absence of argumentation. Personality and Social Psychology Bulletin, 24(11), 1205–1215. https://doi.org/10.1177/01461672982411007.

De Keersmaecker, J., Dunning, D., Pennycook, G., Rand, D. G., Sanchez, C., Unkelbach, C. & Roets, A. (2020). Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Personality and Social Psychology Bulletin, 46(2), 204–215. https://doi.org/10.1177/0146167219853844.

De Neys, W. (2021). On dual- and single-process models of thinking. Perspectives on Psychological Science. https://doi.org/10.1177/1745691620964172.

Dechêne, A., Stahl, C., Hansen, J. & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14(2), 238–257. https://doi.org/10.1177/1088868309352251.

Ebert, M. (2009). NEON unnützes Wissen: 1374 skurrile Fakten, die man nie mehr vergisst. Heyne.

Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. American Psychologist, 49(8), 709–724. https://doi.org/10.1037//0003-066X.49.8.709.

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278. https://doi.org/10.1146/annurev.psych.59.103006.093629.

Evans, J. S. B. T. & Curtis-Holmes, J. (2005). Rapid responding increases belief bias: Evidence for the dual-process theory of reasoning. Thinking & Reasoning, 11(4), 382–389. https://doi.org/10.1080/13546780542000005.

Faul, F., Erdfelder, E., Lang, A. & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146.

Fazio, L. K., Brashier, N. M., Payne, B. K. & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993–1002. https://doi.org/10.1037/xge0000098.

Garcia-Marques, T., Silva, R. R. & Mello, J. (2016). Judging the truth-value of a statement in and out of a deep processing context. Social Cognition, 34(1), 40–54. https://doi.org/10.1521/soco.2016.34.1.40.

Glöckner, A. & Betsch, T. (2008). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making, 3(3), 215–228. https://doi.org/10.2139/ssrn.1090866.

Glöckner, A. & Betsch, T. (2012). Decisions beyond boundaries: When more information is processed faster than less. Acta Psychologica, 139(3), 532–542. https://doi.org/10.1016/j.actpsy.2012.01.009.

Goh, J. X., Hall, J. A. & Rosenthal, R. (2016). Mini meta-analysis of your own studies: Some arguments on why and a primer on how. Social and Personality Psychology Compass, 10(10), 535–549. https://doi.org/10.1111/spc3.12267.

Hansen, J. & Wänke, M. (2010). Truth from language and truth from fit: The impact of linguistic concreteness and level of construal on subjective truth. Personality and Social Psychology Bulletin, 36(11), 1576–1588. https://doi.org/10.1177/0146167210386238.

Hasher, L., Goldstein, D. & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107–112. https://doi.org/10.1016/S0022-5371(77)80012-1.

Heck, D. W. & Erdfelder, E. (2017). Linking process and measurement models of recognition-based decisions. Psychological Review, 124(4), 442–471. https://doi.org/10.1037/rev0000063.

Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128. https://doi.org/10.2307/1164588.

Hilbig, B. E. (2012). How framing statistical statements affects subjective veracity: Validation and application of a multinomial model for judgments of truth. Cognition, 125(1), 37–48. https://doi.org/10.1016/j.cognition.2012.06.009.

Hilbig, B. E., Erdfelder, E. & Pohl, R. F. (2012). A matter of time: Antecedents of one-reason decision making based on recognition. Acta Psychologica, 141(1), 9–16. https://doi.org/10.1016/j.actpsy.2012.05.006.

JASP Team. (2020). JASP (Version 0.13.1) [Computer software]. https://jasp-stats.org/

Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

McGlone, M. S. & Tofighbakhsh, J. (2000). Birds of a feather flock conjointly (?): Rhyme as reason in aphorisms. Psychological Science, 11(5), 424–428. https://doi.org/10.1111/1467-9280.00282.

McShane, B. B. & Böckenholt, U. (2017). Single paper meta-analysis: Benefits for study summary, theory-testing, and replicability. Journal of Consumer Research, 43(6), 1048–1063. https://doi.org/10.1093/jcr/ucw085.

Nadarevic, L. & Erdfelder, E. (2014). Initial judgment task and delay of the final validity-rating task moderate the truth effect. Consciousness and Cognition, 23, 74–84. https://doi.org/10.1016/j.concog.2013.12.002.

Nadarevic, L., Plier, S., Thielmann, I. & Darancó, S. (2018). Foreign language reduces the longevity of the repetition-based truth effect. Acta Psychologica, 191, 149–159. https://doi.org/10.1016/j.actpsy.2018.08.019.

Nadarevic, L., Reber, R., Helmecke, A. J. & Köse, D. (2020). Perceived truth of statements and simulated social media postings: An experimental investigation of source credibility, repeated exposure, and presentation format. Cognitive Research: Principles and Implications, 5(1), 56. https://doi.org/10.1186/s41235-020-00251-4.

Nadarevic, L. & Rinnewitz, L. (2011). Judgment mode instructions do not moderate the truth effect. Unpublished data. https://doi.org/10.17605/OSF.IO/3UAJ7.

Newman, E. J., Garry, M., Bernstein, D. M., Kantner, J. & Lindsay, D. S. (2012). Nonprobative photographs (or words) inflate truthiness. Psychonomic Bulletin & Review, 19(5), 969–974. https://doi.org/10.3758/s13423-012-0292-0.

Newman, E. J., Jalbert, M., Schwarz, N. & Ly, D. P. (2020). Truthiness, the illusory truth effect, and the role of need for cognition. Consciousness and Cognition, 78, 102866. https://doi.org/10.1016/j.concog.2019.102866.

Parks, C. M. & Toth, J. P. (2006). Fluency, familiarity, aging, and the illusion of truth. Aging, Neuropsychology, and Cognition, 13(2), 225–253. https://doi.org/10.1080/138255890968691.

Pennycook, G., Trippas, D., Handley, S. J. & Thompson, V. A. (2014). Base rates: Both neglected and intuitive. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 544–554. https://doi.org/10.1037/a0034887.

Rand, D. G., Greene, J. D. & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427–430. https://doi.org/10.1038/nature11467.

Reber, R. & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth. Consciousness and Cognition, 8(3), 338–342. https://doi.org/10.1006/ccog.1999.0386.

Reber, R. & Unkelbach, C. (2010). The epistemic status of processing fluency as source for judgments of truth. Review of Philosophy and Psychology, 1(4), 563–581. https://doi.org/10.1007/s13164-010-0039-7.

Richter, T., Schroeder, S. & Wöhrmann, B. (2009). You don’t have to believe everything you read: Background knowledge permits fast and efficient validation of information. Journal of Personality and Social Psychology, 96(3), 538–558. https://doi.org/10.1037/a0014038.

Rieskamp, J. & Hoffrage, U. (2008). Inferences under time pressure: How opportunity costs affect strategy selection. Acta Psychologica, 127(2), 258–276. https://doi.org/10.1016/j.actpsy.2007.05.004.

Schroyens, W., Schaeken, W. & Handley, S. (2003). In search of counter-examples: Deductive rationality in human reasoning. The Quarterly Journal of Experimental Psychology, 56(7), 1129–1145. https://doi.org/10.1080/02724980245000043.

Schwarz, N. (2012). Feelings-as-information theory. In P. A. M. van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (pp. 289–308). Sage. https://doi.org/10.4135/9781446249215.n15.

Silva, R. R., Garcia-Marques, T. & Mello, J. (2016). The differential effects of fluency due to repetition and fluency due to color contrast on judgments of truth. Psychological Research, 80(5), 821–837. https://doi.org/10.1007/s00426-015-0692-7.

Silva, R. R., Garcia-Marques, T. & Reber, R. (2017). The informative value of type of repetition: Perceptual and conceptual fluency influences on judgments of truth. Consciousness and Cognition, 51, 53–67. https://doi.org/10.1016/j.concog.2017.02.016.

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22. https://doi.org/10.1037//0033-2909.119.1.3.

Suter, R. S. & Hertwig, R. (2011). Time and moral judgment. Cognition, 119(3), 454–458. https://doi.org/10.1016/j.cognition.2011.01.018.

Thompson, E. P., Roman, R. J., Moskowitz, G. B., Chaiken, S. & Bargh, J. A. (1994). Accuracy motivation attenuates covert priming: The systematic reprocessing of social information. Journal of Personality and Social Psychology, 66(3), 474–489. https://doi.org/10.1037/0022-3514.66.3.474.

Unkelbach, C. (2007). Reversing the truth effect: Learning the interpretation of processing fluency in judgments of truth. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(1), 219–230. https://doi.org/10.1037/0278-7393.33.1.219.

Unkelbach, C. & Greifeneder, R. (2013). A general model of fluency effects in judgment and decision making. In C. Unkelbach & R. Greifeneder (Eds.), The experience of thinking: how the fluency of mental processes influences cognition and behaviour. (pp. 11–32). Psychology Press.

Unkelbach, C. & Greifeneder, R. (2018). Experiential fluency and declarative advice jointly inform judgments of truth. Journal of Experimental Social Psychology, 79, 78–86. https://doi.org/10.1016/j.jesp.2018.06.010.

Unkelbach, C., Koch, A., Silva, R. R. & Garcia-Marques, T. (2019). Truth by repetition: Explanations and implications. Current Directions in Psychological Science, 28(3), 247–253. https://doi.org/10.1177/0963721419827854.

Unkelbach, C. & Rom, S. C. (2017). A referential theory of the repetition-induced truth effect. Cognition, 160, 110–126. https://doi.org/10.1016/j.cognition.2016.12.016.

Unkelbach, C. & Stahl, C. (2009). A multinomial modeling approach to dissociate different components of the truth effect. Consciousness and Cognition, 18(1), 22–38. https://doi.org/10.1016/j.concog.2008.09.006.

Vogel, T., Silva, R. R., Thomas, A. & Wänke, M. (2020). Truth is in the mind, but beauty is in the eye: Fluency effects are moderated by a match between fluency source and judgment dimension. Journal of Experimental Psychology: General, 149(8), 1587–1596. https://doi.org/10.1037/xge0000731.

Vosgerau, J., Simonsohn, U., Nelson, L. D. & Simmons, J. P. (2019). 99% impossible: A valid, or falsifiable, internal meta-analysis. Journal of Experimental Psychology: General, 148(9), 1628–1639. https://doi.org/10.1037/xge0000663.

Wagenmakers, E.‑J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Selker, R., Gronau, Q. F., Dropmann, D., Boutin, B., Meerhoff, F., Knight, P., Raj, A., van Kesteren, E.‑J., van Doorn, J., Šmíra, M., Epskamp, S., Etz, A., Matzke, D., . . . Morey, R. D. (2018). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review, 25(1), 58–76. https://doi.org/10.3758/s13423-017-1323-7.

Wiswede, D., Koranyi, N., Müller, F., Langner, O. & Rothermund, K. (2013). Validating the truth of propositions: Behavioral and ERP indicators of truth evaluation processes. Social Cognitive and Affective Neuroscience, 8(6), 647–653. https://doi.org/10.1093/scan/nss042.



Appendix

To obtain more precise effect-size estimates of the truth effect under the different time-pressure conditions, we conducted a single-paper meta-analysis (Goh et al., 2016; McShane & Böckenholt, 2017; for caveats of this method, see Vosgerau et al., 2019). This analysis included the data of all three experiments. Importantly, these were the only studies that we have conducted so far to test the effect of time pressure on the truth effect in the outlined paradigm (i.e., in the test phase of the exposure paradigm). Following the procedure reported in Newman et al. (2020), we used a random-effects model within subgroups (i.e., time-pressure conditions) and a fixed-effect model to estimate the overall effect across subgroups. Moreover, homogeneous between-study variance (τ²) was assumed, so the estimate was pooled across subgroups. To account for small-sample bias, we calculated Hedges g as the effect-size measure (Hedges, 1981). The analysis was carried out in the software Comprehensive Meta-Analysis (version 3.0), based on separate standardized effect-size estimates and confidence intervals calculated with the statistics software JASP. The results of our meta-analysis are summarized graphically in Figure A1.
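For illustration, the small-sample correction underlying Hedges g (Hedges, 1981) can be written as a short R function; the values in the example below are hypothetical and serve only to show the computation, not to reproduce any estimate reported here.

  ## Hedges g: Cohen's d multiplied by the approximate bias-correction factor
  ## J = 1 - 3 / (4 * df - 1), where df are the degrees of freedom of d.
  hedges_g <- function(d, df) {
    J <- 1 - 3 / (4 * df - 1)
    d * J
  }

  ## Hypothetical example: within-subjects d = 0.50 based on n = 40 (df = n - 1).
  hedges_g(d = 0.50, df = 39)  # approx. 0.49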

Across all studies, we observed the typical medium-sized truth effect (Hedges g = 0.49, 95% CI = [0.38, 0.61]). The meta-analytic effect-size estimate was slightly smaller in the time-pressure groups (g = 0.45, CI = [0.29, 0.61]) and slightly larger in the no-pressure groups (g = 0.54, CI = [0.38, 0.70]). This difference was not significant, however (Q(1) = 0.30, p = .583), supporting our conclusion derived from the separate experiments that the effect of repetition on the perceived truth of statements is not affected by time pressure.
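The subgroup comparison reported above is a Q test with one degree of freedom. The following generic R sketch illustrates the logic of such a test from two subgroup estimates and their 95% CIs; the input values are placeholders, and because the reported analysis used random-effects weights with a pooled τ², this simplified fixed-effect computation would not reproduce the exact Q value above.

  ## Generic fixed-effect subgroup comparison (Cochran's Q, df = k - 1).
  subgroup_q <- function(g, lower, upper) {
    se <- (upper - lower) / (2 * qnorm(.975))  # back out SEs from 95% CIs
    w  <- 1 / se^2                             # inverse-variance weights
    g_pooled <- sum(w * g) / sum(w)
    Q <- sum(w * (g - g_pooled)^2)
    c(Q = Q, p = pchisq(Q, df = length(g) - 1, lower.tail = FALSE))
  }

  ## Hypothetical placeholder estimates for two subgroups:
  subgroup_q(g = c(0.40, 0.55), lower = c(0.20, 0.35), upper = c(0.60, 0.75))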


Figure A1: Results of the single-paper meta-analysis. The dashed line denotes the overall estimated truth effect and the gray-shaded area denotes the 95% CI.


*
Corresponding author. Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany. Email: nadarevic@psychologie.uni-mannheim.de. ORCID: 0000-0003-1852-5019
#
University of Mannheim, Mannheim, Germany. ORCID: 0000-0001-6531-2265
$
University of Mannheim, Mannheim, Germany, and Leibniz-Institut für Wissensmedien, Tübingen, Germany. ORCID: 0000-0001-9130-7515
Data and materials are available at https://osf.io/687bn

This work was supported by the Ministry of Science, Research and the Arts Baden-Württemberg, by an autonomy grant of the University of Mannheim, and by a grant from the Deutsche Forschungsgemeinschaft (DFG, GRK 2277) to the research training group Statistical Modeling in Psychology (SMiP). We thank Ines Hug and Johannes Prager for their help with stimulus preparation, stimulus pretesting, and data collection of Experiment 2. We also thank Ines Hug and Frank Calio for their valuable comments on an earlier draft of this manuscript.

Copyright: © 2021. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

1
CIs for Cohen’s d were calculated by JASP (JASP Team, 2020) and refer to a confidence level of 95%. CIs for ηp² were computed with the R package effectsize (Ben-Shachar et al., 2020) and are 90% CIs.
2
Because pretested truth ratings had already been higher for the false statements than for the true statements, the observed main effect of truth status on PTJs does not come as a surprise. However, we were surprised that the effect turned out to be much larger than in the pretest. Possibly, the different response formats (i.e., binary true/false-responses in Experiment 1 vs. truth ratings in the pretest) can account for this discrepancy.
3
Following the advice of the editor, we log-transformed RTs and computed mean RTs instead of median RTs on the participant level. The reported RT analyses thus deviate from our preregistered analysis plan.
4
In addition, our findings do not bear on the ongoing debate between dual-process and single-process models. According to De Neys (2021), it is even questionable whether this debate can ever be resolved.
