The variables in the txt file are as follows (a sketch showing how to load the file and apply these codings appears at the end of this file):

*condition* = the group each participant was assigned to for a given trial (0 = control, 1 = similar manipulation, 2 = dissimilar manipulation)
*condtc* = concurrent detection (0 = no, 1 = yes)
*retdtc* = retrospective detection (0 = no, 1 = yes)
*rtime* = reaction time (in minutes)
*nofundspicked* = number of funds chosen for a portfolio
*companycode* = numerical code indicating which company was selected
*gender* = 0 (female), 1 (male)
*education* = 1 (lowest) to 10 (highest)
*pension* = 0 (no), 1 (yes)
*finsoph* = financial sophistication, 1 (lowest) to 5 (highest)
*risksurvey* = survey measure of risk preference, 1 (risk averse) to 5 (risk seeker)
*riskselfreport* = self-reported risk preference, 1 (risk averse) to 5 (risk seeker)
*CD* = cheater-detection prime (0 = no, 1 = yes)

The variables used in the final model were: gender, cd, rtime, finsoph, memtask, risksurvey, education, and nofundspicked. The rest were either redundant or used to check that the data were not biased in some way (for example, company fixed effects).

##########################

The "CB_ratings.txt" file is the raw data that forms the basis of the qualitative analysis. It relates to the findings reported in the following paragraph of the results section:

"Analysis of these responses identified 6 categories into which the responses fell: (1) Major confabulations; (2) minor confabulations; (3) descriptions of original choice; (4) bunched or undifferentiated responses; (5) spurious explanations; and (6) no explanation. To confirm rigour and reliability, the response data was then submitted to three independent raters, who categorized the responses as appropriate. Inter-rater reliability was assessed and was found to be consistently high, with 71% agreement, and a Fleiss kappa value of 0.52. Furthermore, when the categories are grouped into confabulatory responses (1 and 2), descriptions of original choice (3), and no clear explanation (4, 5 and 6), we observe 76.1% agreement and a Fleiss' kappa of 0.58. Both of the reported kappa values were significant at the 1% level."

The ratings correspond to a selection of response statements from our experiment; the statements themselves can be found in the attached spreadsheet.
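The agreement figures in the quoted paragraph can be checked against "CB_ratings.txt". Below is a minimal sketch, assuming the file is tab-delimited with one row per response and one column per rater holding the category labels 1-6 (the column names rater1-rater3 are hypothetical); it computes mean pairwise percent agreement and Fleiss' kappa via statsmodels, for both the six-category coding and the three-group coding described in the quote. Note that the paper's exact definition of "agreement" may differ from the pairwise version used here.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def agreement_and_kappa(raters):
    """raters: (n_responses, n_raters) array of integer category labels."""
    # Mean pairwise percent agreement across all rater pairs.
    pairs = list(itertools.combinations(range(raters.shape[1]), 2))
    agree = np.mean([(raters[:, i] == raters[:, j]).mean() for i, j in pairs])
    # aggregate_raters converts subject-x-rater labels into the
    # subject-x-category count table that fleiss_kappa expects.
    counts, _ = aggregate_raters(raters)
    return agree, fleiss_kappa(counts, method="fleiss")

# Hypothetical layout: tab-delimited, columns rater1..rater3 with labels 1-6.
ratings = pd.read_csv("CB_ratings.txt", sep="\t")
raters = ratings[["rater1", "rater2", "rater3"]].to_numpy()

agree, kappa = agreement_and_kappa(raters)
print(f"6 categories: {agree:.1%} agreement, kappa = {kappa:.2f}")

# Collapse into the three groups used in the paper: confabulations (1-2),
# descriptions of original choice (3), no clear explanation (4-6).
grouped = np.where(raters <= 2, 0, np.where(raters == 3, 1, 2))
g_agree, g_kappa = agreement_and_kappa(grouped)
print(f"3 groups: {g_agree:.1%} agreement, kappa = {g_kappa:.2f}")
```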
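For reference, here is a minimal sketch of reading the main data file and attaching the codings from the codebook above. The file name, delimiter, and use of pandas are assumptions; only the column names and value codings come from this README. (The codebook writes the prime variable as CD while the model list writes cd; the sketch assumes the column is named CD.)

```python
import pandas as pd

# Hypothetical file name and delimiter; the codings below are taken
# from the codebook in this README.
df = pd.read_csv("CB_data.txt", sep="\t")

df["condition"] = df["condition"].map(
    {0: "control", 1: "similar manipulation", 2: "dissimilar manipulation"})
df["gender"] = df["gender"].map({0: "female", 1: "male"})
for col in ["condtc", "retdtc", "pension", "CD"]:
    df[col] = df[col].map({0: "no", 1: "yes"})

# Columns used in the final model, per the note above (model type not
# specified in this README).
model_vars = ["gender", "CD", "rtime", "finsoph", "memtask",
              "risksurvey", "education", "nofundspicked"]
print(df[model_vars].describe(include="all"))
```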