
May 24, 2017

Counterintuitive probability problem of the day

Filed in Encyclopedia, Gossip, Ideas, R
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)


With p as the probability of dying on one shot, this figure shows how to get the probability of living through the game.

Peter Ayton is giving a talk today at the London Judgement and Decision Making Seminar. The abstract:

Imagine being obliged to play Russian roulette – twice (if you are lucky enough to survive the first game). Each time you must spin the chambers of a six-chambered revolver before pulling the trigger. However you do have one choice: You can choose to either (a) use a revolver which contains only 2 bullets or (b) blindly pick one of two other revolvers: one revolver contains 3 bullets; the other just 1 bullet. Whichever particular gun you pick you must use every time you play. Surprisingly, option (b) offers a better chance of survival. We discuss a general theorem implying, with some specified caveats, that a system’s probability of surviving repeated ‘demands’ improves as uncertainty concerning the probability of surviving one demand increases. Nonetheless our behavioural experiments confirm the counterintuitive nature of the Russian roulette and other kindred problems: most subjects prefer option (a). We discuss how uncertain probabilities reduce risks for repeated exposure, why people intuitively eschew them and some policy implications for safety regulation.

We can see how many people would think the choice between (a) and (b) doesn’t matter. Naively one might think that 2 bullets leads to the same probability as choosing blindly between 1 and 3 bullets. But this is not true. See the graphic above and plug in different ps.
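In general, with per-shot death probability p, the chance of surviving the two-round game is (1 − p)^2. A quick R check (our sketch; the function name is ours) shows the comparison and why averaging over an uncertain p helps:

```r
# Two-round survival probability as a function of per-shot death probability p
live = function(p) (1 - p)^2

# Option (a): known 2 bullets, p = 2/6
live(2/6)                       # 0.4444444

# Option (b): equal chance of 1 or 3 bullets, so p = 1/6 or 3/6
.5 * (live(1/6) + live(3/6))    # 0.4722222

# Because (1 - p)^2 is convex in p, mixing over an uncertain p can only
# help relative to facing the average p (Jensen's inequality)
```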

Or be lazy and let us do it for you. First, verify that this R function is correct:

prob_live_game = function(bullets) {
  prob_die = bullets / 6
  prob_live = 1 - prob_die
  # Expected value over the tree: die on shot 1 (payoff 0), or live and
  # face the same gamble again on shot 2
  prob_die * 0 + prob_live * (prob_die * 0 + prob_live * 1)
}

Now see below that the probability of living with 2 bullets is .44, while the probability of living with an equal chance of 1 or 3 bullets is .47.

> #probability living with 2 bullets
> prob_live_game(2)
[1] 0.4444444
> #probability living with 1 bullet
> prob_live_game(1)
[1] 0.6944444
> #probability living with 3 bullets
> prob_live_game(3)
[1] 0.25
> #probability living with an equal chance of 1 or 3 bullets
> .5 * (prob_live_game(1) + prob_live_game(3))
[1] 0.4722222
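As a sanity check, a quick Monte Carlo simulation (our own sketch, not from the original post) reproduces these numbers to within simulation error:

```r
set.seed(1)  # arbitrary seed, just for reproducibility

# One full game: spin before each of two trigger pulls; survive if both
# random chambers (1..6) land above the loaded ones
play_twice = function(bullets) {
  all(sample(6, 2, replace = TRUE) > bullets)
}

# Option (a): always 2 bullets
mean(replicate(1e5, play_twice(2)))                   # about .444

# Option (b): pick a 1- or 3-bullet gun at random, use it both times
mean(replicate(1e5, play_twice(sample(c(1, 3), 1))))  # about .472
```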

May 15, 2017

The Hillel Einhorn new investigator award 2017

Filed in SJDM


The Society for Judgment and Decision Making is inviting submissions for the Hillel Einhorn New Investigator Award. The purpose of this award is to encourage outstanding work by new researchers. Individuals are eligible if they have not yet completed their Ph.D. or if they have completed their Ph.D. within the last five years (on or after July 1, 2012). To be considered for the award, please submit a journal-style manuscript on any topic related to judgment and decision making.

In the case of co-authored papers, if the authors are all new investigators they can be considered jointly; otherwise, the new investigator(s) must be the primary author(s) and should be the primary source of ideas. Submissions in dissertation format will not be considered, but articles based on a dissertation are encouraged. Both reprints of published articles and manuscripts that have not yet been published are acceptable. We ask for submissions with names, affiliations, and author notes removed for blind review. Submissions that are not properly anonymized will be invalid.

There have been two changes in policy. First, a given paper can only be submitted for consideration once. Thus, papers submitted in any prior year may not be submitted this year. Second, you must be a member at the time of submission. You need your member password to submit. If you are not a member, you should join by 17 June so as to be sure to have your password before the deadline. Instructions on becoming a member are here: http://www.sjdm.org/join.html.

Submissions will be judged by a committee appointed by the Society. To be considered, submissions must be received by 19 June, 2017 (11:59 PM, Pacific Time). The committee will announce the results to the participants by 10 October 2017. The award will be announced and presented at the annual meeting of the Society for Judgment and Decision Making. The winner must be available to accept the award at the annual meeting and will be invited to give a presentation of their paper. Do not submit a paper if you know that you cannot attend this year’s annual meeting. If the winner cannot obtain full funding from his/her own institution to attend the meeting, an application may be made to the Society for supplemental travel needs.

Submission instructions and the submission portal are available here: http://www.sjdm.org/awards/einhorn.html.

May 11, 2017

SJDM Conference, Vancouver, Nov 10-13, 2017

Filed in Conferences, SJDM, SJDM-Conferences


The Society for Judgment and Decision Making (SJDM) invites abstracts for oral presentations and posters on any interesting topic related to judgment and decision making. Completed manuscripts are not required (i.e., the conference is non-archival).


SJDM’s annual conference will be held in Vancouver, British Columbia, November 10-13, 2017. The conference will take place at the Fairmont Waterfront Hotel. Plenary events will include keynote talks on Sunday, November 12 delivered by Robert Cialdini and Richard Thaler.


The deadline for submissions is Monday, June 19, 2017, end of the day. Submissions for oral presentations and posters should be made through the SJDM website at http://www.sjdm.org/abstract-review/htdocs. Technical questions can be addressed to the webmaster, Jon Baron, at webmaster@sjdm.org. All other questions can be addressed to the program chair, Suzanne Shu, at suzanne.shu@anderson.ucla.edu.


At least one author of each presentation must be a member of SJDM, by one week after the deadline for submission (to allow time for dues paid by mail). You may join SJDM at http://www.sjdm.org/join.html. An individual may give only one talk and present only one poster, but may be a co-author on multiple talks and/or posters. Please note that both the membership rule and the one-talk/one-poster rule will be strictly enforced.


Travelers from certain countries may need extra lead time to obtain travel documents. Although we are unable to accept talks early, we can provide notification of an “accepted presentation.” This means that you would at least be guaranteed a poster. We can do this because posters are typically evaluated only for content and most are accepted. To take advantage of this option, you should still submit through the regular process, make sure to indicate that you are willing to present a poster, and also send a request to the program chair, Suzanne Shu, at suzanne.shu@anderson.ucla.edu.


The Best Student Poster Award is given for the best poster presentation whose first author is a student member of SJDM.

The Hillel Einhorn New Investigator Award is intended to encourage outstanding work by new researchers. Applications are due June 19, 2017. Further details are available at http://www.sjdm.org/awards/einhorn.html. Questions can be directed to Gretchen Chapman, gretchen.chapman@rutgers.edu.

The Jane Beattie Memorial Fund subsidizes travel to North America for a foreign scholar in pursuits related to judgment and decision research, including attendance at the annual SJDM meeting. Further details will be available at http://www.sjdm.org/awards/beattie.html.


Suzanne Shu (Chair), Oleg Urminsky, Danny Oppenheimer, Nina Mazar, Thorsten Pachur, Dan Schley, and Bettina von Helversen; Kate Wessels and Kaye de Kruif (conference co-coordinators)

May 3, 2017

25th International Meeting of the Brunswik Society, Vancouver, Nov 9, 2017

Filed in Conferences, SJDM, SJDM-Conferences


After a hiatus of ten years, the 25th Annual International Meeting of the Brunswik Society will be held on Thursday, November 9, 2017 in Vancouver, British Columbia, at the Vancouver Convention Center West. The program will begin at 9:00 am and end at 6:00 pm.

This meeting is dedicated to the memory of the late Kenneth R. Hammond, on the occasion of his 100th birthday. We invite papers and/or panel discussion proposals on any theoretical or empirical/applied topic directly related to Egon Brunswik’s theoretical lens model framework and method of representative design, including approaches based on Brunswikian principles. Proposals focusing on Ken Hammond’s contributions to the Brunswikian tradition are especially encouraged.

Please send a brief abstract (125 words), and indicate whether the paper/discussion is theoretical or empirical, to Mandeep Dhami by Monday, July 3rd. Kindly respect this submission due date. We cannot guarantee a presenting slot to those who do not meet the submission deadline.

Meeting organizers are Mandeep Dhami (m.dhami at mdx.ac.uk) and Jeryl Mumpower (jmumpower at tamu.edu). The meeting is held concurrently with the Psychonomic Society Annual Meeting and just before the Society for Judgment and Decision Making meeting. More details about the 2017 meeting, including registration instructions, will be posted on the Brunswik Society website at http://brunswik.org.

NOTE: Putting a “c” in Brunswik is a rookie mistake.

April 26, 2017

New York City streets are not as regular as you might think

Filed in Encyclopedia, Ideas


The inaccurately named blog Stuff Nobody Cares About did a post on something we do care about.

They got hold of an old data table showing that the distances between streets, the distances between avenues, and the widths of streets and avenues in New York City vary more than you might think.

For example:

* The distance between Lexington and Park Avenues is 405 feet, but the distance between 5th and 6th Avenues is 920 feet, or 127% farther apart.

* 6th and 7th Streets are 181.75 feet apart, but the streets between 11th and 16th Streets are 206.5 feet apart, or 14% farther apart.
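The percentage comparisons above are easy to check in R (a small sketch we added; the distances come from the post, the helper name is ours):

```r
# Percent by which the longer distance exceeds the shorter one
pct_farther = function(short, long) round(100 * (long - short) / short)

pct_farther(405, 920)       # 127 (Lexington-Park vs. 5th-6th Avenues)
pct_farther(181.75, 206.5)  # 14  (6th-7th vs. 11th-16th Streets)
```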

One reason we find this interesting is that we’ve heard people argue about how long it takes to walk a crosstown or uptown block in Manhattan, when it depends on which blocks you’re talking about.

We also find this a useful reminder that our mental representations are models that simplify reality.

When you live in New York City, you tend to think you are in a perfect grid. Here we see that it’s not the case.

Also interesting is that the Avenues in Manhattan deviate from True North by about 29 degrees.

April 16, 2017

The SJDM Newsletter is ready for download

Filed in SJDM



The quarterly Society for Judgment and Decision Making newsletter can be downloaded from the SJDM site:


Dan Goldstein
SJDM Newsletter Editor

April 14, 2017

Another rule of three, this one in statistics

Filed in Encyclopedia, Ideas, R


We wrote last week of Charles Darwin’s love of the “rule of three” which, according to Stigler “is simply the mathematical proposition that if a/b = c/d, then any three of a, b, c, and d suffice to determine the fourth.”

We were surprised to learn this is called the rule of three, as we had only heard of the rule of three in comedy. We were even more surprised when a reader wrote in telling us about a rule of three in statistics. According to Wikipedia: the rule of three states that if a certain event did not occur in a sample with n subjects … the interval from 0 to 3/n is a 95% confidence interval for the rate of occurrences in the population.

It’s a heuristic. We love heuristics!

We decided to give it a test run in a little simulation. You can imagine that we’re testing for defects in products coming off of a production line. Here’s how the simulation works:

  • We test everything that comes off the line, one by one, until we come across a defect (test number n+1).
  • We then make a confidence interval bounded by 0 and 3/n and make note of it. In the long run, about 95% of such intervals should contain the true underlying probability of defect.
  • Because it’s a simulation and we know the true underlying probability of defect, we make note of whether the interval contains the true probability of defect.
  • We repeat this 10,000 times at each of the following underlying probabilities: .001, .002, and .003.

Let’s work through an example. Suppose you watch 1,000 products come off the line without defects and you see that the 1,001st product is defective. You plug n = 1000 into 3/n and get .003, making your 95% confidence interval for the probability of a defective product 0 to .003.
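Incidentally, the “3” in the rule comes from solving (1 − p)^n = .05 for p: since −log(.05) ≈ 3, the solution is p ≈ 3/n. A quick check in R (our sketch):

```r
n = 1000
# If the true defect rate sat exactly at the upper bound p = 3/n, the
# chance of seeing n defect-free products in a row would be about 5%
(1 - 3/n)^n   # about 0.0496
exp(-3)       # about 0.0498
-log(0.05)    # about 3, which is where the "3" comes from
```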

The simulation thus far assumes the testers have the patience to keep testing until they find a defect. In reality, they might get bored and stop before the first defect is found. To address this, we also simulated another condition in which testing stops at n/2, halfway to the first defect. Of course, people have no way of knowing whether they are halfway to the first defective test, but our simulation can at least tell us what kind of confidence interval one gets if one does indeed stop halfway.

Here’s the result on bracketing, that is, how often the confidence intervals contain the correct value:

Across all three levels of true underlying probabilities, when stopping immediately before the first defect, we get 95% confidence intervals. However, when we stop halfway to the first defect, we get closer to 100% intervals (99.73%, 99.80%, and 99.86%, respectively).

So we know that the upper bounds of these confidence intervals fall above the true probability 95% to about 99.9% of the time, but where do they fall?

In the uppermost figure of this post, we see the locations of the upper bounds of the simulated confidence intervals when we stop immediately before the first defect. For convenience, we draw blue lines at the true underlying probabilities of .001 (top), .002 (middle), and .003 (bottom). When it’s a 95% confidence interval, about 95% of the upper bounds should fall to the right of the blue line, and 5% to the left. Note that we’re zooming in to cut the x axis at .05 so you can actually see something. Keep in mind it extends all the way to 1.0, with the heights of the bars trailing off.

For comparison, let’s look at the case in which we stop halfway to the first defect. As suggested by the bracketing probabilities, here we see almost all of the upper bounds exceed the true underlying probabilities. As our applied statistician reader wrote us about the rule of three, “the weakness is that in some situations it’s a very broad confidence interval.”

A Look at the Rule of Three
B. D. Jovanovic and P. S. Levy
The American Statistician
Vol. 51, No. 2 (May, 1997), pp. 137-139
DOI: 10.2307/2685405
Stable URL: http://www.jstor.org/stable/2685405


library(dplyr)
library(tidyr)
library(ggplot2)

levels = c(.001, .002, .003)
ITER = 10000
res_list = vector('list', ITER * length(levels))
index = 1
for (true_p in levels) {
  for (i in 1:ITER) {
    # Simulate a long run of tests: 1 = defect, 0 = no defect
    onesam = sample(
      x = c(1, 0),
      size = 10 * 1 / true_p,
      prob = c(true_p, 1 - true_p),
      replace = TRUE
    )
    # Number of defect-free tests before the first defect
    cut = which.max(onesam) - 1
    upper_bound_halfway = min(3 / (cut / 2), 1)
    upper_bound_lastpossible = min(3 / cut, 1)
    res_list[[index]] = data.frame(
      true_p = true_p,
      cut = cut,
      upper_bound_halfway = upper_bound_halfway,
      bracketed_halfway = true_p < upper_bound_halfway,
      upper_bound_lastpossible = upper_bound_lastpossible,
      bracketed_lastpossible = true_p < upper_bound_lastpossible
    )
    index = index + 1
  }
}
df = do.call('rbind', res_list)

# Bracketing probability by condition
plot_data = rbind(
  df %>% group_by(true_p) %>%
    summarise(bracketing_probability = mean(bracketed_halfway), type = "halfway"),
  df %>% group_by(true_p) %>%
    summarise(bracketing_probability = mean(bracketed_lastpossible), type = "last possible")
)
p = ggplot(plot_data,
           aes(x = true_p, y = bracketing_probability, group = type, fill = type)) +
  geom_bar(stat = "identity", position = "dodge") +
  coord_cartesian(ylim = c(.5, 1)) +
  theme_bw() +
  theme(legend.position = "bottom",
        panel.grid.minor.x = element_blank()) +
  labs(x = "True Probability", y = "Bracketing Probability")

# Histogram of the upper bounds, binned at .001
plot_data2 = df %>%
  dplyr::select(-bracketed_halfway, -bracketed_lastpossible) %>%
  tidyr::gather(bound_type, upper_bound,
                c(upper_bound_halfway, upper_bound_lastpossible)) %>%
  arrange(bound_type, upper_bound) %>%
  mutate(bin = floor(upper_bound / .001) * .001) %>%
  group_by(bound_type, true_p, bin) %>%
  summarise(count = n()) %>%
  ungroup()

p = ggplot(subset(plot_data2, bound_type == "upper_bound_lastpossible"),
           aes(x = bin + .0005, y = count)) +
  geom_bar(stat = "identity", width = .0005) +
  geom_vline(aes(xintercept = true_p), color = "blue") +
  coord_cartesian(xlim = c(0, .05)) +
  labs(x = "Upper Bound", y = "Count") +
  facet_grid(true_p ~ .) +
  theme_bw() +
  theme(legend.position = "none")

# Repeat for upper_bound_halfway
p = ggplot(subset(plot_data2, bound_type == "upper_bound_halfway"),
           aes(x = bin + .0005, y = count)) +
  geom_bar(stat = "identity", width = .0005) +
  geom_vline(aes(xintercept = true_p), color = "blue") +
  coord_cartesian(xlim = c(0, .05), ylim = c(0, 1750)) +
  labs(x = "Upper Bound", y = "Count") +
  facet_grid(true_p ~ .) +
  theme_bw() +
  theme(legend.position = "none")

April 5, 2017

Darwin, the rule of three, and little use for higher mathematics

Filed in Books, Encyclopedia, Ideas

The Rule of Three: A/B = C/D

We came across an interesting passage in our former professor Stephen Stigler’s “The Seven Pillars of Statistical Wisdom.”

Charles Darwin had little use for higher mathematics. He summed up his view in 1855 in a letter to his old friend (and second cousin) William Darwin Fox with a statement that Karl Pearson made famous: “I have no faith in anything short of actual measurement and the Rule of Three.” In 1901, Pearson adopted this as the motto for the new journal Biometrika … it was as close to an endorsement of mathematics as Pearson could find in Darwin’s writing.

Darwin was surely right in valuing actual measurement, but his faith in the Rule of Three was misplaced. The Rule of Three that Darwin cited would have been familiar to every English schoolboy who had studied Euclid’s book 5. It is simply the mathematical proposition that if a/b = c/d, then any three of a, b, c, and d suffice to determine the fourth. For Darwin, this would have served as a handy tool for extrapolation, just as it had for many others before him … In the 1600s, John Graunt and William Petty had used such ratios to estimate population and economic activity; in the 1700s and early 1800s, so, too, did Pierre Simon Laplace and Adolphe Quetelet.

Neither Darwin nor anyone before him realized what weak analytical support the Rule of Three provides. The rule works well in prorating commercial transactions and for the mathematical problems of Euclid; it fails to work in any interesting scientific question where variation and measurement error are present …
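To make the extrapolation concrete, here is a minimal R sketch (the numbers are made up for illustration, not from Stigler or Darwin):

```r
# If a/b = c/d, any three of the quantities determine the fourth: d = c * b / a
rule_of_three = function(a, b, c) c * b / a

# E.g., if 3 acres yield 120 bushels, extrapolate the yield of 10 acres
rule_of_three(3, 120, 10)  # 400
```

This is exactly the kind of proportional extrapolation that breaks down once variation and measurement error enter the picture.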

1) The rule of three. Who knew it had that name? Could be a good way to get your kids interested in it.
2) The other rule of three we know is in comedy, where it’s pretty darn important.
3) Who knew that Darwin and Pearson had little faith in fancy math? Wonder what they’d make of modern statistical methods.

March 29, 2017

Call for nominations: 5th Exeter Prize for Research in Experimental Economics, Decision Theory, and Behavioral Economics

Filed in Programs


Call for nominations: 5th Exeter Prize for Research in Experimental Economics, Decision Theory, and Behavioral Economics

The University of Exeter Business School will award a prize of £2,000 for the most outstanding article published in a refereed journal in 2016 from the following fields:

-Experimental Economics
-Decision Theory
-Behavioral Economics

Papers can qualify under any one of the following categories:
1. Any paper that involves either lab or field experiments.
2. Any purely theoretical paper that involves “behavioral” theory (for example, non-expected utility).
3. Any empirical work that shows evidence for behavioral models (that fit under 2) or tests/rejects models (that fit under 2).

In addition to the monetary prize, the author or representative from the authors of the winning paper will be invited to present that paper and related research at the University of Exeter in the Fall of this year.

We would like to invite you to nominate a paper. To qualify it must be published in 2016 and in one of the above-mentioned fields. The date must be the in-print date rather than the on-line date. You may send the nomination via an email to the following address: feelmail at exeter.ac.uk. Please write ‘Exeter Prize Nomination’ in the subject field. Note that you are allowed to nominate your own papers.

We will generate a shortlist of papers from the nominations. The shortlist will be evaluated by a panel, who will then decide the winner. This year our panel members are:

– Glenn Harrison (Georgia State University)
– Michael Mandler (Royal Holloway University of London)
– Michel Regenwetter (University of Illinois)

The deadline for submitting a nomination is May 1, 2017.

The winner of the 2016 Exeter Prize was “Identifying Expertise to Extract the Wisdom of Crowds” by David Budescu and Eva Chen, published in Management Science.

The winner of the 2015 Exeter Prize was “Experimental games on networks: underpinnings of behavior and equilibrium selection” by Gary Charness, Francesco Feri, Miguel Melendez, and Matthias Sutter, published in Econometrica.

The winner of the 2014 Exeter Prize was “Temporal Resolution of Uncertainty and Recursive Models of Ambiguity Aversion” by Tomasz Strzalecki, published in Econometrica.

The winner of the 2013 Exeter Prize was “A Continuous Dilemma” by Daniel Friedman and Ryan Oprea, published in the American Economic Review.

The winner of the 2012 Exeter Prize was “Transitivity of Preferences” by Michel Regenwetter, Jason Dana, and Clintin P. Davis-Stober, published in Psychological Review.

For more details on the prize see:

March 22, 2017

Process Tracing Studies Conference, Galway, Ireland, June 22-24, 2017

Filed in Conferences


What: 36th meeting of the European Group of Process Tracing Studies in Judgment and Decision Making Conference (EGPROC)
When: June 22-24, 2017
Where: National University of Ireland Galway

Submissions and registrations for the meeting are open until April 18th via http://tiny.cc/egproc2017

We are delighted to host the 36th meeting of the European Group of Process Tracing Studies in Judgment and Decision Making (EGPROC) from the 22nd to the 24th of June 2017 at the National University of Ireland Galway in Ireland. This meeting is sponsored by the European Association for Decision Making (EADM).

The deadline for abstract submission is the 18th of April, 2017.

The EGPROC meeting is an annual gathering of researchers interested in process tracing research in the area of judgment and decision making, where participants present and discuss recent research and ideas in an open and relatively informal atmosphere. Process tracing approaches include eye tracking, mouse cursor tracking, verbal protocols, and neurological correlates of decision making, to name a few. Such approaches are technical, so, in addition to presenting the latest advances in the field, the meeting aims to facilitate the transfer of best practice and “lab lore” across laboratories to support the development of the next generation of process tracing researchers.

This year, we are delighted to host Professor Neil Stewart of Warwick University as our keynote speaker. In addition, we will have a panel discussion on the first day of the conference facilitated by Dr KongFatt Wong-Lin of Ulster University, a Moore Institute Visiting Fellow, discussing neural plausibility of decision-making models. These events will allow attendees to discuss decision theoretic concepts with the architects of two influential decision models.