
Mind your Ps

Filed in Encyclopedia, Profiles, Research News, Tools


[Interactive demo on the original site: click through to try it]

We were exploring Jerry Dallal’s site and came across this cute gizmo linked to as “a valuable lesson”.

Clicking the button simulates running 20 significance tests, each of which has a 5% chance of coming up significant when no effect is present. Underneath Jerry writes, “The chance that nothing is significant is only 0.3585, so don’t give up hope!”
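That 0.3585 is simply the chance that all 20 independent tests stay non-significant at the 5% level, i.e. 0.95^20. A quick check of the arithmetic, plus one simulated "click of the button" (a sketch in Python; the 20-test, alpha = 0.05 setup follows Jerry's demo):

```python
import random

# Probability that none of 20 independent tests at alpha = 0.05
# comes up significant when no effect is present: 0.95^20.
p_none = 0.95 ** 20
print(round(p_none, 4))  # 0.3585

# One "click of the button": 20 null tests, each significant with
# probability 0.05. Count how many come up significant by chance.
random.seed(42)
hits = sum(random.random() < 0.05 for _ in range(20))
print(hits, "of 20 tests significant with no effect present")
```

The complement, 1 - 0.3585 ≈ 0.64, is the chance that at least one of the 20 tests looks significant for no reason at all.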

This is probably a good thing to show to new grad students, who, I suspect, get a bit too excited over significant results. We wonder how long it takes new scientists to realize that all that glitters is not meaningful. Simply administering, say, the same 30-question survey to 4 different randomly-assigned groups should be enough to teach this lesson, so one would think that researchers ought to learn this quickly.
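The survey experiment above can be simulated directly. Here is a sketch in pure Python (the 30 questions and 4 groups come from the text; the group size of 25 and the permutation-based one-way ANOVA are my own stand-ins for whatever test one would actually run):

```python
import random

random.seed(0)

def anova_F(groups):
    """One-way ANOVA F statistic for a list of groups of numbers."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def permutation_p(groups, n_perm=200):
    """p-value by shuffling all answers across groups and recomputing F."""
    observed = anova_F(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    exceed = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        perm, i = [], 0
        for s in sizes:
            perm.append(pooled[i:i + s])
            i += s
        if anova_F(perm) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# 30 questions, 4 randomly-assigned groups of 25, and NO real group
# differences: every answer is drawn from the same distribution.
significant = 0
for question in range(30):
    groups = [[random.gauss(0, 1) for _ in range(25)] for _ in range(4)]
    if permutation_p(groups) < 0.05:
        significant += 1
print(significant, "of 30 questions 'significant' by chance alone")
```

With no effect anywhere, roughly 30 × 0.05 ≈ 1.5 questions should show a "significant" group difference on a typical run, which is exactly the glitter the new grad student is tempted to chase.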

We notice that seasoned researchers, who are generally comfortable dismissing insignificant significance, fall into two camps. The first camp waves the results away as noise. The second camp believes that there was an underlying effect, but dismisses it as stemming from an ignore-worthy flaw in the design: “we’re seeing this because that group answered first thing in the morning”, “…right after lunch”, “…on Monday”, “…at the end of class”, etc. They never seem to say “we’re seeing this because a bunch of people who answer that way got randomly put in that group”.


  1. The Endeavour » Blog Archive » Statistically significant but incorrect says:

    […] Decision Science News blog has an article highlighting a tool to illustrate how often experiments with significant […]

    August 19, 2008 @ 7:00 am
