GET A FEELING FOR ACCIDENTAL SIGNIFICANCE
Click through to the site to try it
Clicking the button simulates running 20 significance tests, each of which has a 5% chance of coming up significant when no effect is present. Underneath Jerry writes, “The chance that nothing is significant is only 0.3585, so don’t give up hope!” (That is, 0.95 to the 20th power, about 0.3585.)
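For readers who want to play along without clicking, here is a minimal sketch of what one press of the button amounts to (the function name `click` is ours, not from the site):

```python
import random

random.seed(1)  # for reproducibility

def click(n_tests=20, alpha=0.05):
    """Simulate one 'click': n_tests tests of true nulls,
    each coming up significant with probability alpha."""
    return sum(random.random() < alpha for _ in range(n_tests))

# Analytic check of Jerry's number: probability that none of
# the 20 tests is significant is 0.95^20.
p_none = (1 - 0.05) ** 20
print(round(p_none, 4))  # 0.3585

# Frequency of all-null clicks over many simulated presses
# should hover near that value.
clicks = 100_000
frac_none = sum(click() == 0 for _ in range(clicks)) / clicks
print(frac_none)
```

So roughly two presses out of three will show at least one “significant” result, even though nothing real is going on.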
This is probably a good thing to show new grad students, who I suspect get a bit too excited over significant results. We wonder how long it takes new scientists to realize that all that glitters is not meaningful. Simply administering, say, the same 30-question survey to 4 different randomly assigned groups should be enough to teach this lesson, so one would think that researchers learn it quickly.
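A little back-of-the-envelope arithmetic suggests why that classroom exercise works so well. If each of the 30 questions is compared pairwise across the 4 groups at the 5% level, the test count multiplies fast. The independence assumption below is of course a rough approximation (pairwise tests on the same data are correlated), but the expected count of false positives needs no such assumption:

```python
from math import comb

questions = 30
groups = 4
alpha = 0.05

pairs = comb(groups, 2)        # 6 pairwise comparisons per question
tests = questions * pairs      # 180 tests in all

# Expected false positives under the null (linearity of expectation,
# no independence needed):
expected = tests * alpha
print(expected)  # 9.0

# Chance of at least one "significant" result, crudely treating
# the tests as independent:
p_any = 1 - (1 - alpha) ** tests
print(round(p_any, 4))  # 0.9999
```

With around nine spurious hits expected per survey administration, the lesson essentially teaches itself.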
We notice that seasoned researchers, who are generally comfortable dismissing accidental significance, fall into two camps. The first camp waves the results away as noise. The second camp believes there was an underlying effect, but dismisses it as stemming from an ignore-worthy flaw in the design: “we’re seeing this because that group answered first thing in the morning”, “…right after lunch”, “…on Monday”, “…at the end of class”, etc. They never seem to say “we’re seeing this because a bunch of people who answer that way got randomly put in that group”.