{"id":231,"date":"2007-06-13T06:47:32","date_gmt":"2007-06-13T05:47:32","guid":{"rendered":"http:\/\/www.decisionsciencenews.com\/?p=231"},"modified":"2007-07-17T13:59:39","modified_gmt":"2007-07-17T12:59:39","slug":"should-you-test-for-statistical-significance","status":"publish","type":"post","link":"https:\/\/www.decisionsciencenews.com\/?p=231","title":{"rendered":"Should you test for statistical significance?"},"content":{"rendered":"<p>ARGUMENTS AGAINST ALL SIGNIFICANCE TESTS<\/p>\n<p style=\"text-align: center\"><img decoding=\"async\" src=\"http:\/\/www.decisionsciencenews.com\/wp-content\/uploads\/2007\/06\/st.gif\" alt=\"st\" \/><\/p>\n<p>This week, the always-provocative J. Scott Armstrong submits this comment to Decision Science News:<\/p>\n<p><font color=\"#000000\">&#8220;About two years ago, I was a reasonable person who  argued that tests of statistical significance were useful in some limited  situations. After completing research for &#8220;Significance tests harm progress in  forecasting&#8221; in the<em> International Journal of Forecasting,<\/em> 23 (2007),  321-327, I have concluded that  tests of statistical significance should never  be used. Here is the abstract:<\/font><\/p>\n<blockquote><\/blockquote>\n<blockquote><p><font color=\"#000000\">I briefly summarize prior research showing that  tests of statistical significance are improperly used even in leading scholarly  journals. Attempts to educate researchers to avoid pitfalls have had little  success. Even when done properly, however, statistical significance tests are of  no value. Other researchers have discussed reasons for these failures. I was  unable to find empirical evidence to support the use of significance tests under  any conditions. I then show that tests of statistical significance are harmful  to the development of scientific knowledge because they distract the researcher  from the use of proper methods. 
I illustrate the dangers of significance tests by examining a re-analysis of the M3-Competition. Although the authors of the re-analysis conducted a proper series of statistical tests, they suggested that the original M3-Competition was not justified in concluding that combined forecasts reduce errors, and that the selection of the best method is dependent on the selection of a proper error measure; however, I show that the original conclusions were correct. Authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications\/extensions, and meta-analyses. Practitioners should ignore significance tests and journals should discourage them.<\/font> <a href=\"http:\/\/dx.doi.org\/10.1016\/j.ijforecast.2007.03.004\" target=\"_blank\">http:\/\/dx.doi.org\/10.1016\/j.ijforecast.2007.03.004<\/a><\/p><\/blockquote>\n<p>The paper is followed by commentaries by Keith Ord, Herman Stekler, and Paul Goodwin, and by my reply <font color=\"#000000\">&#8220;Statistical significance tests are unnecessary even when properly done and properly interpreted: Reply to commentaries&#8221;<\/font>, which can be found online at <a href=\"http:\/\/dx.doi.org\/10.1016\/j.ijforecast.2007.01.010\" target=\"_blank\">http:\/\/dx.doi.org\/10.1016\/j.ijforecast.2007.01.010<\/a><\/p>\n<p>This is happy news for practitioners, researchers, and students. On the other hand, it might create anguish among faculty who teach people about statistical significance.&#8221;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>ARGUMENTS AGAINST ALL SIGNIFICANCE TESTS This week, the always-provocative J. Scott Armstrong submits this comment to Decision Science News: &#8220;About two years ago, I was a reasonable person who argued that tests of statistical significance were useful in some limited situations. 
After completing research for &#8220;Significance tests harm progress in forecasting&#8221; in the International Journal [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false}}},"categories":[2,10],"tags":[],"class_list":["post-231","post","type-post","status-publish","format-standard","hentry","category-research-news","category-sjdm"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4LKj-3J","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=\/wp\/v2\/posts\/231","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=231"}],"version-history":[{"count":0,"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=\/wp\/v2\/posts\/231\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=231"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=231"},{"taxonomy":"post_tag","
embeddable":true,"href":"https:\/\/www.decisionsciencenews.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=231"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}