
Power pose co-author: I do not believe that “power pose” effects are real.

Filed in Gossip, Ideas, Research News

A WELCOME BELIEF UPDATE ABOUT POWER POSES

Good scientists change their views when the evidence changes

In light of considerable evidence that there is no meaningful effect of power posing, Dana Carney, a co-author of the original article, has come forward stating that she no longer believes in the effect.

The statement is online here, but we record it as plain text below, for posterity.

###BEGIN QUOTE###
My position on “Power Poses”

Regarding: Carney, Cuddy & Yap (2010).

Reasonable people, whom I respect, may disagree. However, since early 2015 the evidence has been mounting that there is unlikely to be any embodied effect of nonverbal expansiveness (vs. contractiveness), i.e., “power poses,” on internal or psychological outcomes.

As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.

Any work done in my lab on the embodied effects of power poses was conducted long ago (while I was still at Columbia University, 2008-2011), well before my views updated. So while it may seem that I continue to study the phenomenon, those papers (appearing in 2014 and 2015) were already published or on the cusp of publication as the evidence against power poses began to convince me that the effects weren’t real. My lab is conducting no research on the embodied effects of power poses.

The “review and summary” paper published in 2015 (in response to Ranehill, Dreber, Johannesson, Leiberg, Sul, & Weber, 2015) seemed reasonable at the time, since there were a number of effects showing positive evidence and only one published study that I was aware of showing no effect. What I regret about writing that “summary” paper is that it suggested people do more work on the topic, which I now think is a waste of time and resources. My sense at the time was to put all the pieces of evidence together in one place so we could see what we had on our hands. Ultimately, this summary paper served its intended purpose because it offered a reasonable set of studies for a p-curve analysis, which demonstrated no effect (see Simmons & Simonsohn, in press). But it also spawned a small uptick in moderator-type work that I now regret suggesting.

I continue to serve as a reviewer on failed replications and re-analyses of the data, signing my reviews as I did in the Ranehill et al. (2015) case, and almost always in favor of publication (I was strongly in favor in the Ranehill case). More failed replications are making their way through the publication process; we will see them soon. The evidence against the existence of power poses is undeniable.

There are a number of methodological comments regarding the Carney, Cuddy, & Yap (2010) paper that I would like to articulate here.

Here are some facts:

1. There is a dataset on Dataverse that was posted by Nathan Fosse. It is posted as a replication, but it is in fact merely a re-analysis. I disagree with one outlier exclusion he has specified in the posted data (subject #47 should also be included, or none of them, since they are mostly 2.5 SDs from the mean; however, the cortisol effect is significant whether cortisol outliers are included or not).
2. The data are real.
3. The sample size is tiny.
4. The data are flimsy. The effects are small and barely there in many cases.
5. Initially, the primary DV of interest was risk taking. We ran subjects in chunks and checked the effect along the way: something like 25 subjects run, then 10, then 7, then 5. Back then this did not seem like p-hacking; it seemed like saving money (assuming your effect size was big enough and the p-value was the only issue). A simulation of why this kind of peeking inflates false positives appears after this list.
6. Some subjects were excluded on bases such as “didn’t follow directions.” The total number of exclusions was 5. The final sample size was N = 42.
7. The cortisol and testosterone data (in saliva at that point) were sent to Salimetrics (which was in State College, PA at that time). The hormone results came back and the data were analyzed.
8. For the risk-taking DV: one p-value for the Pearson chi-square was .052, and for the likelihood ratio it was .05. The smaller of the two was reported, despite the Pearson being the more commonly used test of significance for a chi-square. This is clearly using a “researcher degree of freedom.” I had found evidence that it is more appropriate to use the likelihood ratio when one has smaller samples, and this was how I convinced myself it was OK. (The second sketch after this list shows how the two tests can diverge on the same table.)
9. For the testosterone DV: an outlier for testosterone was found. It was a clear outlier (+3 SDs away from the mean). Subjects with outliers were held out of the hormone analyses but not all analyses.
10. The self-report DV was p-hacked in that many different power questions were asked and those chosen were the ones that “worked.”
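
To make point 5 concrete, here is a minimal simulation of “run a chunk, check p, run another chunk” under a true null. Only the batch sizes (25, 10, 7, 5) come from the statement above; the two-condition design, the t-test, and every number are assumptions for illustration, not the original procedure.

    # Minimal sketch of optional stopping ("peeking") when the null is true.
    # Only the batch sizes are taken from point 5; everything else is assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    batches = [25, 10, 7, 5]      # subjects added before each peek (point 5)
    n_sims, alpha = 10_000, 0.05

    false_positives = 0
    for _ in range(n_sims):
        a, b = [], []             # two conditions; no true effect by construction
        for n in batches:
            a.extend(rng.normal(size=(n + 1) // 2))
            b.extend(rng.normal(size=n // 2))
            if stats.ttest_ind(a, b).pvalue < alpha:  # stop once it "works"
                false_positives += 1
                break

    print(f"False-positive rate with peeking: {false_positives / n_sims:.3f}")
    # Typically noticeably above the nominal 0.05 (roughly 0.08-0.11 here).

The intuition: the first look alone already spends the full 5% error rate, so every additional peek can only add false positives on top of it.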
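And for point 8, a hedged illustration of how the Pearson chi-square and the likelihood-ratio (G) test can return slightly different p-values for the same 2x2 table. The counts below are invented for illustration; they are not the original data.

    # Hypothetical 2x2 table (invented counts, not the original data):
    # rows = pose condition, columns = took the risk / did not.
    from scipy.stats import chi2_contingency

    table = [[18, 4],
             [11, 9]]

    x2, p_pearson, _, _ = chi2_contingency(table, correction=False)
    g, p_lr, _, _ = chi2_contingency(table, correction=False,
                                     lambda_="log-likelihood")

    print(f"Pearson p = {p_pearson:.3f}, likelihood-ratio p = {p_lr:.3f}")
    # The two p-values differ slightly; reporting whichever is smaller,
    # after seeing both, is the "researcher degree of freedom" in point 8.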

Confounds in the Original Paper (which should have been evident in 2010, but became obviously clear only in hindsight)

1. The experimenters were both aware of the hypothesis. The experimenter who ran the pilot study was less aware, but by the end of running the experiment certainly had a sense of the hypothesis. The experimenters who ran the main experiment (the one with the hormones) knew the hypothesis.
2. When the risk-taking task was administered, participants were told immediately afterward whether they had “won.” Winning included an extra prize of $2 (in addition to the $2 they had already received). Research shows that winning increases testosterone (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989). Thus, effects observed on testosterone as a function of expansive posture may have been due to the fact that more expansively postured subjects took the “risk,” and you can only “win” if you take the risk. Therefore, this testosterone effect, if it is even to be believed, may merely be a winning effect, not an expansive-posture effect.
3. Gender was not dealt with appropriately in the testosterone analyses. Data should have been z-scored within gender before statistical tests were conducted (a minimal sketch appears below).
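
A minimal sketch of the within-gender standardization recommended in point 3, assuming a simple long-format table. The column names and values are hypothetical.

    # Hypothetical data; column names and values are invented.
    import pandas as pd

    df = pd.DataFrame({
        "gender": ["M", "M", "M", "F", "F", "F"],
        "testosterone": [95.0, 120.0, 80.0, 48.0, 61.0, 39.0],  # pg/mL, invented
    })

    # Z-score within each gender so a pooled test does not simply
    # pick up the large male/female baseline difference.
    df["t_z"] = df.groupby("gender")["testosterone"].transform(
        lambda x: (x - x.mean()) / x.std())

    print(df)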

Where Do I Stand on the Existence of “Power Poses”?

1. I do not have any faith in the embodied effects of “power poses.” I do not think the effect is real.
2. I do not study the embodied effects of power poses.
3. I discourage others from studying power poses.
4. I do not teach power poses in my classes anymore.
5. I do not talk about power poses in the media and haven’t for over 5 years (well before skepticism set in).
6. My website and downloadable CV note my skepticism about the effect and link to both the failed replication by Ranehill et al. and Simmons & Simonsohn’s p-curve paper suggesting no effect, as well as to this document.

References

Booth, A., Shelley, G., Mazur, A., Tharp, G., & Kittok, R. (1989). Testosterone, and winning and losing in human competition. Hormones and Behavior, 23, 556–571.
Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the robustness of power posing: No effect on hormones and risk tolerance in a large sample of men and women. Psychological Science, 33, 1–4.
Simmons, J. P., & Simonsohn, U. (in press). Power posing: P-curving the evidence. Psychological Science.

###END QUOTE###
