
September 26, 2016

Power pose co-author: I do not believe that “power pose” effects are real.

Filed in Gossip, Ideas, Research News
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)


Good scientists change their views when the evidence changes

In light of considerable evidence that there is no meaningful effect of power posing, Dana Carney, a co-author of the original article, has come forward to state that she no longer believes in the effect.

The statement is online here, but we record it as plain text below, for posterity.

My position on “Power Poses”

Regarding: Carney, Cuddy & Yap (2010).

Reasonable people, whom I respect, may disagree. However, since early 2015 the evidence has been mounting suggesting that there is unlikely to be any embodied effect of nonverbal expansiveness (vs. contractiveness)—i.e., “power poses”—on internal or psychological outcomes.

As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.

Any work done in my lab on the embodied effects of power poses was conducted long ago (while still at Columbia University from 2008-2011) – well before my views updated. And so while it may seem I continue to study the phenomenon, those papers (emerging in 2014 and 2015) were already published or were on the cusp of publication as the evidence against power poses began to convince me that power poses weren’t real. My lab is conducting no research on the embodied effects of power poses.

The “review and summary paper” published in 2015 (in response to Ranehill, Dreber, Johannesson, Leiberg, Sul, & Weber, 2015) seemed reasonable at the time, since there were a number of effects showing positive evidence and only one published study that I was aware of showing no evidence. What I regret about writing that “summary” paper is that it suggested people do more work on the topic, which I now think is a waste of time and resources. My sense at the time was to put all the pieces of evidence together in one place so we could see what we had on our hands. Ultimately, this summary paper served its intended purpose because it offered a reasonable set of studies for a p-curve analysis, which demonstrated no effect (see Simmons & Simonsohn, in press). But it also spawned a little uptick in moderator-type work that I now regret suggesting.

I continue to be a reviewer on failed replications and re-analyses of the data — signing my reviews as I did in the Ranehill et al. (2015) case — almost always in favor of publication (I was strongly in favor in the Ranehill case). More failed replications are making their way through the publication process. We will see them soon. The evidence against the existence of power poses is undeniable.

There are a number of methodological comments regarding the Carney, Cuddy & Yap (2010) paper that I would like to articulate here.

Here are some facts

1. There is a dataset posted on Dataverse by Nathan Fosse. It is posted as a replication but it is, in fact, merely a “re-analysis.” I disagree with one outlier he has specified in the posted data (subject #47 should also be included—or none, since they are mostly 2.5 SDs from the mean; however, the cortisol effect is significant whether cortisol outliers are included or not).
2. The data are real.
3. The sample size is tiny.
4. The data are flimsy. The effects are small and barely there in many cases.
5. Initially, the primary DV of interest was risk-taking. We ran subjects in chunks and checked the effect along the way. It was something like 25 subjects run, then 10, then 7, then 5. Back then this did not seem like p-hacking. It seemed like saving money (assuming your effect size was big enough and p-value was the only issue).
6. Some subjects were excluded on bases such as “didn’t follow directions.” The total number of exclusions was 5. The final sample size was N = 42.
7. The cortisol and testosterone data (in saliva at that point) were sent to Salimetrics (which was in State College, PA at that time). The hormone results came back and data were analyzed.
8. For the risk-taking DV: one p-value, for a Pearson chi square, was .052, and for the likelihood ratio it was .05. The smaller of the two was reported despite the Pearson being the more widely used test of significance for a chi square. This is clearly using a “researcher degree of freedom.” I had found evidence that it is more appropriate to use the likelihood ratio when one has smaller samples, and this was how I convinced myself it was OK.
9. For the testosterone DV: an outlier for testosterone was found. It was a clear outlier (+3 SDs away from the mean). Subjects with outliers were held out of the hormone analyses but not all analyses.
10. The self-report DV was p-hacked in that many different power questions were asked and those chosen were the ones that “worked.”
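The Pearson vs. likelihood-ratio issue in point 8 is easy to make concrete. Below is a minimal, self-contained sketch (the 2×2 counts are invented for illustration; they are not the study’s data) computing both the Pearson chi-square statistic and the likelihood-ratio (G) statistic on the same table. With small samples the two p-values differ, and when a result hovers near .05 that gap is exactly the researcher degree of freedom described above.

```python
import math

def chisq_tests(table):
    """Compare the Pearson chi-square and likelihood-ratio (G) tests
    on a 2x2 table of positive counts.

    table: [[a, b], [c, d]] observed counts.
    Returns (p_pearson, p_likelihood). For 1 degree of freedom the
    chi-square survival function equals erfc(sqrt(x / 2)).
    """
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    pearson = 0.0
    g = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            observed = table[i][j]
            pearson += (observed - expected) ** 2 / expected
            g += 2 * observed * math.log(observed / expected)
    sf = lambda x: math.erfc(math.sqrt(x / 2))  # chi-square(1 df) survival
    return sf(pearson), sf(g)

# Hypothetical counts: rows = posture (expansive, contractive),
# columns = (took the risk, declined).
p_pearson, p_g = chisq_tests([[18, 4], [12, 8]])
```

For this made-up table the likelihood-ratio p-value comes out slightly smaller than the Pearson one, mirroring the .05 vs. .052 situation described in point 8.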

Confounds in the Original Paper (which should have been evident in 2010, but only became obviously clear in hindsight)

1. The experimenters were both aware of the hypothesis. The experimenter who ran the pilot study was less aware, but by the end of running the experiment certainly had a sense of the hypothesis. The experimenters who ran the main experiment (the experiment with the hormones) knew the hypothesis.
2. When the risk-taking task was administered, participants were told immediately after whether they had “won.” Winning included an extra prize of $2 (in addition to the $2 they had already received). Research shows that winning increases testosterone (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989). Thus, effects observed on testosterone as a function of expansive posture may have been due to the fact that more expansively postured subjects took the “risk,” and you can only “win” if you take the risk. Therefore, this testosterone effect—if it is even to be believed—may merely be a winning effect, not an expansive-posture effect.
3. Gender was not dealt with appropriately for testosterone analyses. Data should have been z-scored within-gender and then statistical tests conducted.
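The within-gender standardization described in point 3 is straightforward to implement. Here is a minimal sketch (the hormone values and labels are invented for illustration): z-score testosterone separately within men and women, then pool the standardized scores for the statistical test.

```python
from statistics import mean, stdev

def zscore_within_gender(records):
    """Z-score a hormone measure within gender before pooling.

    records: list of (gender, value) tuples. Returns (gender, z)
    tuples in the original order, using the sample SD per gender.
    """
    by_gender = {}
    for gender, value in records:
        by_gender.setdefault(gender, []).append(value)
    stats = {g: (mean(vals), stdev(vals)) for g, vals in by_gender.items()}
    return [(g, (v - stats[g][0]) / stats[g][1]) for g, v in records]

# Hypothetical testosterone readings (pg/mL): men run higher on average,
# so pooling raw values would confound gender with condition.
data = [("M", 95.0), ("M", 110.0), ("M", 120.0),
        ("F", 40.0), ("F", 55.0), ("F", 48.0)]
standardized = zscore_within_gender(data)
```

After standardization each gender’s scores are centered at zero, so a posture effect can no longer masquerade as a gender difference.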

Where Do I Stand on the Existence of “Power Poses”?

1. I do not have any faith in the embodied effects of “power poses.” I do not think the effect is real.
2. I do not study the embodied effects of power poses.
3. I discourage others from studying power poses.
4. I do not teach power poses in my classes anymore.
5. I do not talk about power poses in the media and haven’t for over 5 years (well before skepticism set in).
6. My website and downloadable CV note my skepticism about the effect and link to both the failed replication by Ranehill et al. and Simmons & Simonsohn’s p-curve paper suggesting no effect, as well as to this document.


Booth, A., Shelley, G., Mazur, A., Tharp, G., & Kittok, R. (1989). Testosterone, and winning and losing in human competition. Hormones and Behavior, 23, 556–571.
Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the robustness of power posing: No effect on hormones and risk tolerance in a large sample of men and women. Psychological Science, 26, 653–656.
Simmons, J. P., & Simonsohn, U. (in press). Power posing: P-curving the evidence. Psychological Science.


September 19, 2016

Pre-conference on debiasing at the SJDM meeting in Boston Nov 18, 2016

Filed in Conferences



Carey Morewedge, Janet Schwartz, Leslie John, and Remi Trudel write:

We invite you to participate in a preconference on Friday, November 18th, 2016 at the Questrom School of Business at Boston University. The preconference will feature a day of talks on debiasing before the annual meeting of the Society for Judgment and Decision Making in Boston, MA. Rather than focusing on how to avoid or circumvent bias in particular contexts, our goal is to extend our field’s conversation about debiasing. To that end, the talks will present state-of-the-art knowledge on improving decision-making abilities from three perspectives:

  • Who is more or less biased in their decision making?
  • Can we reduce bias within an individual?
  • When should we (not) reduce bias?


  • Richard Larrick (Duke University)


  • Rosalind Chow (Carnegie Mellon University)
  • Jason Dana (Yale University)
  • Calvin Lai (Harvard University)
  • Stephan Lewandowsky (University of Bristol)
  • Carey Morewedge (Boston University)
  • Emily Oster (Brown University)
  • Gordon Pennycook (Yale University)
  • Robert J. Smith (University of Miami, Harvard Law School)

The preconference is from 9am to 4pm and includes invited talks, a datablitz, lunch, and a keynote. The conference will be hosted at the Questrom School of Business at Boston University, 595 Commonwealth Ave., Boston, MA 02215. All registered attendees are welcome to submit a presentation for the data blitz, an hour of 5 minute talks. Please submit a title and abstract of no more than 150 words before September 1st for consideration. Due to space limitations, registration is on a first come, first served basis until all seats are filled. Registration, which covers coffee and lunch, costs $40 for faculty and $20 for students/postdocs.

More information and a portal to sign up for the conference and submit a presentation for the data blitz can be found here: http://blogs.bu.edu/decision/

September 12, 2016

The Lab @ DC hiring now. Deadline Sept 19, 2016.

Filed in Jobs



It’s a great time to be in decision science / behavioral science. There are jobs everywhere in academia, industry, and government. Hey, speaking of social science jobs in government, check this out (via David Yokum):

The Lab @ DC is a new scientific team in the Executive Office of the Mayor of the District of Columbia Government. We’re well funded, work across all areas of government, and we’re deeply excited about applied research.

We’re based directly in the Office of the City Administrator—so we’re connected and poised to work on the most important policy and programmatic issues—and we’ll conduct work that is both highly applied and cutting-edge. (One of the first projects, for example, is a large randomized controlled trial of the police body-worn camera program.) We’re working to embed the scientific method into the heart of day-to-day governance, across all policy areas.

We’re about to launch a website and other materials, but we’re already hiring  […] it’s going to be very competitive …

The deadline to apply is September 19, 2016. You can find position descriptions for the following:

Take a look and please share with colleagues who you think would be a good fit. Note the initial application is quick: just drop a resume and complete an HR questionnaire before Sept. 19th [2016]; we’ll then follow up with more details.

If you or colleagues have general questions, you can also reach out at thelab@dc.gov.

September 5, 2016

FTC public workshop on Putting Disclosures to the Test, Sept 15, 2016

Filed in Conferences



View this announcement online

Decision Science News will be in the house!

The Federal Trade Commission will host a public workshop in Washington, DC on September 15, 2016 to examine the testing and evaluation of disclosures that companies make to consumers about advertising claims, privacy practices, and other information.

Effective disclosures are critical in helping consumers make informed decisions in the marketplace.

Many advertisers have used disclosures in an attempt to prevent their advertisements from being deceptive. Disclosures must be crafted with care with respect to both their language and their presentation. Disclosures used in the marketplace are sometimes ineffective. Commission staff has recommended that disclosures be tested for effectiveness.

Disclosures are also challenging in the privacy arena, whether disclosing to consumers that their physical location or online interactions are being tracked, or explaining privacy practices when consumers sign up for a service. Privacy policies are often long and difficult to comprehend and privacy-related icons may fail to communicate information meaningfully to consumers. Furthermore, the accompanying mechanisms for consumers to provide informed consent or exercise choices about the use of their data may also be confusing. The Commission has long encouraged the development and testing of shorter, clearer, easier-to-use privacy disclosures and consent mechanisms.

The FTC has issued guides to help businesses avoid deceptive claims, such as guidance related to endorsements, environmental claims, fuel economy advertising, and the jewelry industry. Often the guidance presents options for qualifying claims to avoid deception. In developing guides, the Commission has sometimes relied on consumer research to gauge whether specific disclosures can be used to qualify otherwise misleading claims.

The FTC has a long-standing commitment to understanding and testing the effectiveness of consumer disclosures, and is especially interested in learning about the costs and benefits of disclosure testing methods in the digital age. A number of factors affect the effectiveness of disclosures, including whether they contain the most essential information and whether consumers notice them, direct their attention toward them, comprehend them, and are able to use that information in their decision making. Some testing methods are more appropriate than others for evaluating these factors.

The workshop is aimed at encouraging and improving the evaluation and testing of disclosures by industry, academics, and the FTC. The FTC’s workshop will explore how to test the effectiveness of these disclosures to ensure consumers notice them, understand them and can use them in their decision-making. It is intended to further the understanding of testing and evaluation of both offline and online consumer disclosures, including those delivered through icons, product labels, short text, long text, audio or video messages, interactive tools, and other media. Topics may include evaluation criteria, testing methodologies and best practices, case studies, and lessons learned from such testing.

No registration is necessary to attend. The workshop will be webcast and a link will be available here on the day of the event.

An agenda is online.

September 1, 2016

Winter School on Bounded Rationality in India, January 9-15, 2017

Filed in Programs



The T A Pai Management Institute (TAPMI), in collaboration with the Max Planck Institute for Human Development (MPIB), is excited to announce the Winter School on Bounded Rationality at TAPMI, Manipal (Karnataka), India, to be held January 9–15, 2017. The winter school aims to foster understanding of the process and quality of human decisions and to apply this knowledge to the real world, enabling people to make better decisions in a complex world. To this end, it offers a unique forum for decision-making scholars and researchers from various disciplines to share their approaches, discuss their research and applications, and inspire each other.

Gerd Gigerenzer
Director of the Center for Adaptive Behavior and Cognition and the Harding Center for Risk Literacy, Max Planck Institute for Human Development, Germany.

The winter school will focus on a diverse set of topics:

  • Bounded Rationality, Ecological Rationality, Social Rationality
  • Behavioral Economics and Finance
  • Heuristics
  • Fast and Frugal Trees
  • Risk and Risk Literacy
  • Medical Decision Making

Seminars, talks, panel discussions, workshops, poster sessions, and social events will take place, allowing participants to learn and develop new ideas in broad areas of Judgment and Decision Making, facilitated by frequent interactions with the teaching faculty members.

The deadline for application is September 25, 2016. Participation will be free, accommodation will be provided, and travel expenses will be partly reimbursed. Winter School web link (includes contact details and application procedure):

For further questions, email us at winterschool@tapmi.edu.in. We look forward to seeing you in Manipal!

August 24, 2016

FTC and Marketing Science joint conference on consumer protection in DC, Sept 16, 2016

Filed in Conferences



View this announcement online

The Federal Trade Commission’s Bureau of Economics and Marketing Science are co-organizing a one-day conference to bring together scholars interested in issues at the intersection of marketing and consumer protection policy and regulation. As the primary consumer protection law enforcement agency, the FTC has benefited from the marketing literature in its long history of case and policy work. The goal of the conference is to promote an intellectual dialogue between marketing scholars and FTC economists. Specifically, the conference will serve as a vehicle for marketing scholars to learn about the FTC’s practice in consumer protection, promote potentially high-impact research in the area of consumer protection and regulation, and introduce FTC economists to some of the cutting-edge research being conducted by marketing scholars. The conference will feature academic research paper sessions and a panel discussion between FTC economists and marketing scholars.


The conference program will run from 8:30 am to 5:30 pm on Friday, September 16, 2016, in the FTC 5th Floor Conference Room at Constitution Center. There will be an optional dinner after the conference starting at 6:00 pm. A fee of $100 will apply to participants who choose to attend the dinner.

Pre-registration for this conference is necessary. To pre-register, please e-mail your name, affiliation, and whether you intend to participate in the conference dinner to marketingconf@ftc.gov. Attendees must register for the conference dinner by September 1. Your email address will only be used to disseminate information about the conference. If space permits, we may allow a very limited number of onsite registrations beginning at 8:15 am on September 16.

The scientific committee for this conference consists of:

K. Sudhir, Editor-in-Chief, Marketing Science and Professor of Marketing, Yale School of Management
Avi Goldfarb, Senior Editor, Marketing Science and Professor of Marketing, University of Toronto
Ganesh Iyer, Senior Editor, Marketing Science and Professor of Marketing, University of California, Berkeley
Ginger Jin, Director, Federal Trade Commission Bureau of Economics and Professor of Economics, University of Maryland
Andrew Stivers, Deputy Director, Federal Trade Commission Bureau of Economics


INFORMS Society of Marketing Science (ISMS)
Federal Trade Commission Bureau of Economics


Constance Herasingh

Those interested in the Marketing Science – Federal Trade Commission Economic Conference on Marketing and Consumer Protection may also be interested in the FTC Workshop: Putting Disclosures to the Test on September 15, 2016.

August 18, 2016

Turn your tough decisions into simple rules

Filed in Encyclopedia, Ideas, Programs, R, Tools



Fast and frugal trees allow you to make rapid decisions based on a few pieces of information. You can easily carry them out in your head. Surprisingly, the accuracy of these decisions rivals that of gold-standard methods like logistic regression, especially when predicting out of sample.

Intrigued? Check out this post by Nathaniel Phillips and the new R package he has created to construct, visualize, and test fast and frugal trees. For all you judgment and decision making researchers out there, Phillips will also be presenting the R package at the annual meeting of the Society for Judgment and Decision Making (SJDM) in Boston in November 2016. If you know R, you could be building fast and frugal trees today!
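To give a flavor of what such a tree looks like, here is a generic sketch in Python (not Phillips’s package or its API; the cues, cutoffs, and labels below are invented for illustration). A fast and frugal tree checks one cue at a time, and every node offers an immediate exit.

```python
def fft_classify(case, nodes, final):
    """Classify with a simple fast-and-frugal tree.

    nodes: list of (cue, threshold, exit_decision) triples. If the cue
    value meets the threshold, return exit_decision immediately;
    otherwise move on to the next cue.
    final: the decision when no node fires.
    """
    for cue, threshold, decision in nodes:
        if case[cue] >= threshold:
            return decision
    return final

# Hypothetical triage tree (cues and cutoffs made up for illustration):
tree = [
    ("st_segment_elevated", 1, "high risk"),  # exit at once if present
    ("chest_pain_score",    7, "high risk"),  # otherwise check pain score
]
patient = {"st_segment_elevated": 0, "chest_pain_score": 4}
print(fft_classify(patient, tree, "low risk"))  # → low risk
```

Because each node can end the decision on its own, most cases are classified after looking at only one or two cues, which is what makes these trees easy to carry out in your head.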

August 9, 2016

Professorship in Operations, Information, and Decisions Department at Wharton

Filed in Jobs



The Operations, Information and Decisions Department at the Wharton School is home to faculty with a diverse set of interests in behavioral economics, decision-making, information technology, information-based strategy, operations management, and operations research. We are seeking applicants for a full-time, tenure-track faculty position at any level: Assistant, Associate, or Full Professor. Applicants must have a Ph.D. (expected completion by June 2017 is preferred but by June 30, 2018 is acceptable) from an accredited institution and have an outstanding research record or potential in the OID Department’s areas of research. The appointment is expected to begin July 1, 2017.
More information about the Department is available at:


Interested individuals should complete and submit an online application via our secure website, and must include:
• A curriculum vitae
• A job market paper
• (Applicants for an Assistant Professor position) Three letters of recommendation submitted by references

To apply, please visit this web site:


Further materials, including (additional) papers and letters of recommendation, will be requested as needed.
To ensure full consideration, materials should be received by November 1st, 2016.

OID Department
The Wharton School
University of Pennsylvania
3730 Walnut Street
500 Jon M. Huntsman Hall
Philadelphia, PA 19104-6340

The University of Pennsylvania is an affirmative action/equal opportunity employer. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.

August 3, 2016

Heuristica: An R package for testing models of binary choice

Filed in Encyclopedia, Ideas, Programs, R, Tools



It just got a lot easier to simulate the performance of simple heuristics.

Jean Czerlinski Whitmore, a software engineer at Google with a long history in modeling cognition, and Daniel Barkoczi, a postdoctoral fellow at the Max Planck Institute for Human Development, have created heuristica: an R package to model the performance of simple heuristics. It implements the heuristics covered in the first chapters of Simple Heuristics That Make Us Smart, such as Take The Best, the unit-weighted linear model, and more. The package also includes data, such as the original German cities data set, which has become a benchmark for testing heuristic models of choice, cited in hundreds of papers.

As with most R packages, a good place to start is the README vignette:

Here’s the heuristica package’s home on CRAN and here’s a description of the package in the authors’ own words:

The heuristica R package implements heuristic decision models, such as Take The Best (TTB) and a unit-weighted linear model. The models are designed for two-alternative choice tasks, such as which of two schools has a higher drop-out rate. The package also wraps more well-known models like regression and logistic regression into the two-alternative choice framework so all these models can be assessed side-by-side. It provides functions to measure accuracy, such as an overall percentCorrect and, for advanced users, some confusion matrix functions. These measures can be applied in-sample or out-of-sample.

The goal is to make it easy to explore the range of conditions in which simple heuristics are better than more complex models. Optimizing is not always better!
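To illustrate the kind of model heuristica implements, here is a minimal Take The Best in plain Python (a sketch, not the package’s API; the cities and cue values below are invented). TTB walks through binary cues from most to least valid, and the first cue that discriminates between the two alternatives decides.

```python
def take_the_best(a, b, cues):
    """Take The Best for a two-alternative choice.

    a, b: dicts mapping cue name -> 1/0 (cue present/absent).
    cues: cue names ordered by descending validity.
    Returns "a", "b", or "guess" if no cue discriminates.
    """
    for cue in cues:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return "guess"

# Which of two (hypothetical) cities is larger? Cue values are invented.
cues_by_validity = ["has_team", "is_capital", "has_university"]
city_a = {"has_team": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_team": 1, "is_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues_by_validity))  # → b
```

Note that TTB ignores every cue after the first discriminating one; comparing that one-reason strategy against regression on the same data is exactly the side-by-side assessment the package is built for.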

July 25, 2016

We’ve bet that Hillary Clinton will win

Filed in Gossip, Ideas



Some experts seem pretty sure Donald Trump will be the next US President. Michael Moore wrote an article entitled 5 Reasons Why Trump Will Win.


Prediction maven Nate Silver warned a few days ago: “Don’t think people are really grasping how plausible it is that Trump could become president. It’s a close election right now.”


Despite this, we think that Hillary Clinton is going to win.

And we’ve put our money where our mouth is. There’s a prediction market called PredictIt in which US citizens in most states can legally bet on events happening or not. There’s an $850 limit on any contract, but you can get around that in the following way.

As the figure up top shows, we’ve placed two bets:

  • We bet $799.50 that the next President will not be a Republican. That is, we bought 1,230 shares of “no” on that contract at 65 cents each. If the next President is indeed not a Republican, we’ll be able to sell those shares for a dollar each, or $1,230. Otherwise we lose our money.
  • We bet $849.87 that Hillary Clinton will be the next President. That is, we bought 1349 shares of “yes” on that contract at 63 cents. If Hillary wins, we’ll be able to sell our shares for $1349. Otherwise we lose our money.

So, we’ve bet $1,649.37. If Hillary wins, we’ll have $2,579 (minus the market’s 10% fee on profits). If Trump or some other Republican wins, we’ll have bupkis.
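The arithmetic behind those numbers can be checked in a few lines. This sketch treats the 10% fee as applying to total profit, a simplification of PredictIt's per-contract fee; the $799.50 outlay at 65 cents corresponds to 1,230 shares, and the $849.87 at 63 cents to 1,349 shares.

```python
def payout(shares_no_gop, price_no_gop, shares_clinton, price_clinton, fee=0.10):
    """Work through the two-contract bet: total cost, gross payout if
    Clinton wins (each winning share redeems for $1 on both contracts),
    and the net after a fee charged on profits only."""
    cost = shares_no_gop * price_no_gop + shares_clinton * price_clinton
    gross = shares_no_gop + shares_clinton   # each winning share pays $1
    net = gross - fee * (gross - cost)       # fee applies to profit only
    return round(cost, 2), round(gross, 2), round(net, 2)

cost, gross, net = payout(1230, 0.65, 1349, 0.63)
print(cost, gross, net)  # → 1649.37 2579 2486.04
```

So the $1,649.37 stake returns $2,579 gross if Clinton wins, roughly $2,486 after the fee on profits, and nothing otherwise.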