
October 21, 2016

Build your own distribution builders

Filed in Encyclopedia, Ideas, Programs, Research News, Tools
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)


The drag-and-drop style Distribution Builder of Goldstein, Johnson, and Sharpe (2008).

A balls-and-buckets style Distribution Builder using Quentin André’s JavaScript tool.

If you read this website, you probably want to elicit probability distributions from people. A Distribution Builder (DB for short) does just that, and elicits them as cognitively-friendly frequency histograms.

The Distribution Builder was created by Dan Goldstein, Bill Sharpe and Phil Blythe in the year 2000 and its first major publication was in 2008. The DB is a digital implementation of a method that was first used, as far as we can tell, by Kabus using poker chips in 1976, as cited in this paper by Goldstein and Rothschild (2014), which found that elicitation using a distribution builder beat conventional methods.

Now for the news. Quentin André has built a JavaScript distribution builder that anyone can use and adapt. It creates balls-and-buckets style distribution builders and has a nice website full of documentation.
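The elicitation idea behind these tools is simple enough to sketch: a respondent allocates a fixed number of balls across outcome buckets, and the counts are read off as a frequency-format probability distribution. Here is a minimal Python sketch of that step (the bucket labels and allocation below are hypothetical illustrations, not output from either tool):

```python
# Sketch of balls-and-buckets elicitation: per-bucket ball counts
# become an estimated probability distribution.

def balls_to_distribution(counts):
    """Convert per-bucket ball counts into estimated probabilities."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("allocate at least one ball")
    return {bucket: n / total for bucket, n in counts.items()}

# A respondent spreads 20 balls across (hypothetical) outcome buckets:
allocation = {"0-5%": 4, "5-10%": 10, "10-15%": 5, "15-20%": 1}
dist = balls_to_distribution(allocation)
# dist["5-10%"] -> 0.5: the respondent judges a 50% chance of that outcome
```

Because every judgment is a count of discrete balls, respondents reason in frequencies rather than abstract probabilities, which is what makes the format cognitively friendly.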

October 12, 2016

Social science does not reward citing outside the field

Filed in Research News



We have talked in the past about how economics does not cite other fields much (see Pieters and Baumgartner, 2002). Are authors rewarded for writing papers this way? In social science, the answer seems to be yes.

A recent article in PLOS ONE, “The Impact of Boundary Spanning Scholarly Publications and Patents,” by Xiaolin Shi, Lada A. Adamic, Belle L. Tseng, and Gavin S. Clarkson, looks at the correlations between a paper’s impact and whether it cites within or across fields:

The question we ask is simple: given the proximity in subject area between a citing publication (paper or patent) and cited publication, what is the impact of the citing publication? If cross-disciplinary information flows result in greater impact, one would see a negative correlation between proximity and impact. On the other hand, if it is within-discipline contributions that are most easily recognized and rewarded, one would observe a positive correlation.

We find that a publication’s citing across disciplines is tied to its subsequent impact. In the case of patents and natural science publications, those that are cited at least once are cited slightly more when they draw on research outside of their area. In contrast, in the social sciences, citing within one’s own field tends to be positively correlated with impact.

Shi X, Adamic LA, Tseng BL, Clarkson GS (2009) The Impact of Boundary Spanning Scholarly Publications and Patents. PLoS ONE 4(8): e6547. doi:10.1371/journal.pone.0006547

October 5, 2016

ACR 2016, Berlin, Germany, Oct 27-30

Filed in Conferences



What: ACR 2016
When: October 27-30, 2016
Where: Berlin, Germany
Conference website: http://www.acrweb.org/acr/
Conference email: ACRBerlin@rsm.nl
Registration: Registration link
Accommodation: Accommodation link

The 2016 North American Conference of the Association for Consumer Research will be held, for the first time, outside of North America. Satisfy your wanderlust and join us in Berlin, Germany from Thursday, October 27 through Sunday, October 30, 2016 for this groundbreaking, boundary-spanning conference.

Berlin is one of the most exciting and interesting capital cities in the world. Its history, both distant and recent, has often been dramatic, leaving many signs and symbols on the city. In Berlin, the legacy of modern political struggles reverberates and feeds an amazing avant-garde in art and design. It is a perfect place to broaden your academic horizons. The Maritim Hotel Berlin occupies a prime spot on the city’s Tiergarten park in the tranquil diplomatic quarter, within walking distance of the “Kurfürstendamm” and the “Potsdamer Platz.” The area houses many parliamentary and governmental institutions, including the Bundestag in the Reichstag building, the new German Chancellery, and the residence of the German President.

Decision Science News will be there with a workshop:

Turkshop: How to experiment with the crowd

Dan Goldstein, Microsoft Research, USA
Gabriele Paolacci, Erasmus University Rotterdam, The Netherlands

Kathryn Sharpe Wessling, The Wharton School, University of Pennsylvania, USA
Jason Roos, Erasmus University Rotterdam, The Netherlands
Eyal Pe’er, Bar Ilan University, Israel

Come hear about the latest research on online experiments on Amazon Mechanical Turk and its alternatives. Check your assumptions about crowdsourced participants. Learn how to design online experiments in a smart way. There will be plenty of time for interactive discussion.

photo credit: https://www.flickr.com/photos/zanaguara/2480378901

September 26, 2016

Power pose co-author: I do not believe that “power pose” effects are real.

Filed in Gossip, Ideas, Research News


Good scientists change their views when the evidence changes

In light of considerable evidence that there is no meaningful effect of power posing, Dana Carney, a co-author of the original article, has come forward to state that she no longer believes in the effect.

The statement is online here, but we record it as plain text below, for posterity.

My position on “Power Poses”

Regarding: Carney, Cuddy & Yap (2010).

Reasonable people, whom I respect, may disagree. However, since early 2015 the evidence has been mounting suggesting there is unlikely any embodied effect of nonverbal expansiveness (vs. contractiveness)—i.e., “power poses”—on internal or psychological outcomes.

As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.

Any work done in my lab on the embodied effects of power poses was conducted long ago (while still at Columbia University from 2008-2011) – well before my views updated. And so while it may seem I continue to study the phenomenon, those papers (emerging in 2014 and 2015) were already published or were on the cusp of publication as the evidence against power poses began to convince me that power poses weren’t real. My lab is conducting no research on the embodied effects of power poses.

The “review and summary paper” published in 2015 (in response to Ranehill, Dreber, Johannesson, Leiberg, Sul, & Weber, 2015) seemed reasonable, at the time, since there were a number of effects showing positive evidence and only 1 published that I was aware of showing no evidence. What I regret about writing that “summary” paper is that it suggested people do more work on the topic which I now think is a waste of time and resources. My sense at the time was to put all the pieces of evidence together in one place so we could see what we had on our hands. Ultimately, this summary paper served its intended purpose because it offered a reasonable set of studies for a p-curve analysis which demonstrated no effect (see Simmons & Simonsohn, in press). But it also spawned a little uptick in moderator-type work that I now regret suggesting.

I continue to be a reviewer on failed replications and re-analyses of the data — signing my reviews as I did in the Ranehill et al. (2015) case — almost always in favor of publication (I was strongly in favor in the Ranehill case). More failed replications are making their way through the publication process. We will see them soon. The evidence against the existence of power poses is undeniable.

There are a number of methodological comments regarding the Carney, Cuddy & Yap (2010) paper that I would like to articulate here.

Here are some facts

1. There is a dataset posted on dataverse that was posted by Nathan Fosse. It is posted as a replication but it is, in fact, merely a “re-analysis.” I disagree with one outlier he has specified on the data posted on dataverse (subject # 47 should also be included—or none since they are mostly 2.5 SDs from the mean. However the cortisol effect is significant whether cortisol outliers are included or not).
2. The data are real.
3. The sample size is tiny.
4. The data are flimsy. The effects are small and barely there in many cases.
5. Initially, the primary DV of interest was risk-taking. We ran subjects in chunks and checked the effect along the way. It was something like 25 subjects run, then 10, then 7, then 5. Back then this did not seem like p-hacking. It seemed like saving money (assuming your effect size was big enough and p-value was the only issue).
6. Some subjects were excluded on bases such as “didn’t follow directions.” The total number of exclusions was 5. The final sample size was N = 42.
7. The cortisol and testosterone data (in saliva at that point) were sent to Salimetrics (which was in State College, PA at that time). The hormone results came back and data were analyzed.
8. For the risk-taking DV: One p-value for a Pearson chi square was .052 and for the Likelihood ratio it was .05. The smaller of the two was reported despite the Pearson being the more ubiquitously used test of significance for a chi square. This is clearly using a “researcher degree of freedom.” I had found evidence that it is more appropriate to use “Likelihood” when one has smaller samples and this was how I convinced myself it was OK.
9. For the Testosterone DV: An outlier for testosterone was found. It was a clear outlier (+3 SDs away from the mean). Subjects with outliers were held out of the hormone analyses but not all analyses.
10. The self-report DV was p-hacked in that many different power questions were asked and those chosen were the ones that “worked.”

Confounds in the Original Paper (Which should have been evident in 2010 but only in hindsight did these confounds become so obviously clear)

1. The experimenters were both aware of the hypothesis. The experimenter who ran the pilot study was less aware but by the end of running the experiment certainly had a sense of the hypothesis. The experimenters who ran the main experiment (the experiment with the hormones) knew the hypothesis.
2. When the risk-taking task was administered, participants were told immediately after whether they had “won.” Winning included an extra prize of $2 (in addition to the $2 they had already received). Research shows that winning increases testosterone (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989). Thus, effects observed on testosterone as a function of expansive posture may have been due to the fact that more expansive postured-subjects took the “risk” and you can only “win” if you take the risk. Therefore, this testosterone effect—if it is even to be believed–may merely be a winning effect, not an expansive posture effect.
3. Gender was not dealt with appropriately for testosterone analyses. Data should have been z-scored within-gender and then statistical tests conducted.

Where do I Stand on the Existence of “Power Poses”

1. I do not have any faith in the embodied effects of “power poses.” I do not think the effect is real.
2. I do not study the embodied effects of power poses.
3. I discourage others from studying power poses.
4. I do not teach power poses in my classes anymore.
5. I do not talk about power poses in the media and haven’t for over 5 years (well before skepticism set in).
6. I have on my website and my downloadable CV my skepticism about the effect and links to both the failed replication by Ranehill et al. and to Simmons & Simonsohn’s p-curve paper suggesting no effect. And this document.


Booth, A., Shelley, G., Mazur, A., Tharp, G., & Kittok, R. (1989). Testosterone, and winning and losing in human competition. Hormones and Behavior, 23, 556–571.
Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women. Psychological Science, 33, 1-4.
Simmons, J. P., & Simonsohn, U. (in press). Power Posing: P-Curving the Evidence. Psychological Science.


September 19, 2016

Pre-conference on debiasing at the SJDM meeting in Boston Nov 18, 2016

Filed in Conferences



Carey Morewedge, Janet Schwartz, Leslie John, and Remi Trudel write:

We invite you to participate in a preconference on Friday, November 18th, 2016 at the Questrom School of Business at Boston University. The preconference will feature a day of talks on debiasing before the annual meeting of the Society for Judgment and Decision Making in Boston, MA. Rather than focusing on how to avoid or circumvent bias in particular contexts, our goal is to extend our field’s conversation about debiasing. To that end, the talks will present our state-of-the-art knowledge on improving decision-making abilities from three perspectives:

  • Who is more or less biased in their decision making?
  • Can we reduce bias within an individual?
  • When should we (not) reduce bias?


  • Richard Larrick (Duke University)


  • Rosalind Chow (Carnegie Mellon University)
  • Jason Dana (Yale University)
  • Calvin Lai (Harvard University)
  • Stephan Lewandowsky (University of Bristol)
  • Carey Morewedge (Boston University)
  • Emily Oster (Brown University)
  • Gordon Pennycook (Yale University)
  • Robert J. Smith (University of Miami, Harvard Law School)

The preconference is from 9am to 4pm and includes invited talks, a data blitz, lunch, and a keynote. The conference will be hosted at the Questrom School of Business at Boston University, 595 Commonwealth Ave., Boston, MA 02215. All registered attendees are welcome to submit a presentation for the data blitz, an hour of 5-minute talks. Please submit a title and abstract of no more than 150 words before September 1st for consideration. Due to space limitations, registration is on a first come, first served basis until all seats are filled. Registration, which covers coffee and lunch, costs $40 for faculty and $20 for students/postdocs.

More information and a portal to sign up for the conference and submit a presentation for the data blitz can be found here: http://blogs.bu.edu/decision/

September 12, 2016

The Lab @ DC is hiring now. Deadline Sept 19, 2016.

Filed in Jobs



It’s a great time to be in decision science / behavioral science: jobs everywhere in academia, industry, and government. Hey, speaking of social science jobs in government, check this out (via David Yokum):

The Lab @ DC is a new scientific team in the Executive Office of the Mayor of the District of Columbia Government. We’re well funded, work across all areas of government, and we’re deeply excited about applied research.

We’re based directly in the Office of the City Administrator—so we’re connected and poised to work on the most important policy and programmatic issues—and we’ll conduct work that is both highly applied and cutting-edge. (One of the first projects, for example, is a large randomized controlled trial of the police body-worn camera program.) We’re working to embed the scientific method into the heart of day-to-day governance, across all policy areas.

We’re about to launch a website and other materials, but we’re already hiring […] it’s going to be very competitive …

The deadline to apply is September 19, 2016. You can find position descriptions for the following:

Take a look and please share with colleagues who you think would be a good fit. Note the initial application is quick: just drop a resume and complete an HR questionnaire before Sept. 19th [2016]; we’ll then follow up with more details.

If you or colleagues have general questions, you can also reach out at thelab@dc.gov.

September 5, 2016

FTC public workshop on Putting Disclosures to the Test, Sept 15, 2016

Filed in Conferences



View this announcement online

Decision Science News will be in the house!

The Federal Trade Commission will host a public workshop in Washington, DC on September 15, 2016 to examine the testing and evaluation of disclosures that companies make to consumers about advertising claims, privacy practices, and other information.

Effective disclosures are critical in helping consumers make informed decisions in the marketplace.

Many advertisers have used disclosures in an attempt to prevent their advertisements from being deceptive. Disclosures must be crafted with care both with respect to their language and presentation. Disclosures used in the marketplace are sometimes ineffective. Commission staff has recommended that disclosures be tested for effectiveness.

Disclosures are also challenging in the privacy arena, whether disclosing to consumers that their physical location or online interactions are being tracked, or explaining privacy practices when consumers sign up for a service. Privacy policies are often long and difficult to comprehend and privacy-related icons may fail to communicate information meaningfully to consumers. Furthermore, the accompanying mechanisms for consumers to provide informed consent or exercise choices about the use of their data may also be confusing. The Commission has long encouraged the development and testing of shorter, clearer, easier-to-use privacy disclosures and consent mechanisms.

The FTC has issued guides to help businesses avoid deceptive claims, such as guidance related to endorsements, environmental claims, fuel economy advertising, and the jewelry industry. Often the guidance presents options for qualifying claims to avoid deception. In developing guides, the Commission has sometimes relied on consumer research to gauge whether specific disclosures can be used to qualify otherwise misleading claims.

The FTC has a long commitment to understanding and testing the effectiveness of consumer disclosures, and is especially interested in learning about the costs and benefits of disclosure testing methods in the digital age. A number of factors affect the effectiveness of disclosures, including whether they contain the most essential information and whether consumers notice them, direct their attention toward them, comprehend them, and are able to use that information in their decision making. Some testing methods are more appropriate than others for evaluating these factors.

The workshop is aimed at encouraging and improving the evaluation and testing of disclosures by industry, academics, and the FTC. The FTC’s workshop will explore how to test the effectiveness of these disclosures to ensure consumers notice them, understand them and can use them in their decision-making. It is intended to further the understanding of testing and evaluation of both offline and online consumer disclosures, including those delivered through icons, product labels, short text, long text, audio or video messages, interactive tools, and other media. Topics may include evaluation criteria, testing methodologies and best practices, case studies, and lessons learned from such testing.

No registration is necessary to attend. The workshop will be webcast and a link will be available here on the day of the event.

An agenda is online.

September 1, 2016

Winter School on Bounded Rationality in India, January 9-15, 2017

Filed in Programs



The T A Pai Management Institute (TAPMI), in collaboration with the Max Planck Institute for Human Development (MPIB), is excited to announce the Winter School on Bounded Rationality at TAPMI, Manipal (Karnataka), India, to be held from January 9–15, 2017. The winter school aims to foster understanding of the process and quality of human decisions and to apply this knowledge to the real world, enabling people to make better decisions in a complex world. To this end, it offers a unique forum for decision-making scholars and researchers from various disciplines to share their approaches, discuss their research and applications, and inspire each other.

Gerd Gigerenzer
Director of the Center for Adaptive Behavior and Cognition and the Harding Center for Risk Literacy, Max Planck Institute for Human Development, Germany.

The winter school will focus on a diverse set of topics:

  • Bounded Rationality, Ecological Rationality, Social Rationality
  • Behavioral Economics and Finance
  • Heuristics
  • Fast and Frugal Trees
  • Risk and Risk Literacy
  • Medical Decision Making

Seminars, talks, panel discussions, workshops, poster sessions, and social events will take place, allowing participants to learn and develop new ideas in broad areas of judgment and decision making, facilitated by frequent interactions with the teaching faculty members.

The deadline for application is September 25, 2016. Participation will be free, accommodation will be provided, and travel expenses will be partly reimbursed. Winter School web link (includes contact details and application procedure):

For further questions, email us at winterschool@tapmi.edu.in. We look forward to seeing you at Manipal!

August 24, 2016

FTC and Marketing Science joint conference on consumer protection in DC, Sept 16, 2016

Filed in Conferences



View this announcement online

The Federal Trade Commission’s Bureau of Economics and Marketing Science are co-organizing a one-day conference to bring together scholars interested in issues at the intersection of marketing and consumer protection policy and regulation. As the primary consumer protection law enforcement agency, the FTC has benefited from the marketing literature in its long history of case and policy work. The goal of the conference is to promote an intellectual dialogue between marketing scholars and FTC economists. Specifically, the conference will serve as a vehicle for marketing scholars to learn about the FTC’s practice in consumer protection, for promoting potentially high-impact research in the area of consumer protection and regulation, and for introducing FTC economists to some of the cutting-edge research being conducted by marketing scholars. The conference will feature academic research paper sessions and a panel discussion between FTC economists and marketing scholars.


The conference program will run from 8:30 am to 5:30 pm on Friday, September 16, 2016, in the FTC 5th Floor Conference Room at Constitution Center. There will be an optional dinner after the conference starting at 6:00 pm. A fee of $100 will apply to participants who choose to attend the dinner.

Pre-registration for this conference is necessary. To pre-register, please e-mail your name, affiliation, and whether you intend to participate in the conference dinner to marketingconf@ftc.gov (link sends e-mail). Attendees must register for the conference dinner by September 1. Your email address will only be used to disseminate information about the conference. If space permits, we may allow a very limited number of onsite registrations beginning at 8:15 am on September 16.

The scientific committee for this conference consists of:

K. Sudhir, Editor-in-Chief, Marketing Science and Professor of Marketing, Yale School of Management
Avi Goldfarb, Senior Editor, Marketing Science and Professor of Marketing, University of Toronto
Ganesh Iyer, Senior Editor, Marketing Science and Professor of Marketing, University of California, Berkeley
Ginger Jin, Director, Federal Trade Commission Bureau of Economics and Professor of Economics, University of Maryland
Andrew Stivers, Deputy Director, Federal Trade Commission Bureau of Economics


INFORMS Society of Marketing Science (ISMS)
Federal Trade Commission Bureau of Economics


Constance Herasingh

Those interested in the Marketing Science – Federal Trade Commission Economic Conference on Marketing and Consumer Protection may also be interested in the FTC Workshop: Putting Disclosures to the Test on September 15, 2016.

August 18, 2016

Turn your tough decisions into simple rules

Filed in Encyclopedia, Ideas, Programs, R, Tools



Fast and frugal trees allow you to make rapid decisions based on a few pieces of information. You can easily carry them out in your head. Surprisingly, the accuracy of these decisions rivals that of decisions made by gold-standard methods like logistic regression, especially when predicting out of sample.

Intrigued? Check out this post by Nathaniel Phillips and the new R Package he’s created to create, visualize and test fast and frual trees. For all you judgment and decision making researchers out there, Phillips will also be presenting the R package at the annual meeting of the Society for Judgment and Decision Making (SJDM) in Boston in November 2016. If you know R, you could be building fast and frugal trees today!