
January 16, 2019

Summer Institute on Bounded Rationality, June 11 – 19, 2019, Max Planck Institute Berlin

Filed in Conferences, Programs
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

APPLICATION DEADLINE MARCH 17, 2019

We are delighted to announce that the 18th Summer Institute on Bounded Rationality will take place June 11 – 19, 2019, at the Max Planck Institute for Human Development in Berlin, Germany.

The Summer Institute brings together talented young researchers and renowned scientists from around the globe and aims to spark a dialogue about decision making under the real-world constraints of limited time, information, or computational power. The Summer Institute offers a forum for young scholars from various disciplines to share their approaches, discuss their research, and inspire each other. The program will cover the fundamentals, methodology, and recent findings on bounded rationality.

The theme of 2019 is bounded rationality in a digital world. This year, we will approach the topic of bounded rationality in the context of recent technological developments and rapidly changing informational environments, as well as the new challenges they present to human rationality and decision making. The keynote talks will be given by Stephan Lewandowsky, University of Bristol; Oliver Brock, Technical University of Berlin; and Iyad Rahwan, Max Planck Institute for Human Development in Berlin.

On behalf of the directors of the Summer Institute, Gerd Gigerenzer and Ralph Hertwig, we invite young scholars of decision making from all fields to apply. Participation is free, accommodation is provided, and travel expenses will be partly reimbursed.

Applications are open until March 17, 2019.

Apply here: http://bit.ly/2npmmNT
Website: http://bit.ly/2DPGYcu

Please feel free to email any questions you might have to si2019@mpib-berlin.mpg.de.

January 9, 2019

Behavioral Science and Policy Association (BSPA) June 14, 2019

Filed in Conferences

DEADLINE MARCH 9, 2019

The annual conference of the Behavioral Science & Policy Association (BSPA) will be held on June 14, 2019 in New York City, NY. Attendees include leading behavioral scientists, policy makers, behavioral science consultants, private and public sector executives, and members of the media.

BSPA seeks proposals by March 9, 2019 for short (TED talk style) presentations highlighting research in six key areas in which behavioral scientists could have significant influence on policy. These include:

Education & Culture
Energy & Environment
Financial Decision Making
Health
Justice & Ethics
Management & Labor

The short presentation session is designed to inform and influence academics, policy makers, and managers. Presentations may demonstrate recent key research findings (potentially from multiple papers) with meaningful implications for policy and practice, and need not present new work in progress. These presentations should not be highly technical.

Click here to learn more and to submit: https://behavioralpolicy.org/wp-content/uploads/2019/01/BSPA-Call-For-Presenters-2019.pdf

December 31, 2018

The SJDM Newsletter is ready for download

Filed in SJDM

SOCIETY FOR JUDGMENT AND DECISION MAKING NEWSLETTER

The quarterly Society For Judgment and Decision Making newsletter is ready for download:

http://sjdm.org/newsletters/

December 26, 2018

Replication Markets Team Seeks Journal Partners for Replication Trial

Filed in Ideas, Programs, Research News

TEST WHETHER PREDICTION MARKETS PREDICT REPLICATION AT YOUR JOURNAL


This week we present a letter from a group embarking on an interesting project in which journals work with the team to test how well prediction markets predict whether experiments will replicate.

“Recent attempts to systematically replicate samples of published experiments in the social and behavioral sciences have revealed disappointingly low rates of replication. Many parties are discussing a wide range of options to address this problem.

Surveys and prediction markets have been shown to predict, at rates substantially better than random, which experiments will replicate. This suggests a simple strategy by which academic journals could increase the rate at which their published articles replicate. For each relevant submitted article, create a prediction market estimating its chance of replication, and use that estimate as one factor in deciding whether to publish that article.

The Replication Markets Team seeks academic journals to join us in a test of this strategy. We have been selected for an upcoming DARPA program to create prediction markets for several thousand scientific replication experiments, many of which could be based on articles submitted to your journal. Each market would predict the chance of an experiment replicating. Of the already-published experiments in the pool, approximately one in ten will be sampled randomly for replication. (Whether submitted papers could be included in the replication pool depends on other teams in the program.) Our past markets have averaged 70% accuracy; the work is listed at the Science Prediction Market Project page and has been published in Science, PNAS, and Royal Society Open Science.

While details are open to negotiation, our initial concept is that your journal would tell potential authors that you are favorably inclined toward experiment article submissions that are posted at our public archive of submitted articles. By posting their article, authors declare that they have submitted their article to some participating journal, though they need not say which one. You tell us when you get a qualifying submission, we quickly tell you the estimated chance of replication, and later you tell us of your final publication decision.

At this point in time we seek only an expression of substantial interest that we can take to DARPA and other teams. Details that may later be negotiated include what exactly counts as a replication, whether archived papers reveal author names, how fast we respond with our replication estimates, what fraction of your articles we actually attempt to replicate, and whether you privately give us any other quality indicators obtained in your reviews to assist in our statistical analysis.

Please RSVP to: Angela Cochran, PM, acochran@replicationmarkets.com, 571 225 1450

Sincerely, the Replication Markets Team

Thomas Pfeiffer (Massey University)
Yiling Chen, Yang Liu, and Haifeng Xu (Harvard University)
Anna Dreber Almenberg & Magnus Johannesson (Stockholm School of Economics)
Robin Hanson & Kathryn Laskey (George Mason University)”

Photo Credit: https://flic.kr/p/5VdQP3

December 12, 2018

One-year US government fellowships doing large-scale randomized studies

Filed in Jobs

DEADLINE TO APPLY DECEMBER 30, 2018

The GSA Office of Evaluation Sciences (OES) is currently accepting applications for one-year fellowships beginning in October 2019 in Washington, D.C.

OES is a team of applied researchers tasked with building insights from the social and behavioral sciences into federal programs, and testing and learning what works. The work and role of OES are unique – directly designing, implementing and analyzing evidence-based interventions and randomized evaluations in a large-scale federal policy environment. OES Fellows apply promising interventions at a national scale, run large-scale tests reaching millions of people, and work closely with key decision makers in government. Fellows shape their own high-impact portfolio of work, design and direct projects, author academic publications, and benefit from a dynamic team and flexible Federal work environment.

Over the past four years, OES has completed over 60 randomized evaluations with agency partners. For more information on our portfolio to date, go to https://oes.gsa.gov/work/.

Please consider applying and pass this opportunity along. The deadline to submit an application is 11:59 pm EST Sunday, December 30, 2018.

December 5, 2018

Bayesian Crowd Conference, June 24-25, 2019

Filed in Conferences

CALL FOR PAPERS. DEADLINE MARCH 3, 2019

On June 24-25, 2019, Erasmus University Rotterdam will host the 14th Annual Tinbergen Institute Conference in Rotterdam, The Netherlands. This year’s theme is “Bayesian Crowd”.

Topics
This conference is unique in that it brings together people from different backgrounds (economics, psychology, computer sciences, decision analysis) who work on truth-telling or wisdom of crowds, broadly speaking. It offers two days to mingle and present the latest breakthroughs on Bayesian and behavioral methods to elicit true answers from individuals and crowds:

  • scoring and incentives
  • Bayesian truth-serum and related mechanisms
  • behavioral approaches to promote truth-telling
  • crowds
  • individuals
  • wisdom of crowds and judgment aggregation
  • prediction markets
  • identifying experts

Keynote Speakers
David Budescu, Fordham University
Anna Dreber Almenberg, Stockholm School of Economics
Boi Faltings, École polytechnique fédérale de Lausanne

Abstract Submission Deadline: March 3, 2019 (Sunday) midnight.

The organizing committee will select papers for presentation at the conference based on abstracts.

All abstracts should be submitted online at http://bayesiancrowd.com/abstract-submission.
We encourage early submissions; decisions about abstracts will be communicated to authors on a rolling basis, and by March 17, 2019 at the latest.

Travel awards will be available for PhD students and junior faculty, covering flights and accommodation. Applications for travel awards should be made with the abstract submission.

More details about the conference, registration, travel awards and accommodations can be found at:
http://bayesiancrowd.com/

If you have any additional questions, please do not hesitate to contact us by email: Christina Månsson, Tinbergen Institute, tinbergen at tinbergen.nl

Organizing Committee
Aurelien Baillon, Erasmus University Rotterdam
Drazen Prelec, MIT
Dennie van Dolder, VU Amsterdam
Tong Wang, Erasmus University Rotterdam

November 28, 2018

Reinforcement Learning and Decision Making Conference, Montreal, July 7-10, 2019

Filed in Conferences

RLDM 2019 WORKSHOP SUBMISSIONS DUE MARCH 1, 2019

The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2019) will be held in Montréal, Canada, from July 7-10, 2019. Workshops are a new addition to the RLDM program, and will be held on the last afternoon of the meeting, Wednesday July 10, from 1pm to 5pm.

We invite researchers interested in chairing one of these workshops to submit workshop proposals. The goal of the workshops is to encourage interdisciplinary discussion and provide an informal forum for researchers to discuss important research questions and challenges. Controversial issues, open problems, and comparisons of competing approaches are encouraged as workshop topics.

We also invite both standard and innovative/non-standard formats, such as invited oral presentations, panel discussions, data modeling challenges, hackathons, debates, and workshops aimed at improving communication between researchers from different fields rather than presenting novel research.

Workshop organizers have several responsibilities, including coordinating workshop participation and content as well as providing the program for the workshop in a timely manner.

Due to the short length of the workshops, we discourage poster sessions.

Submission Instructions

To submit a workshop proposal, please e-mail your submission to Sam Gershman (gershman at fas.harvard.edu) by 23:59 UTC on Friday, March 1st, 2019. Notifications will be provided by March 15, 2019.

Proposals should clearly specify the following:
* Workshop title and acronym if available
* A brief description of the workshop focus, emphasizing why this workshop would appeal to the diverse RLDM audience
* A short description of the format of planned activities (talks, panels, invited speakers, activities, etc.)
* A list of invited speakers who have confirmed their willingness to participate
* A list of organizers with email addresses and web page URLs

RLDM will not be able to provide travel funding for workshop speakers. In other venues, some workshops have sought and received funding from external sources to bring in outside speakers, and RLDM is open to that model.

November 21, 2018

JDM Pre-Conference at SPSP Portland, Feb 7, 2019

Filed in Conferences

DEADLINE DECEMBER 1, 2018

This is a reminder about the Judgment and Decision Making Preconference in Portland on February 7th, 2019. The JDM Preconference will explore research at the intersection of social and personality psychology and judgment and decision making research. We hope that you plan to join us there!

The deadline for poster and data blitz submissions is only a few days away on December 1st, 2018 at 11:59pm EST.

To submit a poster for consideration, please send the title of your poster, all authors, a 200 word (max) abstract, and one figure or table of data to jdmspsppreconference@gmail.com.

Selected presentations will also be given the opportunity to present a 10-minute “data blitz” talk during the preconference. To be considered for the data blitz, please indicate in your poster submission that the first author is a student.

Our scheduled speakers include:

  • Craig Fox
  • George Loewenstein
  • Christopher Olivola
  • Jane Risen
  • Juliana Schroeder
  • Anuj Shah
  • Mary Steffel
  • Abigail Sussman

To register for the conference, or for more information, please visit the preconference website at: meeting.spsp.org/preconferences/judgment

Hope to see you all in Portland!

Organizers:
Alex Imas, David Tannenbaum, and Elanor Williams

November 14, 2018

PhD students in decision making: Apply to win the de Finetti Award

Filed in Programs ,Research News

DEADLINE MARCH 29, 2019

The European Association for Decision Making’s de Finetti Award has, since 1995, recognized outstanding work by PhD student researchers in the area of decision making.

The winner will receive a prize of 750 Euros and a certificate, and will be asked to give a presentation at SPUDM 2019 in Amsterdam: http://www.spudm2019.com

Please see the website https://www.spudm2019.com/de-fenetti-award for information on eligibility and how to submit.

USEFUL INFORMATION

– Only PhD students who did not have their PhD at the time of the last SPUDM conference (August 2017) are eligible.

– The PhD student should be the sole or first author, and the work should be mainly that of the student. If co-authored, the paper must be accompanied by a signed statement (PDF) from the co-author(s) confirming that the student is the primary source of ideas and the main author of the paper.

– The paper can be either published or unpublished at the time of submission for the de Finetti competition. There is no longer a requirement that the paper be unpublished.

– Submissions in dissertation format will not be considered, but articles based on a dissertation are encouraged.

– Only one paper may be submitted per applicant.

– There will be blind review. Applicants are asked for two versions of the submitted paper: an anonymous and a non-anonymous version.

– The anonymous version should be formatted as a manuscript (i.e., not as a published journal article) with figures and tables integrated into the text, and with names, affiliations, and author notes removed for blind review.

– The non-anonymous version should contain names, affiliations, and author notes and can be formatted however the author chooses.

The papers will be evaluated by a committee appointed by the Board of EADM consisting of Tim Pleskac (chair), Johann Majer (previous award winner), Ellen Peters, Tim Rakow, Tomás Lejarraga, and Ilana Ritov.

To be considered for this award, papers and statements should be submitted before Friday, March 29, 5:00 PM Central U.S. Daylight Time. Please submit the papers at this link: https://kusurvey.ca1.qualtrics.com/jfe/form/SV_eanKRMTwn9EdECF

Winners will be notified by early June 2019.

Please contact Tim Pleskac (pleskac at ku.edu) with any questions.

November 5, 2018

Reflections on the review process

Filed in Ideas ,Research News

VIEWS FROM A VETERAN EDITOR

After 13 years of editing journals (Journal of Marketing, International Journal of Research in Marketing, Journal of Service Research), Roland Rust wrote up some of his thoughts on the review process. We quote here some bits we found interesting. You can read the full article here: Reflections on the review process

Observation #1: there are too many review rounds

I was having dinner last week with a professor from one of the world’s leading universities who was discussing a paper he has had under review for five years at one of the field’s leading journals. The paper recently received yet another risky revision decision in the fourth round. Such delay, although no doubt well-intentioned on the part of the editor, harms the field, because it slows down the diffusion of knowledge. I would be willing to bet that 90+% of the paper’s current (and eventual) value was present in the initial submission.

To combat this problem, some journals have attempted to institute a 2-round policy. The idea is that the paper should achieve at least conditional acceptance in the second round. Such a policy may have unintended consequences. Given that the top journals all have very high standards for rigor, the only papers that will make it through in two rounds are those that are already highly polished at initial submission, and only “safe” papers exploring standard topics in standard ways will have a chance.

Observation #2: perfection is valued more than timeliness

The example I gave previously shows the downside of this value system. If it takes 4–5 years to get a paper through the review process, there is no way that the marketing literature can respond in a timely way to fast-moving topics. The Computer Science field combats this by counting proceedings papers more than journal articles, and making fast decisions on those proceedings papers. By marketing’s standards, the CS review process seems “fast and loose.” But at least it is fast, and timely work can surface quickly. By contrast, the marketing literature always seems several years behind.

My serial co-author Preston McAfee told me about a journal he worked with that had a no-revision policy. I believe the idea is that you send the paper in and it either gets (a) rejected or (b) accepted conditional on making certain changes. If there were more journals like this, time would be saved by authors, reviewers, editors, and support staff.

I have heard of professors who urge their students to take shoddy work and “just send it in,” planning to win the reviewers over across multiple rounds of review. There’s an incentive not to do this when you know that your paper will either be in or out.

Since I moved to industry labs, I’ve published more and more in Computer Science. In CS, conference proceedings, not journals, are the important things. You get tenure for publishing in conference proceedings, which can be as selective as, or more selective than, the top journals in marketing or psychology. The conference proceeding model works as follows. You submit a manuscript. You get reviews. You write a reply to the reviewers (without revising the paper). You then get (a) rejection or (b) conditional acceptance. Every process has its tradeoffs. CS certainly publishes a number of “reinventions” and flawed analyses, but the upside is that it tends to capture all the good stuff. The crud gets ignored and the good ideas get built upon. It’s hard to argue that psych and marketing are making more cumulative progress than computer science is.

Recommendation #1: accept papers quicker

If there is a timeliness value for ideas, then editors need to recognize that getting that last 1% of rigor may result in a net loss of value. This means that it is often best for the editor to take a stand and accept a paper before everybody on the review team signs off. This means that we need to appoint editors who are secure in their standing in the field, and who are strong enough to make decisions that some AE’s or reviewers may disagree with.

Higher recall but slightly lower precision is the gist of the CS model.

Recommendation #2: editors need to be the importance police

Given the tendency of reviewers to simply attack papers and produce a list of problems, the editor needs to counteract the reviewers’ almost exclusive focus on rigor by insisting on problem importance. This can also sometimes mean rejecting an unimportant paper for which the reviewers find few problems. It can also mean giving a paper more of a chance if it is on an important topic. I recommend that papers on important and timely topics should be consciously given more slack with respect to expectations of rigor.

Hard to know “important” when you see it, though.

Recommendation #3: editors need to be willing to overrule the review team

In my view, a good editor respects the review team, but sees the reviews as advisory. The review decision should not be a vote count. In many cases I have given a paper a second round, even with a unanimous negative appraisal by the review team, if the paper was on a very important and timely topic. I have not overridden a unanimous rejection recommendation in the second or later rounds, because it is incumbent on the author(s) to eventually persuade somebody, but otherwise I have not let a negative reviewer stop a paper, if the paper is important enough, and the negative reviewer has not revealed what I believed to be a fatal flaw. Again, the editor needs to be secure enough to make such determinations.

Agreed: Just as you shouldn’t take a vote of a three-person focus group to decide to launch a product, you shouldn’t use the vote of three reviewers to decide on a paper.

Image source: https://flic.kr/p/nCcSpm