
January 4, 2017

Apply behavioral insights, measure impact, and help make the US Government work for people

Filed in Jobs
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)


The Office of Evaluation Sciences (OES) is currently accepting applications for fellowships beginning in October 2017 in D.C. Based at the General Services Administration, OES is a team of applied researchers tasked with building insights from the social and behavioral sciences into federal programs, and testing and learning what works. OES partners with federal agencies to evaluate the effectiveness of new evidence-based interventions on program outcomes and provides agencies with evidence to make informed programmatic decisions.

Over the past two years, OES has completed over 30 randomized evaluations with agency partners. OES has made major strides serving agencies and improving federal programs by applying and testing the impact of behavioral insights on a diverse range of agency outcomes. Dozens of agencies have joined this effort, creating innovative partnerships to tackle some of the most pressing challenges in the United States and abroad. OES has rigorously tested insights on diverse agency priorities such as promoting retirement security, responding to climate change, assisting job seekers, helping families get health coverage and stay healthy, and improving the effectiveness and efficiency of Government operations. For more information on our portfolio to date, go to https://oes.gsa.gov/work/.

Fellowship Details
OES is accepting applications for full-time fellowships starting in October 2017, based at the General Services Administration in Washington, D.C. Most Fellows join OES on loan from academic, nonprofit, or government offices on either a reimbursable or non-reimbursable basis, typically for one to two years. Fellows have come from a variety of universities (e.g., City University of New York, Northeastern University, North Carolina State University, Reed College, University of Arizona, and University of Washington), non-profits (such as policy think tanks), and federal departments (e.g., the Department of Education, the Department of Agriculture, and the Department of Housing and Urban Development). Other types of federal appointments may be offered on a limited basis.

The OES team combines academic and research expertise with experience implementing and evaluating evidence-based interventions in complex settings. Responsibilities of OES Fellows include:

  • Understanding agency objectives and priorities, identifying opportunities to translate findings from the social and behavioral sciences into concrete recommendations.
  • Driving implementation on 3-5 projects at a time, including collaborating and communicating with agency partners to ensure that: intervention ideas and the pilot design meet agency goals; field experiments are implemented as planned; and the implications of results are clearly understood.
  • Working directly with agency collaborators to design and rigorously test interventions.
  • Performing data analyses and interpretation.
  • Distilling findings into reports, policy memos, and academic publications.
  • Assisting, as needed, on additional projects being managed by other team members.
  • Attending weekly team meetings, providing updates on project status, and being generally available to collaborate on and contribute to internal team tasks.
  • Representing the team by attending and presenting at internal government and external talks, conferences, and workshops.

Applicant Profile
OES team members must possess a unique set of technical and professional skills. This includes knowledge of at least one field within the social and behavioral sciences, the ability to creatively apply research knowledge within the federal government setting, the ability to design and manage the day-to-day operations of a large operational field trial, and exceptional communication and interpersonal skills.

OES is currently recruiting for the following two roles with associated experience:

  • Fellows have substantial expertise in the social and behavioral sciences field. Typically they are researchers with a PhD and publication record in a social or behavioral science field (e.g., economics, psychology, political science, statistics, sociology, public policy, business, etc.).
  • Associate Fellows are typically pursuing a PhD in the social and behavioral sciences field, have recently completed a PhD or post-doc, or have a Master’s Degree plus two or more years of relevant experience.

Additionally, applicants must possess:

  • General knowledge of applied social and behavioral sciences and specialized knowledge of at least one domain of study within the social and behavioral sciences.
  • Ability to think creatively about how insights from the social and behavioral sciences can be translated into concrete interventions that are feasible within specific Federal programs.
  • Curiosity and willingness to learn about federal agencies and their unique practical and regulatory constraints.
  • Knowledge of evaluation design and analysis strategies, such as randomized controlled trials.
  • Statistical competency in at least one data analytic programming language (e.g., R, Stata, Matlab, SAS, Python).
  • Ability to effectively explain technical concepts to broad audiences, orally and in writing.
  • Strong and concise writing skills, including under tight deadlines.
  • Excellent project management and organizational skills.
  • Flexibility, self-motivation, and the ability to manage multiple tasks efficiently in a team.

Preferred qualifications include one or more of the following:

  • Significant experience conducting randomized evaluations in field settings.
  • Experience working with government programs, policies, operations, and/or data.
  • Advanced statistical and data skills, including experience handling large data sets.
  • Professional design skills (e.g. interaction design, visual communication design, etc.)
  • Expertise in one or more U.S. domestic policy sectors.

Application Details
Applicants may apply online via https://oes.gsa.gov/apply/. The deadline to submit is 11:59 p.m. EST Sunday, January 15, 2017. Finalists will be invited to an interview process that will include a writing exercise, up to two stages of interviews, and an in-person research presentation. We expect final decisions to be communicated to candidates by mid-March 2017.

December 27, 2016

The 55th Edwards Bayesian Research Conference

Filed in Conferences



The 55th Edwards Bayesian Research Conference will be held February 16-18, 2017, on the campus of California State University, Fullerton.

Presentations at this conference may come from any area related to judgment and decision making and are NOT limited to Bayes' theorem or Bayesian statistics.

Submissions are due by January 9, 2017.

We maintain certain traditions that have made these meetings so enjoyable. As Ward Edwards put it:

The atmosphere is informal, the discussion can get intense, and many of the best debates take place during coffee breaks or in the hospitality suite at the end of the day. This conference is a good place to try out your latest, wildest set of ideas on a kindly, knowledgeable, and critical audience.

Hotel rooms will be available at an excellent rate at the Fullerton Marriott, which is across the street from the meeting room.

Visit the conference website for more information.

Questions can be sent to Daniel Cavagnaro: dcavagnaro at fullerton.edu

December 21, 2016

The SJDM Newsletter is ready for download

Filed in Conferences, Ideas, Jobs, Research News, SJDM



The quarterly Society for Judgment and Decision Making newsletter can be downloaded from the SJDM site:


Dan Goldstein
SJDM Newsletter Editor

December 16, 2016

The typical American lives only 18 miles from mom, but …

Filed in Ideas, Tools



The NYT had a nice infographic entitled “The typical American lives only 18 miles from mom”. They’re saying the median distance to mom is 18 miles.

But when you look at the data in greater depth (thanks to the graph in the article, which we reproduce above), it looks like the mean distance is over 200 miles. That’s the crow-flies distance from DC to New York, a 4 hour drive sans traffic. You might say the median is the better statistic, and we agree: relatively few people living on the opposite coast from their parents drive the average up. However, the downside of the median is that it doesn’t let you appreciate how far from their parents a third of the country lives. In the spirit of putting numbers into perspective to improve comprehension, here are some drive-time equivalents. Values are rounded. Drive times are from online maps and do not include traffic delays or even a single pit stop.

  • 1 in 3 lives over 100 miles from mom (New York to Philadelphia, a 2.5 hour drive)
  • 1 in 4 lives over 200 miles (DC to New York, a 4 hour drive)
  • 1 in 5 lives over 350 miles (DC to Boston, a 7.5 hour drive)
  • 1 in 6 lives over 600 miles (DC to Chicago, a 10.5 hour drive)
  • 1 in 10 lives over 900 miles (DC to Minneapolis, a 16.5 hour drive)
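The gap between an 18-mile median and a 200-plus-mile mean is a textbook property of right-skewed distributions, which a few lines of code can illustrate. The lognormal sample below is synthetic, with parameters chosen only to mimic a skewed distance distribution, not fit to the NYT data:

```python
import random
import statistics

# Synthetic, right-skewed "distance to mom" sample (miles).
# Parameters are illustrative only -- not estimated from the article's data.
random.seed(0)
distances = [random.lognormvariate(3.0, 2.0) for _ in range(100_000)]

median = statistics.median(distances)
mean = statistics.mean(distances)

# A relatively small number of cross-country movers pulls the mean
# far above the median.
print(f"median: {median:.0f} miles, mean: {mean:.0f} miles")
```

The same mechanism is at work in the real data: most people live close to mom, but the long tail of coast-to-coast distances dominates the average.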

When the distances are expressed as familiar drives, people live farther from mom than it first seems. Also, as mentioned in the article,

  • A lot of people take care of their parents
  • A lot of people’s parents help take care of their kids
  • A lot of people are poor or middle class, which increases the chance they’ll care for parents or grandkids
  • A lot of people live in super-dense areas

… which also make the surprising stat seem less surprising.

December 7, 2016

Boulder Summer Conference on Consumer Financial Decision Making: Submissions due Dec 15, 2016

Filed in Conferences


Abstract Submission Deadline December 15, 2016

Submitting Abstracts

To submit an extended abstract (1 page, single-spaced PDF with author information), please visit the conference website and click on the Submit Paper Abstract link.


Conference Overview

The Boulder Summer Conference in Consumer Financial Decision Making, now in its 8th year, is the world’s foremost conference for discussion of interdisciplinary research on consumer financial decision-making. Consumer welfare is strongly affected by household financial decisions large and small: choosing mortgages; saving to fund college education or retirement; using credit cards to fund current consumption; choosing how to “decumulate” savings in retirement; deciding how to pay for health care and insurance; investing in the stock market; and managing debt in the face of financial distress. This conference brings together outstanding scholars from around the world in a unique interdisciplinary conversation with regulators, business people in financial services, and consumer advocates working on problems of consumer financial decision-making.

Our goal is to stimulate cross-disciplinary conversation and improve basic and applied research in the emerging area of consumer financial decision-making. This research can inform our understanding of how consumers actually make such decisions and how consumers can be helped to make better decisions by innovations in public policy, business, and consumer education. Please see the 2016, 2015, and 2014 programs on the conference website to see abstracts of research by scholars in economics, psychology, sociology, behavioral finance, consumer research, decision sciences, behavioral economics, and law. Our format allows a very high level of opportunity for conversation and interaction around the ideas presented.

Conference Format

We begin with a keynote session late Sunday afternoon about how consumer financial behavior is influenced by credit scoring and use of credit scores for non-lending purposes. The keynote session will be followed by a reception and poster session. Monday and Tuesday we have ten 75-minute sessions with two related papers from different disciplines, with discussion by an industry or government expert or a scholar from a third field. We begin with financial decision making of consumers in distress because of poor financial decision-making or situational stress. We then turn our focus to more basic processes that guide everyday consumer financial decision-making, both good and bad. Throughout the conference we schedule significant time for informal interaction outside of the sessions.

The conference program committee will select papers for presentation at the conference based on extended abstracts. Selected papers must not be published prior to the conference. Authors submitting an abstract must commit to have a paper that is complete and available for review by discussants one month prior to the conference. Selections will be based on quality, relevance to consumers’ financial decision-making, and contribution to breadth of topics and disciplinary approaches. We consider not just the individual merits of the papers, but how they pair with another submission from a scholar in a different field. The organizers will invite authors of the best papers not selected for presentation at a plenary session to present their work at the Sunday evening poster session.

Registering for the Conference and Booking a Room

There are links on the conference website for booking at the St. Julien Hotel and for registering for the conference.

The conference will be held in the St. Julien Hotel & Spa. We have negotiated very attractive room rates for conference attendees (and families). Please note that the Conference has not guaranteed any rooms; rather, they are on a “first come” basis. We encourage you to book your rooms as soon as you can. Boulder is a popular summer destination and rooms go quickly at the St. Julien Hotel.

December 1, 2016

Tools, methods to improve decision making, outcomes, information communication

Filed in Articles, Encyclopedia, Ideas, Research News



The author’s daughter’s tool kit

A while back, Decision Science News put out a call on the Society for Judgment and Decision Making email list looking for “tools, methods to improve decision making, outcomes, and information communication / visualization”:

I’m interested in learning about tools and methods people in this community have created to improve decision making, decision outcomes, and information communication / visualization. It would be good to have examples of things that are a) finished, codified b) tested for effectiveness. Bonus points if they are field tested and/or tested for long-term retention.

This would include things like:
* Training programs incl. games, videos, procedures, tutorials
* Decision aids, calculators
* Elicitation techniques (e.g. SPIES, …)
* Changes in information format (e.g. frequency formats, …)
* Policies, procedures (e.g. Save More Tomorrow, …)
* Feats of choice architecture (e.g., reordering, ….)

To make the resulting list more useful to the community, it might be wise to structure your submissions like this:

Tool / method name:
One sentence description:
One sentence effectiveness test result:
Relevant cite(s):

Here are the responses we received. If you have more, put them in the comments, but please structure them as suggested above.


These tools are similar to the Distribution Builder (*) tool for eliciting probability distributions
(*) Goldstein, Daniel G., Johnson, Eric J. & Sharpe, William F. (2008). Choosing outcomes versus choosing products: Consumer-focused retirement investment advice. Journal of Consumer Research, 35(3), 440-456.

Quentin Andre’s Javascript Distribution Builder: https://quentinandre.github.io/DistributionBuilder

Don Moore and Uriel Haran’s SPIES elicitation tool: http://fbm.bgu.ac.il/lab/spies/spies.html

Charlie Strout’s Javascript Distribution Builder: https://github.com/sevenshadow/DistributionBuilder


Rick Larrick and Jack Soll created an online GPM (gallons per mile) calculator to help people compare cars/trade ins. http://gpmcalculator.com/


Larrick, R. P., & Soll, J. B. (2008). The MPG illusion. Science, 320(5883), 1593.
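The illusion the calculator targets comes from MPG being a reciprocal measure: fuel consumed is miles divided by MPG, so equal-looking MPG gains save very unequal amounts of gas. A quick sketch (the mileage figures below are invented for illustration):

```python
def gallons(miles: float, mpg: float) -> float:
    """Fuel consumed driving `miles` at `mpg` miles per gallon."""
    return miles / mpg

MILES = 10_000  # say, a year of driving

# Trading a 10 MPG car for a 20 MPG car...
save_low = gallons(MILES, 10) - gallons(MILES, 20)   # 500 gallons saved
# ...saves more than twice the gas of trading 25 MPG for 50 MPG.
save_high = gallons(MILES, 25) - gallons(MILES, 50)  # 200 gallons saved

print(save_low, save_high)
```

Expressing efficiency in gallons per mile (or per 100 miles) makes these savings linear and directly comparable, which is the point of the GPM calculator.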


Ian Krajbich writes “By imposing per-decision time limits, we help people divert time from difficult unimportant decisions to easier important ones.”

Effectiveness test result: Time-constrained subjects achieve better objective outcomes (higher earnings) when they have per-decision time limits, compared to when they are left to allocate time to decisions on their own.

Oud B, Krajbich I, Miller K, Cheong JH, Botvinick M, Fehr E. 2016 Irrational time allocation in decision-making. Proc. R. Soc. B 283: 20151439. http://dx.doi.org/10.1098/rspb.2015.1439


Rob Hamm writes “The log odds formula for combining multiple independent diagnostic cues corresponds to the formula for combining the impact of equal weights (and helium balloons) placed at different positions on a balance beam. We constructed a demonstration for one medical diagnosis domain (acute chest pain).”

Effectiveness test result: The math works if we make the naïve Bayesian assumption of independent impacts of different pieces of evidence. A small sample of medical students and faculty played with the program and found it reasonable.

Hamm RM, Beasley WH, Johnson WJ. A balance beam aid for instruction in clinical diagnostic reasoning. Medical Decision Making 2014; 34(7):854-862 (doi: 10.1177/ 0272989X14529623).
Hamm RM, Beasley WH. The balance beam metaphor: A perspective on clinical diagnosis. Medical Decision Making 2014; 34(7):841-853 (doi: 10.1177/0272989X14528755).
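Under the naive-Bayes independence assumption Hamm describes, each cue contributes its log likelihood ratio additively to the prior log odds, just as equal weights at different positions add their torques on a balance beam. A minimal sketch (the prior and likelihood ratios are invented for illustration, not taken from the chest-pain domain):

```python
import math

def posterior_probability(prior_prob: float, likelihood_ratios: list) -> float:
    """Combine independent diagnostic cues by adding log odds (naive Bayes)."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)  # each cue shifts the beam by its log LR
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical example: 10% prior, two cues favoring the diagnosis
# (LR = 4 and LR = 2) and one cue against it (LR = 0.5).
p = posterior_probability(0.10, [4.0, 2.0, 0.5])
print(f"posterior: {p:.3f}")
```

Because the combination is a sum of logs, the balance-beam metaphor is exact: cues for and against the diagnosis sit on opposite sides of the fulcrum at distances proportional to their log likelihood ratios.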


Eyal Pe’er writes: “An enhanced speedometer that presents, alongside regular speed information, the pace in minutes required to complete 10 miles/km at given levels of speed (http://journal.sjdm.org/12/121007/fig2.png).”

Effectiveness test result: When asked to estimate time saved when increasing speed (at various levels) participants who received the “paceometer” were correct at an overall average rate of about 58% compared to less than 20% in the control conditions. In another study, similar results were found on driving behavior using a driving simulator.

Peer, E., & Gamliel, E. (2013). Pace yourself: Improving time-saving judgments when increasing activity speed. Judgment and Decision Making, 8(2), 106.
Eriksson, G., Patten, C. J., Svenson, O., & Eriksson, L. (2015). Estimated time of arrival and debiasing the time saving bias. Ergonomics, 58(12), 1939-1946.
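The pace conversion behind the paceometer is simple: at speed s mph, covering 10 miles takes 600/s minutes, so the same speed increase buys far less time at high speeds than at low ones. A sketch (the speed values are arbitrary examples):

```python
def pace_minutes_per_10_miles(speed_mph: float) -> float:
    """Minutes needed to cover 10 miles at a constant speed."""
    return 10 / speed_mph * 60

# Going 40 -> 50 mph saves 3 minutes per 10 miles...
save_low = pace_minutes_per_10_miles(40) - pace_minutes_per_10_miles(50)
# ...while 70 -> 80 mph saves only about a minute.
save_high = pace_minutes_per_10_miles(70) - pace_minutes_per_10_miles(80)

print(round(save_low, 2), round(save_high, 2))
```

Because pace is linear in time while speed is not, displaying pace directly removes the curvature that drives the time-saving bias.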

Maarten Cuypers writes: “Online tool with risk communication and values clarification exercise to support treatment selection in (early) prostate cancer patients”

Effectiveness test result: More knowledge, more value congruent treatment choices, though lower information satisfaction.

Cuypers, Maarten, Lamers, Romy, Kil, Paul, Poll-Franse, L. van de, & Vries, Marieke de (2015). Impact of a web-based treatment decision aid for early-stage prostate cancer on shared decision-making and health outcomes: Study protocol for a randomized controlled trial. Trials, 16(231)
Lamers, R.E.D., Cuypers, M., Vries, M. de, Poll-Franse, L. van de, Bosch, J.L.H.R., & Kil, P.J.M. (2016). How do patients choose between active surveillance, radical prostatectomy and radiotherapy?: The effect of a preference sensitive decision aid on treatment decision making for localized prostate cancer. Urologic Oncology: Seminars and Original Investigations


Emre Soyer writes “A simulation based on a model (e.g., regression), which allows decision makers to enter their inputs and sequentially observe (also graph and/or store) the estimated outcomes.”

Effectiveness test result: In different contexts, participants related easily with the tool, trusted their simulated experience more than their own analyses based on descriptions, and made more accurate judgments about uncertainties.

Hogarth, R. M., & Soyer, E. (Winter 2015). Simulated experience: Making intuitive sense of big data. MIT Sloan Management Review, p. 49-54. (sloanreview.mit.edu/x/56215)
Hogarth R. M., & Soyer E. (2015). Communicating forecasts: The simplicity of simulated experience. Journal of Business Research, 68, 1800-1809
Hogarth R. M., & Soyer E. (2015). Providing information for decision making: Contrasting description and simulation. Journal of Applied Research in Memory and Cognition, 4, 221-228.
Hogarth R. M., Mukherjee, K., & Soyer, E. (2013). Assessing the chances of success: Naïve statistics vs. kind experience. Journal of Experimental Psychology: Learning, Memory and Cognition, 39, 14-32.
Hogarth, R. M., & Soyer, E. (2011). Sequentially simulated outcomes: Kind experience vs. non-transparent description. Journal of Experimental Psychology: General, 140, 3, 434-463.
M.A. Bradbury, T. Hens and S. Zeisberger, “Improving Investment Decisions With Simulated Experience,” Review of Finance, published online June 6, 2014.
C. Kaufmann, M. Weber and E. Haisley, “The Role of Experience Sampling and Graphical Displays on One’s Investment Risk Appetite,” Management Science 59, no.2 (February 2013): 323-340.
B.K. Hayes, B.R. Newell and G.E. Hawkins. “Causal Model and Sampling Approaches to Reducing Base Rate Neglect,” in “Proceedings of the 35th Annual Conference of the Cognitive Science Society,” eds. M. Knauff, M. Pauen, N. Sebanz and I. Wachsmuth (Austin, Texas: Cognitive Science Society, 2013.)
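The simulated-experience idea above can be sketched in a few lines: instead of reporting a point estimate and a standard error, let the decision maker enter an input and watch noisy outcomes arrive one at a time. The regression coefficients and noise level below are invented for illustration, not from any of the cited studies:

```python
import random

def simulate_outcomes(x: float, n_draws: int = 5, seed: int = 1) -> list:
    """Sequential outcome draws from a hypothetical fitted model
    y = 2 + 3x + noise, standing in for any regression."""
    rng = random.Random(seed)
    return [2 + 3 * x + rng.gauss(0, 4) for _ in range(n_draws)]

# The decision maker experiences the uncertainty draw by draw,
# rather than reading it off a confidence interval.
for y in simulate_outcomes(x=10.0):
    print(round(y, 1))
```

Each run of draws gives the user a feel for both the central tendency and the spread of outcomes, which is the "kind experience" the cited papers test.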


Olga Kostopoulou writes “A diagnostic support tool that integrates with the patient’s electronic health record, and presents physicians with diagnostic alternatives (according to patient age, sex, risk factors and current complaint) at the start of the consultation, BEFORE physicians elicit any further information from the patient.”

Effectiveness test result: Presenting family physicians with diagnostic alternatives early on, before eliciting any information themselves, increased diagnostic accuracy both in a study with computer-simulated patients, and in a study with standardised patients (actors).

Kostopoulou O, Rosen A, Round T, Wright E, Douiri A, Delaney BC. Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients. British Journal of General Practice 2015 Jan; 65(630): e49-e54. http://dx.doi.org/10.3399/bjgp15X683161

Kostopoulou O, Porat T, Corrigan D, Mahmoud S, Delaney BC. Supporting first impressions reduces diagnostic error: evidence from a high-fidelity simulation. British Journal of General Practice. In Press.

Note: The tool is still at the prototype stage and has not been field-tested yet. It is based on an ontology of medical diagnostic concepts, and will be open source eventually.


Aba Szollosi writes “We tested whether analogical encoding can foster the transfer from learning abstract principles to improving behavioral performance on a range of decision biases.”
Effectiveness test result: The method might be effective in eliminating biases on tasks where the violations of statistical principles are measured.

Relevant cite(s): Aczel, B., Bago, B., Szollosi, A., Foldes, A., & Lukacs, B. (2015). Is it time for studying real-life debiasing? Evaluation of the effectiveness of an analogical intervention technique. Frontiers in psychology, 6:1120. doi: 10.3389/fpsyg.2015.01120

Richard Hodgett writes about a tool he created to structure complex decision problems. ChemDecide is a suite of software tools which incorporates three Multi-Criteria Decision Analysis (MCDA) techniques: Analytical Hierarchy Process (AHP), Multi-Attribute Range Evaluations (MARE) and ELimination Et Choix Traduisant la REalité trois (ELECTRE III). The software has been used for addressing decisions such as route selection, equipment selection, resource allocation, financial budgeting and project prioritization.

Cite: Hodgett, R. E., Martin, E. B., Montague, G., & Talford, M. (2014). Handling uncertain decisions in whole process design. Production Planning & Control, 25(12), 1028-1038.

November 23, 2016

Researcher and postdoc positions in Computational Social Science at Microsoft Research NYC

Filed in Jobs


Microsoft Research NYC seeks outstanding applicants for researcher and postdoctoral positions in computational social science. Successful applicants will have strong quantitative and programming skills. For more information, see the call at the MSR NYC Computational Social Science website.


  • MSR-NYC is a seriously quantitative place. For the social science postdocs, applicants should have strong competence in computer programming, math, or statistics at the level of someone with a Bachelor’s or Master’s degree in CS, math, or stats. Simply meeting the stats requirements in a social science PhD program would not be enough to be considered.
  • In addition to computational or mathematical skills, applicants must have computational or statistical research interests to be considered.
  • The researcher positions are similar to professorships, with a focus on discovery and publication.
  • The postdocs are good preparation for a career in academia (often taken to defer starting a professorship by a year or two) and are not intended for people looking to move into industry.

November 14, 2016

Nominate a JDM researcher for the FABBS early career impact award

Filed in SJDM, SJDM-Conferences



FABBS (Federation of Associations in Brain and Behavioral Sciences) is a coalition of scientific societies that share an interest in advancing the sciences of mind, brain, and behavior.

To recognize scientists who have made outstanding research contributions, FABBS grants early career impact awards. (Here early means within 10 years of receiving a PhD.)

Awards are rotated tri-annually among various subsets of societies that are members of this larger federation.

In 2017, the subset includes the Society for Judgment and Decision Making (SJDM).

Accordingly, we are seeking nominations for the FABBS early career impact award.

If you wish to recognize the contributions of a judgment and decision making (JDM) scholar who obtained their PhD in the last 10 years, please email the name of your nominee to Shane Frederick (shane.frederick at yale.edu) by this Friday, November 18th, 2016.

The SJDM executive board will review the set of nominees and make our recommendation to FABBS by November 30, 2016.

Those seeking more information about this award can obtain it here:


November 9, 2016

4:1 longshot Trump wins election

Filed in Encyclopedia, Ideas, Research News, Tools



We know Decision Science News isn’t your main news source and assume you know that Donald Trump surprised many and won the election last night.

Models like the Princeton Election Consortium, which put Clinton’s probability of winning at 99%, probably need re-examining. Even PollyVote, which averages polls, models, expert judgment, prediction markets, and citizen forecasts, forecast that Clinton would win with 99% probability. It’s an average of 20 sources, none of which predicted Trump would win the most electoral votes. Historically, the average of many predictions is hard to beat.

The PredictIt prediction market (pictured above) mispredicted it, though prediction markets weren’t that bad compared to other classes of forecast. In November, PredictIt was assigning Trump a 25-30% probability of winning. We bet against Trump on PredictIt when he was at 36% (roughly 2:1 against) and lost. This is sad for more than one reason.

The prediction market Hypermind (pictured below; the lower graph is zoomed to November) fared similarly, giving Trump over a 25% chance for much of November (dates are written DD-MM-YY because Europe).


The Iowa Electronic Markets prediction market results are below. This is actually a winner-take-all market based on the popular vote plurality winner, but it’s close enough for jazz, meaning that people probably treat it the same as if it predicted the electoral vote winner (*). Note that this chart is on a different time scale (and we don’t have time to do anything about that), but focus on the period since August to compare to PredictIt and the period since October to compare to Hypermind. It had some volatility, going from 40% Trump down to 10% and back up to 40% a week before the election, though its average November prediction is comparable to PredictIt and Hypermind.

The summary is that all the prediction markets were wrong, but they weren’t steadily predicting 10:1 against Trump either.


Prediction market predictions were less wrong, going by something like the Brier score. Prediction markets predicted something near 20% to 25% Trump, and a 4:1 or 3:1 horse won the race. As the French say, ça arrive (it happens).
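To make "less wrong" concrete: for a single binary event, the Brier score is the squared gap between the forecast probability and the outcome (1 if the event happened, 0 if not), with lower scores better. Taking Trump's win as the event and the probabilities mentioned above:

```python
def brier(forecast_prob: float, outcome: int) -> float:
    """Brier score for one binary event: (forecast - outcome)^2, lower is better."""
    return (forecast_prob - outcome) ** 2

# Trump won, so outcome = 1.
markets = brier(0.25, 1)  # prediction markets: roughly 25% Trump
models = brier(0.01, 1)   # the 99%-Clinton models: 1% Trump

print(markets, models)
```

A 25% forecast scores 0.5625 against the realized outcome, while a 1% forecast scores 0.9801, so the markets were substantially less wrong than the near-certain models.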

We could talk about individual predictors like fivethirtyeight.com (below), which was volatile but still gave Trump over 25% in November, and the Keys to the White House, a simple tallying model that did, if only barely, predict that Trump would win. However, we feel it’s better to talk about classes of predictions (like expert judgments, prediction markets, or models) than individual cases. Also, fivethirtyeight.com made three different forecasts, so, how fair is that?


(*) One interesting thing is that the IEM market was correctly predicting that Hillary would capture the majority of the popular (as opposed to electoral) vote going into the election. On election day, it moved the wrong way (predicting Trump would win the popular vote). The day after the election it predicted a 95% chance that Hillary would win the popular vote.

November 4, 2016

2016 SJDM conference program available

Filed in SJDM, SJDM-Conferences



What: SJDM 2016 Conference
When: November 18 to 21, 2016
Where: Sheraton Boston Hotel, 39 Dalton St, Boston, MA 02199
Special Features
* Plenary address by Linda Babcock
* Tribute to Baruch Fischhoff
* Presidential address by Dan Goldstein
* Women in JDM networking event
* Einhorn Award revelation
* Social event at a swank speakeasy

As the Society for Judgment and Decision Making conference is right around the corner, it’s time to make your last minute travel and hotel arrangements if you haven’t already. There have been quite a few early online registrations, and total registrations are expected to number around 675. It’s too late to register online, but you can do so in person at the conference (which 15% to 20% of people do). At $400 onsite for members ($200 for student members), it’s one of the least expensive conferences around. It would be cheaper than that but, you know, Boston. If you aren’t a member, you can join here for $50.

You can download the current copy of the program here. As you know, the talks were selected by a representative panel of reviewers this year and we see many amazing talks and posters on the program.

See you soon in Boston!