
February 26, 2015

Don’t be that person who mixes up opt-in and opt-out

Filed in Encyclopedia, Ideas
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)



We talk a lot about default policies, in particular opt-in policies and opt-out policies (e.g., opt-in vs. opt-out policies for membership in organ donor pools).

When we speak about this stuff, people asking us questions often use the terms backwards or incorrectly: they say opt-in when they mean opt-out, they say opt-out when they mean opt-in, or they use either one when talking about forced choice. Here’s an example of the Chief Technology Officer of Lenovo making the mistake while discussing the Lenovo adware fiasco.

Q. What kind of quality assurance process would even allow for installing this kind of adware on Lenovo machines?

A. At a high level, the team that defines what is in these products will encounter stuff in the market, then they will say, “Here is something we want to do,” and they will engage an engineering team. Then we will go through this thing and make sure it adheres to our policies and practices. We make sure it doesn’t know who the individual is. We make sure it’s opt-in. But what was completely missed in this was the security exposure caused by the design of the certificate authority they used.

Q. There was nothing about this experience that was opt-in.

A. When you buy a Lenovo machine and turn it on, this was one of the programs that was presented to you. At that point, you could click a button that says, “I don’t want to use this.”

Q. I have to press you on that. What did the opt-in prose look like? Nobody recalls anything about this being opt-in.

A. I don’t have it in front of me, but I will get it to you. We want to make this right going forward. Part of this is what we are doing to fix the problem and what are we doing to make this right going forward. To that end, we’re trying to present – in much more plain English — a view of what these programs do.

If the program activated the adware for those who didn’t click “I don’t want to use this”, it was opt-out, not opt-in.

If the program made you answer before activating your computer, it was forced choice (or mandated choice), not opt-in.


Opt-in means users are out by default and can choose (i.e., opt) to be in.

Opt-out means users are in by default and can choose (i.e., opt) to be out.

Forced choice means people are deprived of the product or service unless they choose to be in or out.

For many policies, forced choice is not an option. Take organ donation: a state can require you to choose before issuing a driver’s license, withholding the license until you decide. But anyone who forgoes a driver’s license is never forced to choose, and the country’s default applies to them. Organ donor status, therefore, cannot be made a true forced choice.
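The three policies can be made concrete in code. Here is a minimal sketch in Python; the `Policy` names and the `enrolled` helper are our own illustration, not anyone’s production system:

```python
from enum import Enum

class Policy(Enum):
    OPT_IN = "opt-in"          # out by default; users may choose (opt) to be in
    OPT_OUT = "opt-out"        # in by default; users may choose (opt) to be out
    FORCED_CHOICE = "forced"   # no default; no service until the user chooses

def enrolled(policy, choice=None):
    """Return whether a user ends up enrolled.

    `choice` is None when the user never made an active decision.
    """
    if policy is Policy.FORCED_CHOICE:
        if choice is None:
            raise ValueError("no service until the user chooses in or out")
        return choice
    if choice is not None:               # an active choice always wins
        return choice
    return policy is Policy.OPT_OUT      # otherwise the default applies

# The Lenovo case: adware active unless you clicked "I don't want to use this"
assert enrolled(Policy.OPT_OUT) is True   # did nothing: you're in
assert enrolled(Policy.OPT_IN) is False   # did nothing: you're out
```

The asymmetry is entirely in the last line: under opt-out, doing nothing means you are in; under opt-in, doing nothing means you are out.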

February 20, 2015

Put the size of countries in perspective by comparing them to US states

Filed in Ideas, R


This Miller projection (a Mercator variant) is famous for distorting land areas

Like Jake Hofman, we at Decision Science News love putting things in perspective. Watch this space for a paper we are writing on the topic. We recently thought:

  • Wouldn’t it be cool for US readers to see how big foreign countries are by comparing them to presumably familiar US states?
  • Wouldn’t it be cool for non-US readers to see how big US states are by comparing them to presumably familiar countries?
  • Wouldn’t it be fun to group countries by area?

To keep things simple, we use only three kinds of units: the area of each state, twice the area of each state, and the area of the entire USA. We apply the doubled-state unit only to big countries (those larger than 2,500,000 sq km). For compactness, we do not provide the reverse mapping from countries to US states. R code is available upon request.
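The binning amounts to a floor lookup: a country is “as big as” the largest unit whose area does not exceed the country’s own. A sketch of that logic in Python (the original analysis was in R; the rule is our reading of the lists that follow, and the units below are just a sample of them):

```python
import bisect

# A few comparison units (sq km) from the lists below, sorted ascending.
UNITS = [
    ("Rhode Island", 4002), ("Delaware", 5061), ("California", 423999),
    ("Texas", 695673), ("Alaska", 1700133), ("Twice Alaska", 3400266),
    ("the United States", 9826630),
]
AREAS = [area for _, area in UNITS]

def size_bin(country_area):
    """Label: the largest unit whose area does not exceed the country's."""
    i = bisect.bisect_right(AREAS, country_area) - 1
    if i < 0:
        return "Smaller than Rhode Island"
    return "As big as " + UNITS[i][0]

# Metropolitan France (551,695 sq km) falls between California and Texas:
assert size_bin(551_695) == "As big as California"
```

With the full set of fifty states (plus the doubled units for the largest countries), the same lookup reproduces the groupings below.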

Here you are: a list of US states along with the countries and dependencies that are roughly as large. (A state heading with no entries means no country falls in its size bin.)
Smaller than Rhode Island (4,002 sq km):

Andorra, Antigua and Barbuda, Bahrain, Barbados, Bermuda, Comoros, Cook Islands, Dominica, Gaza Strip, Grenada, Guadeloupe, Guam, Guernsey, Holy See (Vatican City), Hong Kong, Kiribati, Liechtenstein, Luxembourg, Macau, Maldives, Malta, Marshall Islands, Martinique, Mauritius,  Micronesia (Federated States of), Monaco, Nauru, Palau, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Samoa, San Marino, Seychelles, Singapore, São Tomé and Príncipe, Tonga, Tuvalu


As big as Rhode Island (4,002 sq km):

Cape Verde, French Polynesia


As big as Delaware (5,061 sq km):

Brunei, Cyprus, Puerto Rico, Trinidad and Tobago, West Bank


As big as Connecticut (14,359 sq km):

Bahamas, East Timor, Falkland Islands, Fiji, Gambia (The), Jamaica, Kuwait, Lebanon, Nagorno-Karabakh Republic, Qatar, Swaziland, Vanuatu


As big as New Jersey (22,590 sq km):

Belize, Djibouti, El Salvador, Israel, Slovenia


As big as Vermont (24,903 sq km):

Republic of Macedonia


As big as Massachusetts (27,337 sq km):



As big as Hawaii (28,314 sq km):

Albania, Armenia, Burundi, Equatorial Guinea, Solomon Islands


As big as Maryland (32,134 sq km):

Belgium, Bhutan, Denmark, Estonia, Guinea-Bissau, Lesotho, Moldova, Netherlands, Republic of China (Taiwan), Switzerland


As big as West Virginia (62,758 sq km):

Bosnia and Herzegovina, Costa Rica, Croatia, Dominican Republic, Georgia, Ireland, Latvia, Lithuania, Sierra Leone, Slovakia, Sri Lanka, Togo


As big as South Carolina (82,898 sq km):

Austria, Azerbaijan, Czech Republic, Panama, United Arab Emirates


As big as Maine (91,652 sq km):

French Guiana, Jordan, Portugal


As big as Indiana (94,327 sq km):

Hungary, South Korea


As big as Kentucky (104,664 sq km):

Iceland, Serbia and Montenegro


As big as Tennessee (109,158 sq km):



As big as Virginia (110,771 sq km):

Benin, Bulgaria, Cuba, Honduras, Liberia


As big as Pennsylvania (119,290 sq km):

Eritrea, Malawi, North Korea


As big as Mississippi (125,443 sq km):



As big as Louisiana (134,273 sq km):



As big as New York (141,090 sq km):

Nepal, Tajikistan


As big as Iowa (145,754 sq km):



As big as Wisconsin (169,652 sq km):

Suriname, Tunisia


As big as Missouri (180,545 sq km):



As big as Oklahoma (181,048 sq km):



As big as Washington (184,674 sq km):



As big as South Dakota (199,742 sq km):

Kyrgyzstan, Senegal


As big as Kansas (213,109 sq km):



As big as Idaho (216,456 sq km):



As big as Minnesota (225,181 sq km):

Laos, Romania, Uganda


As big as Michigan (250,737 sq km):

Ghana, Guinea, United Kingdom


As big as Colorado (269,618 sq km):

Burkina Faso, Gabon, New Zealand, Western Sahara


As big as Nevada (286,367 sq km):



As big as Arizona (295,274 sq km):

Italy, Philippines


As big as New Mexico (314,924 sq km):

Congo (Republic of the), Côte d’Ivoire, Finland, Malaysia, Norway, Oman, Poland, Vietnam


As big as Montana (380,847 sq km):

Germany, Japan, Zimbabwe


As big as California (423,999 sq km):

Cameroon, France, Iraq, Morocco, Papua New Guinea, Paraguay, Spain, Sweden, Thailand, Turkmenistan, Uzbekistan, Yemen


As big as Texas (695,673 sq km):

Afghanistan, Bolivia, Botswana, Central African Republic, Chile, Colombia, Egypt, Ethiopia, Kenya, Madagascar, Mauritania, Mozambique, Myanmar, Namibia, Nigeria, Pakistan, Somalia, Tanzania, Turkey, Ukraine, Venezuela, Zambia


As big as Alaska (1,700,133 sq km):

Algeria, Angola, Chad, Congo (Democratic Republic of the), Greenland, Indonesia, Iran, Libya, Mali, Mexico, Mongolia, Niger, Peru, Saudi Arabia, South Africa, Sudan


Twice as big as Alaska (3,400,266 sq km):

Argentina, India, Kazakhstan


As big as the United States (9,826,630 sq km):

Australia, Brazil, Canada, China


Twice as big as the United States (19,653,260 sq km):



Map credit: http://en.wikipedia.org/wiki/List_of_map_projections#mediaviewer/File:Miller_projection_SW.jpg

Country Areas: http://simple.wikipedia.org/wiki/List_of_countries_by_area

February 11, 2015

How to get a no-nonsense weather forecast

Filed in Encyclopedia, Ideas


Click to visit

People ask us, “You folks at Decision Science News, how do you get your US weather forecasts?”

Because we like graphs and probabilities, we use a page the US National Weather Service puts out that shows, for every hour over the next few days, the predicted temperature, the chance of precipitation, the predicted amount of rain, and the predicted amount of snow. That’s it.

Here’s how to get graphs for your location (Feb 2015)

1. Go to weather.gov
2. Enter your location code where it says “Local forecast by ‘City, St’ or ZIP code” at the top left.
3. On the resulting page, scroll all the way to the bottom and look for the link “Hourly Weather Graph” under “Additional Forecasts and Information” (or click the colorful “Hourly Weather Graph” at right).
4. On the resulting page, there will be a graph, but it will be a hot mess full of stuff you don’t care about. Stuff like dew point and wind direction. Bad defaults. Uncheck everything except:

  • Predicted temperature
  • Precipitation potential
  • Rain
  • Snow

5. Season to taste.
6. Save the resulting URL. Add it to your bookmarks toolbar. Make it your homepage. We have.

The link we use here in New York City is:


Or, as a link: http://1.usa.gov/1ELoek6

Note that you can just steal our link and replace “40.77664” with your latitude and “-73.95215” with your longitude, and it should just work inside the US.

If you don’t know your latitude or longitude, just go to Google or Bing and type “Los Angeles, CA latitude longitude” (or whatever) and it will show it to you. Note that “100 West” would be written as “-100” in the URL.
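If you check several locations, the substitution can be scripted. A small sketch; the MapClick URL pattern below is our assumption based on how weather.gov’s hourly-graph links looked in early 2015, so verify it against your own saved link before relying on it:

```python
# Build a weather.gov hourly-graph URL for a given US latitude/longitude.
# The URL pattern (MapClick.php with FcstType=graphical) is an assumption;
# substitute the pattern from your own saved link if it differs.

def hourly_graph_url(lat, lon):
    """Return the hourly weather graph URL for (lat, lon), US only."""
    return ("http://forecast.weather.gov/MapClick.php"
            f"?lat={lat}&lon={lon}&FcstType=graphical")

print(hourly_graph_url(40.77664, -73.95215))   # New York City
```

Remember that western longitudes are negative, so Los Angeles is roughly `hourly_graph_url(34.05, -118.24)`.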

Enjoy your no-nonsense, accurate, short-term, localized weather forecast.

February 10, 2015

2015 Summer Institute on Bounded Rationality in Berlin

Filed in Conferences, Programs



Until March 8, 2015, applications are open for the 2015 Summer Institute on Bounded Rationality, which will take place on June 4–11, 2015, at the Max Planck Institute for Human Development in Berlin, Germany.

The Summer Institute will gather renowned scientists and talented young researchers from around the globe for an interdisciplinary dialogue on human decision making. The Summer Institute aims to foster understanding of the process and quality of decision making when the conditions of rational choice theory are not met. To this end, it offers a forum for decision-making scholars from various disciplines to share their approaches, discuss their research, and be inspired.

This year’s Summer Institute focuses on how humans make decisions in the wild, including the economy, and how they should make those decisions. The keynote address will be given by Stanford business professor Kathleen Eisenhardt. On behalf of the directors of the Summer Institute, Gerd Gigerenzer and Ralph Hertwig, we invite young decision-making scholars from all fields to apply. Participation will be free, accommodation will be provided, and travel expenses will be partly reimbursed.

Applications are open until March 8, 2015.
Save the deadline: bit.ly/SI2015_deadline
See the website: bit.ly/SI2015_info

Please feel free to email any questions you might have to si2015@mpib-berlin.mpg.de

To pass on this call for applications, you can find a PDF version here: bit.ly/SI2015_cfa

February 6, 2015

2016 Invitational Choice Symposium – Lake Louise, Alberta

Filed in Conferences



The 10th Triennial Invitational Choice Symposium will be held at Lake Louise, Alberta (in the heart of the Canadian Rockies) May 14-17, 2016. It will be hosted by the University of Alberta, and chaired by Gerald Häubl and Peter Popkowski Leszczyc.

The call for workshop proposals will be issued in May 2015, and the submission deadline will be September 15, 2015.

About the Choice Symposium:
The purpose of the Triennial Invitational Choice Symposium is to provide a forum for in-depth interaction among the world’s preeminent scholars (from various scientific disciplines) in the domains of human choice behavior and decision making. These domains are defined broadly. In particular, the Choice Symposium is designed to facilitate discourse that will lead to advances both in our theoretical/substantive understanding of how people make choices and in the methods for studying choice behavior. The Symposium entails a number of parallel workshops on specific, well-defined themes. Each of these workshops is (a) organized by two or three thought leaders on a theme that they propose and (b) attended by a total of 10-12 additional participants who are invited by the workshop organizers.

About the Venue:
Lake Louise is located in Alberta’s Banff National Park, a UNESCO World Heritage Site. The venue of the 2016 Choice Symposium is the Fairmont Chateau Lake Louise, an iconic lakefront hotel surrounded by spectacular mountains.

For more information:

January 27, 2015

Put your model where your mouth is: a choice prediction competition

Filed in Ideas, Programs, Research News




Ido Erev, Eyal Ert, and Ori Plonsky (henceforth “we”) invite you to participate in a new choice prediction competition. The goal of this competition is to facilitate the derivation of models that can capture the classical choice anomalies (including the Allais, St. Petersburg, and Ellsberg paradoxes, and loss aversion) and provide useful forecasts of decisions under risk and ambiguity (with and without feedback).

The rules of the competition are described at http://departments.agri.huji.ac.il/cpc2015. The submission deadline is May 17, 2015. The prize for the winners is an invitation to be a co-author of the paper that summarizes the competition (the first part can be downloaded from http://departments.agri.huji.ac.il/economics/teachers/ert_eyal/CPC2015.pdf).

Here is a summary of the basic idea. We ran two experiments (replication and estimation studies, both described on the site), and plan to run a third (a target study) during March 2015. To participate in the competition, email us (eyal.ert at mail.huji.ac.il) a computer program that predicts the results of the target study.

The replication study replicated 14 well-known choice anomalies. The subjects faced each of 30 problems for 25 trials, received feedback after the 6th trial, and were paid for a randomly selected choice. The estimation study examined 60 problems randomly drawn from the space of problems from which the replication problems were derived. Our analysis of these 90 problems (see http://departments.agri.huji.ac.il/cpc2015) shows that the classical anomalies are robust, and that the popular descriptive models (e.g., prospect theory) cannot capture all the phenomena with one set of parameters. We present one model (a baseline model) that can capture all the results, and challenge you to propose a better model. The models will be compared on their ability to predict the results of the new target experiment. You are encouraged to use the results of the replication and estimation studies to calibrate your model. The winner will be the acceptable model (see the criteria on the site) that provides the most accurate predictions (lowest mean squared deviation between the predicted choice rates and the choice rates observed in the target study).
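The scoring rule, mean squared deviation between predicted and observed choice rates, takes only a few lines to compute. A Python sketch (the function and variable names are ours):

```python
def mean_squared_deviation(predicted, observed):
    """Competition score: lower is better.

    `predicted` and `observed` are equal-length sequences of choice rates
    (proportions in [0, 1]), one entry per problem in the target study.
    """
    if len(predicted) != len(observed):
        raise ValueError("one prediction per problem is required")
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

# A model that predicts every choice rate exactly scores 0.0
assert mean_squared_deviation([0.3, 0.7], [0.3, 0.7]) == 0.0
```

A model that hedges everything at 0.5 scores the mean of the squared distances from 0.5, which is why sharp, well-calibrated predictions win.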

January 23, 2015

Save the date: ACR 2015, Oct 1-4, New Orleans

Filed in Conferences



We invite you to attend the 2015 North American Conference of the Association for Consumer Research, to be held at the Hilton New Orleans Riverside, from Thursday, October 1 through Sunday, October 4. The conference theme is Advancing Connections. It is inspired by a desire to build better connections across different research paradigms and approaches and to facilitate connections among academics, practitioners, and public policy makers, as well as with consumers. In recent years, many members of the ACR community have expressed the desire for more research endeavors that take a broader perspective and have the potential to make a greater impact on theory and practice. We hope and believe that when we individually and collectively reach across research silos and make meaningful connections, it promotes rigorous and relevant work that generates important insights about consumer behavior.

We hope this year’s conference location will itself encourage broad participation: New Orleans advances connections among a wide range of cuisines, musical styles (particularly as the birthplace of jazz), and historic celebrations, most importantly, of course, Mardi Gras. The Hilton Riverside has a prime downtown location on the banks of the Mississippi River, steps from the streetcar lines and three blocks from the French Quarter. New Orleans is served by Louis Armstrong New Orleans International Airport (MSY).

Full information available at


Conference Co-chairs:
– Kristin Diehl, University of Southern California
– Carolyn Yoon, University of Michigan

January 14, 2015

Choose a frequent flier program – 2015 Edition

Filed in Ideas, Programs, Tools



A few weeks ago, we posted a tool to help you choose a frequent flier program that is right for where you live.

But it had some limitations:

  • It was ugly.
  • It had no pretty color-coded graphs.
  • It didn’t let you select multiple origin airports.
  • It didn’t let you select destination airports, let alone multiple ones.

Well, Jake Hofman and I have fixed all that. We present the new “choose an airline loyalty program” tool for 2015. It has all the bullet-pointed goodness advertised above. You can leave the destination field blank to see a count of all departures, or fill it in to narrow things down to the places you go in the USA.

Try it. Switch your frequent flier program (if desired). Reap the benefits. Let us know how we can improve it. Or fork it “from the git hub” and do it yourself. This was made possible with R, dplyr and d3.



January 9, 2015

SPUDM 2015, August 16 – 20, 2015 Budapest, Hungary

Filed in Conferences



The European Association for Decision Making invites you to attend its biennial conference, the 25th Subjective Probability, Utility, and Decision Making Conference (SPUDM 25), which will be held at the Corvinus University of Budapest, Hungary, on August 16-20, 2015.

Submissions of paper abstracts, poster abstracts, and proposals for workshops are invited on any topic in basic and applied judgment and decision making research.

Deadline for all submissions is March 8th, 2015.

The organizing committee is pleased to announce that the conference will feature the following invited speakers:

* Barbara Mellers, University of Pennsylvania, USA
* Nick Chater, Warwick Business School, UK
* Botond Koszegi, Central European University, Hungary

The scientific committee:

* Richárd Szanto, Corvinus University of Budapest (Chair)
* Balazs Aczel, Eötvös Loránd University
* Ido Erev, Technion – Israel Institute of Technology
* Andreas Glöckner, Göttingen University
* Ana Franco-Watkins, Auburn University

The call for papers is available at: http://www.spudm25.eu

Attending this meeting will also be an opportunity to discover Budapest, a city of diversity that bears the marks of many historical eras: feel the Turkish atmosphere at the burial monument of Gül Baba, wander the rustic streets and monuments of the Castle district, or witness the rapid transformation of the 19th century by walking along Andrássy street and the boulevards.

January 2, 2015

Those annoying animated ads may cost more than they are worth to websites

Filed in Encyclopedia, Ideas, Research News



The Globe and Mail reports on a recent Journal of Marketing Research article [Download] by Microsoft researchers Dan Goldstein, Sid Suri, Preston McAfee, Fernando Diaz and Northeastern University graduate student Matthew Ekstrand-Abueg entitled “The economic and cognitive costs of annoying display advertisements”.

In the Globe and Mail, Susan Krashinsky writes:

The researchers studied the impact of the annoying ones [ads] on a group of people paid to carry out the e-mail categorization task, versus two other groups doing the same, who were shown non-annoying ads or no ads at all.

People doing the task looked at one e-mail per page, and each page had two banner ads on either side – either two annoying ones, or two that were not annoying – or just white space in the margins.

The researchers experimented with different levels of pay per e-mail the participants reviewed; unsurprisingly, people who were paid more looked at more pages. But in every case, people shown annoying ads looked at fewer pages than people in the same pay scale who were shown non-annoying or no ads.

The math tells a troubling tale: Looking at the behaviours across pay scales, people in this experiment who saw bad ads had to be paid .115 cents more per page to match the level of work done by someone shown good ads, and .135 cents more to match the level of someone shown no ads at all.

That may not sound like a lot (the pay per-page for the task was relatively low because it took so little time to do), but when expanded to the cost per thousand impressions (CPM or cost-per-mille) – the metric by which ads are commonly sold – it is significant. It comes out to $1.15 CPM difference between bad ads and good ads. Considering that the CPMs of many online ads can be very low, depending on a number of factors, the revenue they provide to publishers may be outstripped by users dropping off their sites.

The figure above shows the estimates of a model the researchers fit to the data.

On the x axis we see how much people were paid to classify an email. The three pay conditions were .2, .4 and .6 cents per email.

The y axis shows the number of emails people actually classified before quitting.

The colored lines show the advertising conditions people were randomly assigned to. The red condition saw annoying ads, the blue condition saw harmless ads, and the green condition saw no ads. The graph shows that the more people are paid, the more emails they classify, but the more annoying the ads, the fewer emails they classify.

A little algebra shows that you need to pay people more than a dollar CPM (cost per thousand impressions) to make up for the dropout caused by the annoying ads.
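That algebra is just a unit conversion: the per-page compensating differential in cents, times 1,000 pages (impressions), divided by 100 cents per dollar. In Python, using the figures quoted above:

```python
# Convert the per-page compensating differentials from the study into
# CPM (cost per thousand impressions) figures.

def to_cpm_dollars(cents_per_page):
    """Cents per page, scaled to dollars per thousand pages."""
    return cents_per_page * 1000 / 100

bad_vs_good = to_cpm_dollars(0.115)   # about $1.15 CPM vs. good ads
bad_vs_none = to_cpm_dollars(0.135)   # about $1.35 CPM vs. no ads
```

So unless an annoying ad earns the publisher more than roughly $1.15 CPM over a harmless one, the lost page views cost more than the ad pays.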

Interestingly, websites are typically paid less than $1 CPM to run annoying ads.

This technique for measuring the compensating differential was invented by Michael Toomim and colleagues.

Daniel G. Goldstein, Siddharth Suri, R. Preston McAfee, Matthew Ekstrand-Abueg, and Fernando Diaz (2014) The Economic and Cognitive Costs of Annoying Display Advertisements. Journal of Marketing Research: December 2014, Vol. 51, No. 6, pp. 742-752.
doi: http://dx.doi.org/10.1509/jmr.13.0439