Biases in choices about fairness: Psychology and economic inequality

Judgment and Decision Making, Vol. 10, No. 2, March 2015, pp. 198-203

Zachary Michaelson*

This paper investigates choices about “distributional fairness” (sometimes called “distributive justice”), i.e., selection of the proper way for resources to be distributed within a group. The study finds evidence that several of the same biases found in risky decision making also apply to choices about distributional fairness, focusing in particular on the key biases that underlie prospect theory. The evidence comes from a novel thought experiment regarding the fairness of resource distributions, in which the percentage of individuals who gain or lose and the sizes of those gains and losses are manipulated. Shared biases may mean that similar heuristics are being employed. The mechanism behind this result leaves room for future exploration, as do the implications of the finding for related applications in inequality research.


Keywords: distributional fairness, Allais paradox, isolation effect, certainty effect, peanuts effect, inequality, reflection effect, prospect theory.

1  Introduction

Much of the research on biases has focused on choices about risks to oneself, called here “risky choice”. These are biases of risk aversion and risk seeking. In contrast, this study focuses on choices about risks to a group of others, called here “distributional choices”. These are biases of inequality aversion or fairness. Choices about risk and inequality may be directly linked. If so, they may share some of the same biases, and may even share psychological mechanisms such as heuristics.

In philosophy, risky and distributional choices have a strong connection that results from a method for trying to make objective decisions about inequality and fairness. Harsanyi and Rawls each suggested imagining that one could be randomly reassigned to be any member of society, from poorest to wealthiest (Harsanyi, 1955; Rawls, 1957). They proposed that the choices one would make from this perspective, Rawls’ “original position”, would be objectively fair. In this thought experiment, it turns out that choices about inequality and risk are also literally the same. For example, “2% of society lives in poverty” is a statement about inequality. “You have a 2% chance of being assigned to living in poverty” is a statement about risk. In this scenario, those two statements are equivalent. This principle of an “assignment gamble”—or “veil of ignorance”, as Rawls calls it—is applicable whether we are discussing income in large societies or who will get the largest piece of cake.

In general, for a decision maker in the original position and behind the veil of ignorance, percentages regarding distributions in the group are equivalent to probabilities of outcomes in the assignment gamble. Rawls argued that it is most rational to be highly risk averse when deciding from the original position, given the huge stakes, and he proposed a “maximin” decision rule for justice (maximizing the interests of the least well off), which he called the “difference principle” (Rawls, 1971). In recommending this type of high risk aversion as part of fairness, he was also advocating high inequality aversion and a compression of the distribution of outcomes in society (from the bottom up). Harsanyi, on the other hand, advocated that individuals make choices about fairness the same way he believed they should about risk, based upon their expected utility using a von Neumann-Morgenstern utility function (Harsanyi, 1975). With this debate began the literature on exactly which decision rules, preferences, and utility functions individuals should exhibit when making distributional choices.
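To make the two positions concrete, they can be written as decision rules over the assignment gamble. The notation below is added here only for illustration (it is not how either author stated the rules): a distribution gives outcome x_i to a fraction p_i of society, which behind the veil of ignorance is a gamble giving the decision maker outcome x_i with probability p_i.

\[
W_{\text{Harsanyi}} = \sum_i p_i \, u(x_i) \qquad \text{(maximize expected von Neumann-Morgenstern utility)}
\]
\[
W_{\text{Rawls}} = \min_i x_i \qquad \text{(maximin: maximize the position of the least well off)}
\]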

Economists have also explored the link between distributional and risk preferences with quantitative methods. Economic theorists have established, at least in the mathematics of utility functions, that more risk aversion results in more inequality aversion (Vickrey, 1945), and that the reverse is also true: more inequality aversion results in more risk aversion (Chambers, 2012). Consistent with this theory, empirical studies have found that the degree to which individuals are risk averse is correlated with their degree of inequality aversion (Ferrer-i-Carbonell & Ramos, 2010; Carlsson, Daruvala & Johansson-Stenman, 2005).
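A standard way to see this equivalence, offered here as an illustration rather than as the specific construction of the cited papers, is the iso-elastic case, in which a single parameter plays both roles:

\[
u(x) = \frac{x^{1-\rho}}{1-\rho}, \qquad \rho = -\,\frac{x\,u''(x)}{u'(x)} ,
\]

so that \(\rho\) is the coefficient of relative risk aversion. Evaluating a distribution \((x_i, p_i)\) behind the veil of ignorance with the same \(u\),

\[
W = \sum_i p_i \, \frac{x_i^{1-\rho}}{1-\rho} ,
\]

makes \(\rho\) simultaneously the inequality-aversion parameter of the resulting (Atkinson-style) social welfare function, so that raising \(\rho\) increases risk aversion and inequality aversion together.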

But the mathematics of utility functions and cross-sectional correlations of large groups do not reveal much about psychology and decision processes. Not all behavior follows the dictums of economic calculus, and population correlations could be picking up personality trait differences or any number of other attributes. It remains to be established whether the actual decision processes for choices about risk and inequality are similar and what the consequences of this may be. If the same psychological biases appear in both types of decisions, it could serve as evidence of a connection at the decision process level. This follows the established practice of evoking inconsistencies in judgments that should appear only if a decision maker is using some heuristic (Messick, 1993).

In most of the instances in which “heuristics” for distributional choice have been explored, authors have discussed moral commitments that serve as decision rules, rather than the “rules of thumb” or “time-saving approximations” that the term “heuristic” has traditionally meant (Baron, 1993). An example is the “equality heuristic”, which proposes that the desire for equal outcomes is a durable guidepost across a variety of distributional choice settings and problems (Harris & Joyce, 1980; Messick, 1993). This commitment to favor equality has been found even in situations where what counts as equal can have different meanings shaped by context and framing (Bar-Hillel & Yaari, 1993). A similar result has been found in negotiation settings, where the commitment to equality has been referred to as a “reference point”, itself a recognized heuristic tool (Loewenstein et al., 1989; Hoffman & Spitzer, 1985). So, while there is a literature on heuristics in distributional choice, the heuristics it describes are so far largely different from those discussed in the literature on risky decision making.

One of the closest findings on a shared heuristic and bias is with the “identifiable victim effect” (Jenni & Loewenstein, 1997) and related “psychic numbing” effect (Slovic, 2007): the plight of a specific individual is more concerning than statistics about many similar victims. This appears to be a potential example of the “availability heuristic” and “availability bias”. To see this, consider the risky choice experiment of Johnson et al. (1993) on availability, where the authors found that people were willing to pay more for insurance on their plane crashing due to terrorism than for a general insurance policy covering any form of disaster in their flight. Just as the crash due to terrorism is but a subpart of all plane disasters, so is the one starving child but a subpart of the starvation reflected in a statistic. The paradox in Johnson et al. is that one should not be more concerned about the risk of a terrorist plane crash than all possible forms of crash. Likewise, one should not be more concerned about one starving child than all the starving children. In both cases though, the availability and vividness of the identifiable risk or victim elicits even more concern than the non-specific group of all such risks or victims. This is usually called the “availability heuristic”, when referring to probability and risk choices (Tversky & Kahneman, 1973), and the labels “identifiable victim effect” and “psychic numbing” may be essentially describing a manifestation of the same heuristic in a different context.

Considering the proposition that probabilities in risky choices may be like percentages of people in distributional choices, per the “veil of ignorance” thought experiment, there are several other well-studied biases of risky choice worth examining in the context of distributional choices. That is the focus of this paper.

One bias to be examined is demonstrated in the Allais paradox (Allais, 1953), a violation of Savage’s (1954) “sure thing principle”, which holds that choices should not be affected by consequences that occur in some state of the world regardless of the option chosen. In the distributional analogy, choices should not be affected by a consequence that affects some group of people regardless of the option chosen.
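Stated compactly (the notation is added here for exposition): if two options f and g agree outside an event E, the sure thing principle requires that the preference between them depend only on their outcomes on E,

\[
f(s) = g(s) \ \text{for all } s \notin E
\quad\Longrightarrow\quad
\text{the ranking of } f \text{ and } g \text{ is determined by their outcomes on } E .
\]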

Another is the “certainty effect”: the anomaly that people “underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty” (Kahneman & Tversky, 1979). The certainty effect is one explanation of the Allais paradox. One illustration of the certainty effect involves the “isolation effect” (Kahneman & Tversky, 1979), in which a gamble is presented in two stages. A common consequence is ignored when it is determined in the first stage, isolating the second stage and thus creating apparent certainty for one option in that stage. If risky and distributional choices are similar enough that these biases carry over, the isolation effect should mean that decision makers will disregard individuals whose outcomes do not depend on the choice, when these are identified in the first stage, and focus only on those whose outcomes vary across choices. The certainty effect suggests that people might strongly prefer perfect equality to outcomes that contain even a small degree of inequality. The small chance of getting a bad outcome in a gamble would translate into a small group or individual being singled out in the distribution to receive a bad outcome. A finding along these lines was reported by Ubel et al. (2001), who found that people would rather offer a less effective screening test to 100% of a Medicaid population than a more effective test to 50% of the population.

Another paradox of risky choice that may apply to distributional fairness is the “peanuts effect”, referring to the observation that the usual risk aversion actually turns into risk seeking as prospective rewards become very low (Weber & Chapman, 2005; Markowitz, 1952). While it is not impossible to reconcile the peanuts effect with Expected Utility Theory under some utility functions, the experimental results are at least inconsistent with the way risk aversion is traditionally modeled. Usually the assumption of declining marginal utility would mean that risk aversion would be highest in the domain of low amounts of money and diminish as the amounts increase; and declining marginal utility has, in fact, been found to apply in other distributional choice experiments (e.g., Greene & Baron, 2001). However, if the peanuts effect applies to distributional fairness, inequality aversion would also flip direction and become inequality seeking as prospective rewards become very low, just as risk aversion does.

2  Method

The study presented here was conducted through surveys asking questions about a novel thought experiment, administered to a diverse group of subjects drawn broadly from the general population.

2.1  Subjects

A total of 288 subjects participated in the study by completing a survey. They were solicited to participate at a table in several New York City parks. Subjects came from many different occupations, backgrounds, and education levels. The age of subjects ranged from 18 to 92, with about half of subjects aged between 25 and 55, and about a third between 18 and 25. About 75% of subjects self-identified as White, with the remainder divided approximately equally among African-American, Asian, and Multi-Racial or “Other”. The median household income of subjects was in the category $40,000–$60,000/year, in line with the national median. Gender balance was approximately equal.

2.2  Measures and procedure

Subjects were presented with a survey containing four questions and optional demographic questions about gender, age, ethnicity, and income. These were the instructions:

You are serving as the prize administrator of a game of chance that is to be played by a class of high school students. Being a game of chance, prizes are awarded completely at random, e.g., by a computer generating random numbers. There are 100 students in the senior class of this school, each of whom is college bound. None have yet received any scholarships for college. All 100 members of the class are entered in the game.

In the following questions you will be asked to make choices about what prizes will be given away to the students for playing. You do not know any of these students, the school is not in your area, and none of the students know you or know that you are the person selecting the prizes that they will receive. The game is conducted publicly in an assembly; the students will be aware of what they receive and what others receive. You will not be there while the game is conducted, nor get reports back afterwards.

In the following questions you will be given a choice between two different packages of prize offerings. Each question is completely independent. That is, when you are offered the choice between two prize packages, they are in no way an alternative to previous choice pairs and no one is aware of the choices you made when given previous offerings or that such alternatives existed.

You may use a calculator or any other tool, if you would like. Your job as prize administrator is not an indirect attempt to test your mental math abilities.

Efforts were made to prevent subjects from comparing similar questions and self-consciously reevaluating their choices. That is, it would have been undesirable for subjects to say to themselves, “well, if I answered the first question this way, then I guess I am supposed to answer this other question that way (otherwise I am not logical)”. In particular, no survey contained both a question testing for a bias and its control question. However, a subject may have been in the control group for one bias but in the test group for a different one. The order in which questions appeared was varied as much as possible across surveys so as to minimize any effects of question sequence.1

3  Results

The hypotheses of this study pertain to the biases and effects already found in risky choices and cited above; each hypothesis is that the corresponding effect will also appear in choices about fairness.

H1

The inequality isolation and certainty effects: When losers appear isolated, more people express inequality aversion.

Hypothesis 1 proposes that the Allais paradox applies to distributional choices, the Allais paradox being the amplified risk aversion observed when the certainty effect applies (Allais, 1953). In distributional choices, it would be inequality aversion rather than risk aversion that would appear amplified with certainty. The Allais paradox in distributional choices differs from the identifiable victim effect and psychic numbing. For it to apply, people must be more concerned even with “statistical” (unidentified and yet to be determined) victims, merely because of the size of their group relative to the population. In this study, winners and losers are not identified at all, and they have not even been selected yet (selection being the weak form of identifiability used by Small & Loewenstein, 2003). By contrast, psychic numbing studies have increased the size of the population, showing that doing so lowers concern for a constant-sized pool of victims (Fetherstonhaugh et al., 1997). In this study, even the size of the population remains fixed. Thus, this study investigates whether the paradox exists even in the absence of the identifiable victim effect or psychic numbing.

The Allais paradox is tested with the questions below from the survey. The percentages of people receiving particular rewards are the same as the probabilities used in Allais’ original paper (1953), and both the percentages of people and the nominal sizes of rewards are the same as those used to illustrate the paradox in Kahneman and Tversky (1979).

Prize Package 1. You may award one of the two following packages to the 100 students:
A: 33 students receive a $2500 scholarship for future tuition expenses.
   67 students receive nothing. ($82,500 in total prizes.)
B: 34 students receive a $2400 scholarship for future tuition expenses.
   66 students receive nothing. ($81,600 in total prizes.)

Prize Package 2. You may award one of the two following packages to the 100 students:
A: 33 students receive a $2500 scholarship for future tuition expenses.
   66 students receive a $2400 scholarship for future tuition expenses.
   1 student receives nothing. ($240,900 in total prizes.)
B: All students receive a $2400 scholarship for future tuition expenses. ($240,000 in total prizes.)
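As a quick check of the structure (added here for exposition, not part of the survey), the two packages differ only by a common consequence: 66 students who receive nothing under both options of Package 1 receive $2400 under both options of Package 2. The short Python sketch below encodes the four distributions and verifies the stated totals.

# Sketch (not from the original paper): verify the Allais common-consequence
# structure of Prize Packages 1 and 2. Each distribution maps a prize amount
# (in dollars) to the number of students receiving it.
package_1 = {
    "A": {2500: 33, 0: 67},
    "B": {2400: 34, 0: 66},
}
package_2 = {
    "A": {2500: 33, 2400: 66, 0: 1},
    "B": {2400: 100},
}

def total(dist):
    """Total prize money awarded under a distribution."""
    return sum(amount * count for amount, count in dist.items())

# Stated totals: $82,500 / $81,600 and $240,900 / $240,000.
assert total(package_1["A"]) == 82_500
assert total(package_1["B"]) == 81_600
assert total(package_2["A"]) == 240_900
assert total(package_2["B"]) == 240_000

# Common consequence: moving from Package 1 to Package 2 adds $2400 for the
# same 66 students under option A and under option B alike.
assert total(package_2["A"]) - total(package_1["A"]) == 66 * 2400
assert total(package_2["B"]) - total(package_1["B"]) == 66 * 2400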

For the Allais paradox to be supported, more subjects had to choose option “B” (the inequality-averse choice) in Question 2 (Prize Package 2) than in Question 1 (Prize Package 1). This would be consistent with the result found experimentally for risky choices by Kahneman and Tversky. Here are the results:

Question 1: A, 21 (22%); B, 75 (78%)

Question 2: A, 13 (14%); B, 83 (86%)

p = 0.0273

Thus, the Allais paradox is supported.

A related “inequality certainty effect” was also tested, this time by isolating a single loser through framing the choice in terms of multiple stages. The ultimate distributional result is the same as Prize Package 1 above, but it is framed very differently, which had a significant effect on how it was regarded by the subjects. This is analogous to the “isolation effect” described by Kahneman and Tversky (1979).

Prize Package 3. This game will take place in two stages, both of which are public. In Stage 1, 66 students are eliminated and receive nothing. 34 students move on to Stage 2 of the game. At Stage 2, you may award one of these packages:
A: 33 students receive a $2500 scholarship for future tuition expenses.
   1 student receives nothing. ($82,500 in total prizes.)
B: All 34 students receive a $2400 scholarship. ($81,600 in total prizes.)
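A similar check (again added for exposition) confirms that once the Stage 1 eliminations are folded back in, Prize Package 3 yields exactly the same unconditional distributions as Prize Package 1; only the framing differs.

# Sketch (not from the original paper): fold the Stage 1 eliminations into the
# Stage 2 options of Prize Package 3 and compare with Prize Package 1.
package_1 = {
    "A": {2500: 33, 0: 67},
    "B": {2400: 34, 0: 66},
}
package_3_stage_2 = {
    "A": {2500: 33, 0: 1},
    "B": {2400: 34},
}
eliminated_in_stage_1 = 66  # students who receive nothing regardless of the choice

def unconditional(stage_2_dist, eliminated):
    """Combine the Stage 2 prizes with the students eliminated in Stage 1."""
    dist = dict(stage_2_dist)
    dist[0] = dist.get(0, 0) + eliminated
    return dist

for option in ("A", "B"):
    assert unconditional(package_3_stage_2[option], eliminated_in_stage_1) == package_1[option]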

For the isolation certainty effect to be supported, significantly more subjects had to choose the inequality-averse option when the choice was framed in stages (creating a false impression of isolation) than when it was not; i.e., more subjects had to choose option “B” in Question 3 than in Question 1. Here are the results:

Question 1: A, 21 (22%); B, 75 (78%)

Question 3: A, 9 (9%); B, 87 (91%)

p = 0.0011

Thus the inequality certainty effect is supported: framing the game so that students are eliminated in a separate first stage significantly increased the proportion of inequality-averse choices.

H2

The inequality peanuts effect: When prospective gains are low, fewer people express inequality aversion.

Hypothesis 2 (H2) applies the “peanuts effect” to distributional fairness and inequality aversion. The peanuts effect is the observation that risk aversion turns into risk seeking as the prospective rewards become very low. So, if choices about fairness are like choices about risk, inequality aversion should also turn into inequality seeking as the prospective rewards become very low. That possibility is tested explicitly in this study. The inequality peanuts hypothesis asserts a non-traditional view of inequality aversion, though, in that mere changes in the scale of rewards are not traditionally thought to have large effects on distributional preferences.

The “inequality peanuts effect” was tested by comparing the results from these alternatives in the survey:

Prize Package 4. You may award one of the two following packages to the 100 students:
A: 7 students receive a $10 gift card for textbooks and school supplies.
   93 students receive nothing.
B: All students receive 70 cents off their next textbook or school supplies purchase.

Prize Package 5. You may award one of the two following packages to the 100 students:
A: 7 students receive a $1000 gift card for textbooks, school supplies, and dorm furniture.
   93 students receive nothing.
B: All students receive a $70 gift card for textbooks, school supplies, and dorm furniture.
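A quick calculation (not stated in the survey text) shows that within each package the two options give away the same total value; only the scale differs, by a factor of 100 between Packages 4 and 5.

# Sketch (not from the original paper): totals for Prize Packages 4 and 5,
# expressed in cents to avoid floating-point rounding.
package_4 = {"A": {1_000: 7, 0: 93}, "B": {70: 100}}        # $10.00 gift cards vs. $0.70 off
package_5 = {"A": {100_000: 7, 0: 93}, "B": {7_000: 100}}   # $1000 gift cards vs. $70 gift cards

def total_cents(dist):
    return sum(amount * count for amount, count in dist.items())

# Within each package, options A and B give away the same total value...
assert total_cents(package_4["A"]) == total_cents(package_4["B"]) == 7_000      # $70 in total
assert total_cents(package_5["A"]) == total_cents(package_5["B"]) == 700_000    # $7,000 in total
# ...and Package 5 simply scales Package 4 up by a factor of 100.
assert total_cents(package_5["A"]) == 100 * total_cents(package_4["A"])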

Using Question 5 as the control, for the inequality peanuts hypothesis to be supported, significantly more subjects had to choose the inequality-averse option (B) in Question 5, where the stakes were substantial, than in Question 4, where the stakes were “peanuts”. Here are the results:

Question 4: A, 46 (64%); B, 26 (36%)

Question 5: A, 28 (39%); B, 44 (61%)

p = .0000

Thus, the inequality peanuts effect is strongly supported by the data. The result is comparable in magnitude to tests of the peanuts effect in risky gambles (Weber & Chapman, 2005).2

4  Discussion

First, on the primary question of this study, these results broadly support the notion that some of the biases and anomalies of risky choices also apply to choices about distributional fairness, and hence that some of the same heuristics may be employed as well. This implies a deeper connection between these two types of decisions than has been previously suggested. While previous literature has tied these two types of choices together philosophically, or found correlations in the magnitudes of the two types of aversion across individuals, this study’s findings present evidence that the decisions are actually being made in very similar ways, with similar thought processes and rules of thumb. Notably, this result held even though the premise of the survey emphasized that subjects had no personal tie to the outcomes, and hence faced no personal risk themselves.

This result may have extensions to other areas of study that deal with decisions about fairness and inequality. These results may shed light on social dynamics, including the degree of concern individuals have about economic inequality in society. For example, the Allais paradox for distribution may help explain why Americans have become less concerned about inequality as society has become more economically unequal, with a smaller middle class (Kelly & Enns, 2010). Specifically, it is surprising that in Question 2, inserting a “middle class” (an additional 66 students who receive $2400 under both options, relative to Question 1) not only changes preferences significantly but makes them much more inequality averse. Conversely, in a circumstance that starts with higher inequality and no “middle class”, many people are actually less inequality averse. Thus, with the right starting condition, more inequality in the group actually makes people less concerned about inequality, a result that may have political economic implications if it applies at a societal level.

These results may also be informative for incorporating real decision making effects when evaluating the attributes and parameters of social welfare functions and decisions, as has been done, for example, in Amiel and Cowell (1999), Dolan and Robinson (2001), Dolan and Tsuchiya (2011), and Turpcu et al. (2012). Some of the issues raised by the findings here include that: (a) social welfare functions may be more complex than currently conceived; (b) welfare preferences may be quite heterogeneous across society; (c) framing, reference points, and subdividing of groups can significantly alter beliefs about fairness in a group or society; (d) social welfare functions may exhibit increasing relative inequality aversion; (e) social welfare functions may not be independent of irrelevant individuals, i.e., if groups are split apart they do not become independent; and (f) social welfare functions are not invariant to independent changes of units.

The results here also provide a basis for further investigation of Prospect Theory and related models for choices about inequality. The present findings suggest that neither utilitarian nor Rawlsian objectives will properly describe what most people believe is fair.

As with much of the literature on decision making, these results highlight the malleable nature of choice processes. The findings here show that people will often feel differently about identical outcomes simply because of a minor adjustment in the presentation of how that outcome is reached. That is, frames, values, scales, reference points, and the like all matter in choices about fairness and inequality. A fruitful area for future research may be in better understanding the particular decision processes used in making choices about fairness and inequality with actual process measures. The relief in these findings is that the complexity found here fits well within the broader tapestry of heuristics, biases, and anomalies in decision making. That this is the case is not only a consolation, but also deeply informative about the nature of heuristics, biases, and decision making in general.

References

Allais, M. (1953). Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine. Econometrica, 21, 503–546.

Amiel, Y., & Cowell, F. (1999). Thinking about inequality. New York: Cambridge University Press.

Bar-Hillel, M., & Yaari, M. (1993). Judgments of distributive justice. In B. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.

Baron, J. (1993). Postscript. In B. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.

Carlsson, F., Daruvala, D., & Johansson-Stenman, O. (2005). Are people inequality-averse, or just risk-averse? Economica, 72, 375–396.

Chambers, C. (2012). Inequality aversion and risk aversion. Journal of Economic Theory, 147(4), 1642–1651.

Dolan, P., & Robinson, A. (2001). The measurement of preferences over the distribution of benefits: The importance of the reference point. European Economic Review, 45(9), 1697–1709.

Dolan, P., & Tsuchiya, A. (2011). Determining the parameters in a social welfare function using stated preference data: An application to health. Applied Economics, 43(18), 2241–2250.

Ferrer-i-Carbonell, A., & Ramos, X. (2010). Inequality aversion and risk attitudes. IZA Discussion Paper No. 4703.

Fetherstonhaugh, D., Slovic, P., Johnson, M., & Friedrich, J. (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14, 283–300.

Greene, J., & Baron, J. (2001). Intuitions about declining marginal utility. Journal of Behavioral Decision Making, 14, 243–255.

Harris, R., & Joyce, M. (1980). What’s fair? It depends on how you ask the question. Journal of Personality and Social Psychology, 38, 165–170.

Harsanyi, J. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63, 309–321.

Harsanyi, J. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls’s theory. The American Political Science Review, 69(2), 594–606.

Hoffman, E., & Spitzer, M. (1985). Entitlements, rights, and fairness: An experimental examination of subjects’ concepts of distributive justice. Journal of Legal Studies, 14(2), 259–297.

Jenni, K., & Loewenstein, G. (1997). Explaining the ‘identifiable victim effect.’ Journal of Risk and Uncertainty, 14, 235–257.

Johnson, E., Hershey, J., Meszaros, J., & Kunreuther, H. (1993). Framing, probability distortions, and insurance decisions. Journal of Risk and Uncertainty, 7, 35–51.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

Kelly, N., & Enns, P. (2010). Inequality and the dynamics of public opinion: The self-reinforcing link between economic inequality and mass preferences. American Journal of Political Science, 54(4), 855–870.

Loewenstein, G., Thompson, L., & Bazerman, M. (1989). Social utility and decision making in interpersonal contexts. Journal of Personality and Social Psychology, 57(3), 426–441.

Markowitz, H. (1952). The utility of wealth. Journal of Political Economy, 60(2), 151–158.

Messick, D. (1993). Equality as a decision heuristic. In B. Mellers & J. Baron (Eds.), Psychological perspectives on justice: Theory and applications (pp. 11–31). New York: Cambridge University Press.

Rawls, J. (1957). Justice as fairness. Journal of Philosophy, 54, 653–662.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Savage, L. J. (1954). The foundations of statistics. New York: Wiley.

Slovic, P. (2007). “If I look at the masses I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79–95.

Small, D., & Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26(1), 5–16.

Turpcu, A., Bleichrodt, H., Le, Q., & Doctor, J. (2012). How to aggregate health? Separability and the effect of framing. Medical Decision Making, 22(2), 259–265.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–233.

Ubel, P., Baron, J., & Asch, D. (2001). Preference for equity as a framing effect. Medical Decision Making, 21(3), 180–189.

Vickrey, W. (1945). Measuring marginal utility by reactions to risk. Econometrica, 13, 215–236.

Weber, B., & Chapman, G. (2005). Playing for peanuts: Why is risk seeking more common for low-stakes gambles? Organizational Behavior and Human Decision Processes, 97, 31–46.


*
Finance Department, New York University. Email: zm13@nyu.edu.
I thank Max Bazerman, Christine Jolls, Amartya Sen, Sendhil Mullainathan, Dante Spetter, Michael Norton, Christine Hooker, seminar participants at Harvard Business School, and my students at NYU, for useful comments.

Copyright: © 2015. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

1
One additional item was used, which was not comparable to any other items. Results are reported below.
2
One additional package was presented:

Prize Package 6. All students receive a $5000 scholarship at the beginning simply for playing. In the game you may then dole out one of the following outcomes:

A: 7 students are selected to contribute $2500 of their scholarships to the school’s designated educational charity and are left with $2500.
   93 students are not compelled to give anything and keep their full $5000 scholarship.
B: All students are compelled to give back $175 of their scholarships to the school’s educational charity and are left with $4825.

9 subjects (12%) chose A; 68 (88%) chose B.

