Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 123-136

The Short Maximization Inventory

Michal Ďuriník*   Jakub Procházka#   Hynek Cígler$

We developed the Short Maximization Inventory (SMI) by shortening the Maximization Inventory (Turner, Rim, Betz & Nygren, 2012) from 34 items to 15 items. Using the Item Response Theory framework, we identified and removed the items of the Maximization Inventory that contributed least to the performance of the original scale. The construct validity evidence for the SMI is similar to that for the full MI and is in line with predictions from the literature: the Satisficing subscale is positively related to the indices of well-being, while the Decision Difficulty and Alternative Search subscales are negatively related to well-being. The new scale retains the good psychometric properties of the original scale. Furthermore, its brevity allows researchers to use the scale in studies in which maximization is not the primary focus. Although the SMI, like the original MI, lacks a “High Standards” subscale, we believe that the SMI is a step towards developing a reliable and conceptually sound measure of maximizing that can be used in various research designs.


Keywords: maximization, Maximization Inventory, scale shortening, Short Maximization Inventory

1  Introduction

In economics and other social sciences, humans are often modeled as homo economicus. Homo economicus is an all-knowing individual who is flawless in calculating expected utilities from individual alternatives and choosing the one that provides the highest utility. Homo economicus maximizes. Simon (1955), however, argues that we are often unable to fulfill this goal of perfect optimization. Instead, we satisfice: we choose options that meet a certain threshold of acceptability. When our sub-perfect knowledge and abilities prevent us from opting for the best, we resort to choosing what is “good enough”.

Half a century later, Schwartz, Ward, Monterosso, Lyubomirsky, White and Lehman (2002) revisited Simon’s work and proposed maximizing to be a stable personality trait. According to Schwartz et al., each individual falls somewhere on a continuous scale between being a Maximizer (one who tries to find the best of all alternatives) and a Satisficer (one who is comfortable selecting a “good enough” alternative). Nenkov, Morrin, Ward, Hulland and Schwartz (2008) later proposed that maximizing has three dimensions: Decision Difficulty (the extent to which one experiences difficulty selecting from a range of options), Alternative Search (the tendency to exert effort and time exploring available alternatives) and High Standards (the tendency to hold high standards for oneself and one’s choices). Recently, Cheek and Schwartz (2016) proposed maximizing to have two components: the goal of choosing the best and the strategy of alternative search.

Extensive literature has found that maximizers are more likely than satisficers to report low self-esteem (Schwartz et al., 2002) and less likely to feel happy (Larsen & McKibban, 2008; Polman, 2010; Schwartz et al., 2002) and to be satisfied with their lives (Dahling & Thompson, 2012; Schwartz et al., 2002). Maximization has also been linked to depression (Schwartz et al., 2002), regret (Moyano-Díaz, Martínez-Molina & Ponce, 2014; Schwartz et al., 2002), ruminating on past events (Paivandy, Bullock, Reardon & Kelly, 2008) and other maladaptive traits and behaviors (see Cheek & Schwartz, 2016, for a more extensive list).

The results mentioned above were collected using the 13-item unidimensional Maximization Scale (MS; Schwartz et al., 2002). MS is the first and most widely used maximization measure, but it has received significant criticism. An item response analysis conducted by Rim, Turner, Betz and Nygren (2011), together with previous classical test theory analyses (e.g., Diab, Gillespie & Highhouse, 2008; Giacopelli, Simpson, Dalal, Randolph & Holland, 2013; Nenkov et al., 2008), found MS to have poor psychometric properties. The main points of criticism towards MS are its composite scoring (although analyses indicate it possesses a three-factor structure of Alternative Search, Decision Difficulty, and High Standards); weak internal consistency (Cronbach’s alpha at the lower bound of acceptability for use in research); and the presence of items that are either conceptually too distant from the construct of maximizing (e.g., “I often fantasize about living in ways that are quite different from my actual life”) or focus on overly specific behaviors (e.g., “Renting videos is really difficult. I’m always struggling to pick the best one”). In addition, Rim et al. (2011) note that satisficing in MS is measured only indirectly, through a lack of maximizing, and they argue that a direct measure of satisficing could be a useful contribution.

Since the publication of MS (Schwartz et al., 2002), the scale has been shortened (Nenkov et al., 2008) and modified (Lai, 2010; Weinhardt, Morse, Chimeli & Fisher, 2012), and new scales to measure maximization have been developed (Diab et al., 2008; Misuraca, Faraci, Gangemi, Carmeci & Miceli, 2015; Turner et al., 2012). See Cheek and Schwartz (2016) for a list and discussion of the existing maximization scales. Some authors, notably Diab, Gillespie, and Highhouse (2008) and Giacopelli, Simpson, Dalal, Randolph, and Holland (2013) note that various measures of maximizing yield different correlations with indices of well-being, indicating that the scale selection is likely to influence the results observed in a study.

Following Rim et al.’s (2011) analyses, Turner et al. (2012) developed the 34-item Maximization Inventory (MI). This relatively new scale has been used by a number of researchers since its publication (e.g., Djulbegovic et al., 2014; Miller, 2014; Moyano-Díaz et al., 2014; Patalano, Weizenbaum, Lolli & Anderson, 2015; Rim, 2017; Rogge, 2016; Sharif & Spiller, 2014).

MI is the first scale to measure satisficing directly, as a separate subscale, instead of indirectly through low maximizing scores. Weinhardt et al. (2012) highlight the presence of the Satisficing scale as an important advancement, as “the data do not support the assumption that maximizing and satisficing are on opposite ends of a continuum and therefore developing a satisficing measure is extremely important” (p. 655). Cheek and Schwartz (2016) acknowledge the possible benefits of measuring satisficing directly, but challenge the content validity and face validity of MI’s Satisficing subscale. They suggest that, although the subscale shows internal consistency, some of its items appear to relate to other constructs than satisficing. Two other subscales of MI are Decision Difficulty and Alternative Search.

As reported by Turner et al. (2012), Decision Difficulty was correlated with negative indices of well-being, while Alternative Search was unrelated to them. Meanwhile, Satisficing was associated with adaptive decision making and good mental health indices (Turner et al., 2012). Psychometric properties of MI were shown by its authors to be superior to MS, using both classical test theory and item response analysis. Weinhardt et al. (2012) note the use of general statements in MI as a significant advantage over MS, which uses specific statements.

Another maximization scale, the Maximization Tendency Scale (MTS, Diab et al., 2008), consists mostly of High Standards items. As Weinhardt et al. (2012) propose, MI should be viewed as a measure of maximization behavior, and MTS as a measure of the maximization goal (Cheek & Schwartz, 2016).

A High Standards subscale, which is a standard component of other maximization-related scales, is not present in MI. High Standards (HS) items were present in the original pool of items, and an HS subscale was considered for MI. However, both exploratory and confirmatory factor analysis, together with IRT, failed to provide support for High Standards as a separate factor (Turner et al., 2012). In their recent review of maximization measures (published after our analysis was conducted), Cheek and Schwartz (2016) point out that MI does not contain a High Standards dimension (p. 132). However, later in their review, they argue that “it is not actually having high standards that defines the goal of maximization” (p. 135), as Satisficers can also have high standards.1 Having high standards is essential to maximizing, but is not exclusive to it. Rather than having high standards, Cheek and Schwartz define maximizing through the desire to choose the best option, the “maximum”. We acknowledge that MI (and consequently SMI) lacks a measure of this maximization goal, yet we see MI (especially its Alternative Search subscale) as a useful measure of behavior relevant to the goal of maximizing.

Cheek and Schwartz (2016) propose a two-component model of maximization, distinguishing between maximization goal (choose the best) and maximization strategy (extensive alternative search).2 For maximization goal measurements, they recommend Dalal, Diab, Zhu and Hwang’s (2015) 7-Item Maximization Tendency Scale, as it has good psychometric properties and focuses on the goal of choosing the best. For maximization strategy measurements, Cheek and Schwartz tentatively recommend the use of MI’s Alternative Search subscale. However, they encourage further refinement of this measure by future researchers. In this paper, we contribute to such refinement.

Turner et al. (2012) report satisfactory psychometric properties of the overall MI model with three subscales (Cronbach’s alphas ≥ 0.73; RMSEA = 0.063). Upon closer inspection, however, some MI items display low factor loadings: Turner et al. (2012) report λ < 0.3 for items 5, 7 and 9 of the Satisficing scale and λ ≤ 0.4 for 13 of the 34 items. Applying classical test theory criteria to MI using the data reported by Turner et al. (2012) is a challenging task, as some important statistics are absent (e.g., the CFA chi-squared and CFI/TLI statistics). Item response theory (IRT) analysis can provide more insight into individual item performance, and Turner et al. (2012) present some IRT analysis results in their report. The item discrimination parameter for item 24 is 0.59 (according to Baker, 2001, discriminability lower than 0.65 is considered low). For items 5, 7, 9, 15, 17 and 21, item discrimination parameters are lower than 0.9. In total, Turner et al. (2012) report item discrimination parameters lower than 1.0 for 12 items of MI. Items low in this parameter have flatter item information curves and, relative to items high in this parameter, contribute poorly to the total test information. They still add some information and thus lower the errors of latent trait estimates; at the same time, however, they also influence (usually increase) the variance of estimated latent traits and can thus decrease the test reliability.3 Additionally, Moyano-Díaz et al. (2014), who used (a Spanish translation of) MI in their research, reported poor performance of the Satisficing subscale. The internal consistency of the subscale was low (Cronbach’s alpha = 0.64) and the authors suggested a two-factor solution for this subscale. They also noted that the meanings of some Satisficing items overlap with other dimensions of MI.

The three subscales of MI contain a total of 34 items. While a scale this large is perfectly acceptable for studies in which maximizing is the focal construct, its rather large size might discourage researchers from using MI as a supplementary method. When researchers compose a battery of scales to measure several different constructs, they face a trade-off between brevity and better psychometric properties. We believe that one of the reasons for the Maximization Scale’s (Schwartz et al., 2002) popularity is its conciseness and ease of use.

Based on these indices, we conjecture that an appropriate shortening of the Maximization Inventory might produce a scale that is concise, creates a much smaller burden on participants and provides results which are as reliable and valid as those from the original scale. Furthermore, developing a short version of MI is an opportunity to flag and remove problematic items, should any be identified, resulting in higher-quality measurement per item.

Turner et al. (2012) conducted multiple studies on MI, but all of them used samples consisting of undergraduate students enrolled in a psychology course. Such samples differ from the general population in terms of age distribution, intelligence, and academic achievement. Moreover, some items may display lower discriminability because of the lower response variability in a homogeneous sample. Examining MI’s psychometric properties with a different and more heterogeneous sample is thus desirable. This paper contributes by administering MI to a diverse sample of subjects (aged 18 to 83, with education levels ranging from elementary to postgraduate).

In addition, by recruiting subjects from the Czech Republic, this paper expands maximization research to a new cultural environment. So far, maximization has been studied in the U.S. (e.g., Rim et al., 2011; Schwartz et al., 2002; Turner et al., 2012), Italy (Misuraca et al., 2015), Norway (Lai, 2010), the Netherlands, Belgium, China (Roets, Schwartz & Guan, 2012) and Chile (Moyano-Díaz et al., 2014). Roets et al. (2012) found in their cross-cultural study that maximizers in the U.S. and Western Europe report lower well-being than satisficers. In China, a collectivist culture with a strong long-term orientation (Hofstede, 2016) where choice is not as abundant as in the U.S. and Western Europe, the relationship between maximization and well-being was insignificant. Compared to the U.S. (Hofstede, 2016), Czech culture is higher in uncertainty avoidance and long-term orientation and is lower in individualism. These differences, together with the fact that the Czech nation faced limited (both consumer and political) choice opportunities under the communist regime, might be reflected in Czechs’ decision-making and well-being correlates. Following previous research on maximizing, we use the well-being indices of Happiness (Lyubomirsky & Lepper, 1999), Optimism (Scheier, Carver & Bridges, 1994), Self-Efficacy (Schwarzer & Jerusalem, 1995) and Regret (Schwartz et al., 2002). Although it is not our main motivation, this research provides the opportunity to investigate whether maximizing has the same correlates and factor structure in the Czech sample as in the U.S. sample. The primary focus of the correlation analysis is to provide evidence for construct validity of the Short Maximization Inventory.

In this paper, we first analyze the Maximization Inventory as administered to the Czech sample. We replicate the vast majority of MI’s psychometric properties and well-being indices correlations reported in Turner et al. (2012). We also report more complete statistics for individual items of MI. We replicate the three-factor structure proposed in Turner et al. (2012); however, using classical test theory and item response theory, we find multiple items with sub-standard properties. We proceed to develop a short version of MI, following the goal of creating a concise scale with solid psychometric properties. Our main criteria were the overall fit of the model and exclusion of items that did not substantially contribute towards the model’s good properties. Using a different set of participants, we then demonstrate the favorable psychometric properties of the new scale.

By removing poorly performing items and refining both the Alternative Search and Satisficing subscales, we partially address the suggestions offered by Cheek and Schwartz (2016) and Weinhardt et al. (2012). The resulting Short Maximization Inventory is a compact yet powerful measurement tool that might benefit the whole field, as it facilitates further research on maximizing.

2  Part 1: Development of the Short Maximization Inventory

The purpose of this study was to assess the psychometric properties of the Maximization Inventory (Turner et al., 2012) and to develop a shortened version of MI.

2.1  Method

2.1.1  Scale translation

With the permission of one of its authors, the Maximization Inventory was first translated into Czech following guidelines proposed by Beaton, Bombardier, Guillemin and Ferraz (2000). The process included three independent translations, back-translations and an expert committee assessment. As an additional step, two think-aloud cognitive interviews and two concurrent verbal probing cognitive interviews (Willis, 1999) were conducted to ensure that the items were clear and easy to comprehend. Finally, 12 Masaryk University students participated in the online pilot testing of the translated scale and reported no difficulties understanding and responding to the items. The translation and adaptation of the scale into Czech is described in Ďuriník (2016).

2.1.2  Participants

A total of 902 adult individuals participated in this study. Originally, 913 responses were collected. After screening the raw data for suspicious answer patterns (e.g., 1-2-3-4-5-1-2-3-4-5), too-short response times (less than one second per item) and invalid responses (e.g., a reported age of 11,000 years), the responses from 11 participants were removed.
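As a rough illustration of this screening step, a minimal R sketch follows (the data frame, column names, pattern check and age cut-offs are assumptions for illustration, not the authors' actual code):

    # Illustrative screening sketch; "raw" is the assumed data frame of 913 responses.
    items <- paste0("mi_", 1:34)                        # assumed item column names

    repeating <- apply(raw[, items], 1, function(x)     # e.g., 1-2-3-4-5-1-2-3-4-5 ...
      all(x == rep_len(1:5, length(x))))
    too_fast  <- raw$duration_sec / length(items) < 1   # less than one second per item
    bad_age   <- raw$age < 18 | raw$age > 110           # e.g., a reported age of 11,000 years

    clean <- raw[!(repeating | too_fast | bad_age), ]   # 902 respondents remain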

A total of 77 Masaryk University students completed the scale after being invited to do so via e-mail. Another 835 members of the general public were recruited via www.vyplnto.cz, an online platform for survey participant recruitment. Of the total sample, 29% were male and 71% were female. The mean age was 35.4 (SD = 13.62). Each respondent participated voluntarily, and no reward was promised or given for participation.

We randomly assigned approximately two-thirds of the respondents (see the online supplement for the code) to Data Set 1 (n = 603; 66.9 %) for exploratory purposes; the rest of the respondents formed Data Set 2 (n = 299; 33.1%).
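A sketch of the random split (the seed and object names are illustrative; the authors' actual code is available in the online supplement):

    set.seed(123)                                       # illustrative seed
    n   <- nrow(clean)                                  # 902 respondents
    idx <- sample(seq_len(n), size = round(2 / 3 * n))  # approximately two-thirds (reported split: 603 / 299)

    data_set_1 <- clean[idx, ]
    data_set_2 <- clean[-idx, ]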

2.1.3  Procedures

Participants rated their degree of agreement with 34 items of the Maximization Inventory on a standard 5-point scale with anchors (1 = Strongly Disagree; 5 = Strongly Agree). Next, four other scales were administered (see Part 2 of this paper). With Data Set 1, we assessed the performance of the 34-item Maximization Inventory and developed the shortened version. With Data Set 2, we verified the factor structure of the shortened scale.

2.1.4  Data Analysis

All the analyses were carried out using R environment (R Core Team, 2017). We worked under Item Response Theory parametrization – as the measurement model, we used the confirmatory multidimensional Graded Response Model fitted using the mirt package (Chalmers, 2012). Model fit was evaluated using M2* statistics (Maydeu-Olivares & Joe, 2006) with collapsing over response categories (Cai & Hansen, 2013). This allowed us to see if the proposed model (three dimensions with each item loaded on just one factor) describes the observed data sufficiently well.4 We inspected the standardized residual matrices and p-values for local dependencies using the LDG2 statistic (Chen & Thissen, 1997). LDG2 is based on a bivariate table with predicted and observed item response frequencies. The significant p-value (e.g., below 0.05) associated with the LDG2 statistic suggests a local dependence of two items that is not predicted by the IRT model. As the LDG2 statistic is chi-squared distributed, the effect size of residual relations can be expressed using Cramer’s V as in other chi-squared tests.
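As an illustration of this measurement model, a minimal mirt sketch is shown below (item column names and estimation defaults are assumptions; the subscale assignment of items follows Turner et al., 2012):

    library(mirt)

    # Confirmatory three-dimensional graded response model: each item loads on
    # exactly one factor; the factors are allowed to correlate.
    model_mi <- mirt.model("
      S   = 1-10        # Satisficing items
      DD  = 11-22       # Decision Difficulty items
      AS  = 23-34       # Alternative Search items
      COV = S*DD, S*AS, DD*AS
    ")

    fit_mi <- mirt(data_set_1[, paste0("mi_", 1:34)], model_mi, itemtype = "graded")

    M2(fit_mi)   # limited-information fit: M2-type statistic, RMSEA, TLI, SRMSR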

The signed chi-squared test (S-X2; Orlando & Thissen, 2000), which is also based on the difference between observed and predicted response frequencies, was used as an item fit statistic. The significant values of S-X2 suggest that the observed responses to a particular item do not comply with the IRT model. Reliability was estimated using latent trait estimates and their associated standard errors; this is the reliability of latent trait estimated using the IRT model. We also used Cronbach’s alpha for item sums under classical test theory, as in the original study (Turner et al., 2012).
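A sketch of the corresponding item fit and reliability calls, continuing from the model fitted above (an illustration of the approach, not the authors' exact script):

    itemfit(fit_mi)                               # S-X2 is the default item fit statistic

    fs <- fscores(fit_mi, full.scores.SE = TRUE)  # latent trait estimates and their SEs
    empirical_rxx(fs)                             # IRT reliability of the trait estimates

    library(psych)
    alpha(data_set_1[, paste0("mi_", 1:10)])      # Cronbach's alpha, e.g., for the Satisficing items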

2.2  Results

First, we used Data Set 1 to fit the multidimensional graded response model. The model fitted the data well, M2* = 1303.5, df = 422, p < 0.001; RMSEA = 0.059 with 95% CI [0.055, 0.063], TLI = 0.932, SRMSR = 0.083.5 Item discrimination parameters and item fit are shown in Table 1.

Reliability estimates using Cronbach’s alpha were similar to the original study by Turner et al. (2012; reported as α original in Table 1). IRT reliability estimates were higher for all three subscales, as reported in Table 1.


Table 1: Maximization Inventory items – results of Item Response Theory analysis. N=603.
 
Item   Survived into SMI   aS      aDD     aAS     d1      d2      d3      d4      S-X2    df    p       rxx     α (α original)
1      Yes                 1.484   0       0       5.40    3.28    1.39   −1.45   112.0   115   0.562   0.774   0.711 (0.73)
2      No                  1.204   0       0       5.74    4.32    2.80    0.61    82.5    80   0.401
3      Yes                 1.478   0       0       5.06    3.47    1.20   −1.09   115.8   117   0.515
4      Yes                 1.547   0       0       4.16    2.14    0.25   −2.45   135.8   143   0.653
5      No                  1.018   0       0       4.84    2.78    1.32   −0.63   127.6   131   0.568
6      Yes                 1.372   0       0       5.80    3.90    2.16   −0.35   106.8    97   0.233
7      No                  0.820   0       0       4.32    2.48    0.84   −0.85   128.7   141   0.763
8      No                  0.561   0       0       4.75    3.28    2.19    0.48    81.7    87   0.640
9      Yes                 0.824   0       0       2.03    0.25   −1.17   −2.73   131.3   137   0.622
10     No                  0.129   0       0       4.61    2.92    1.56   −0.15   106.8    99   0.278
11     No                  0       2.053   0       1.70   −0.39   −1.71   −3.16   124.8   148   0.917   0.914   0.891 (0.85)
12     Yes                 0       2.722   0       4.00    1.22   −0.42   −2.98   131.3   135   0.575
13     Yes                 0       2.477   0       2.69    0.73   −0.63   −2.64   179.2   147   0.036
14     Yes                 0       1.868   0       3.08    1.48    0.14   −1.74   152.8   147   0.354
15     No                  0       1.725   0       1.47   −0.49   −1.90   −3.55   123.3   116   0.304
16     No                  0       1.840   0       1.93   −0.29   −1.74   −3.62   146.8   138   0.288
17     Yes                 0       1.386   0       1.81    0.26   −1.22   −2.62   147.5   156   0.674
18     No                  0       1.525   0       2.33    0.17   −1.55   −2.80   138.5   148   0.700
19     No                  0       1.609   0       3.63    1.62    0.28   −1.67   121.5   131   0.712
20     No                  0       1.400   0       2.48    0.98   −0.13   −2.02   156.4   155   0.453
21     No                  0       0.477   0       2.88    0.98   −0.56   −1.75   121.2   138   0.844
22     Yes                 0       0.956   0       2.73    1.18   −0.20   −1.78   162.2   150   0.235
23     No                  0       0       0.758   3.40    1.69    0.35   −1.33   155.5   140   0.175   0.916   0.881 (0.83)
24     Yes                 0       0       0.899   1.85    0.45   −0.43   −1.84   179.9   168   0.251
25     No                  0       0       2.610   3.45    1.17   −0.71   −3.78   139.0   125   0.185
26     Yes                 0       0       2.987   4.43    2.11    0.07   −3.47   127.7   124   0.392
27     Yes                 0       0       2.073   3.10    1.15   −0.46   −2.57    14.7   134   0.329
28     No                  0       0       2.249   3.84    1.71   −0.25   −3.07   157.2   129   0.046
29     Yes                 0       0       1.974   2.55    0.75   −0.66   −2.96    13.6   138   0.661
30     No                  0       0       1.486   1.29   −0.02   −0.94   −2.47   162.7   153   0.280
31     No                  0       0       1.264   3.91    2.14    0.46   −1.96    15.1   122   0.043
32     Yes                 0       0       1.030   3.23    1.60    0.54   −1.22    98.5   124   0.955
33     No                  0       0       1.194   3.79    2.28    0.54   −1.18    14.2   121   0.112
34     No                  0       0       1.184   2.39    0.89   −0.81   −2.57   134.6   148   0.778
Note: Factor correlations: S–DD r = −0.312, p < .001; S–AS r = 0.119, p < .01; DD–AS r = 0.223, p < .001.
rxx is IRT latent trait estimation reliability (Kim & Feldt, 2010).

Although the model had a good fit, there was a substantial number of locally dependent items. The LDG2 test was significant at p < 0.01 for 239 item pairs (43%), of which 146 pairs (26%) were more dependent than one could expect based on the model. This suggests that the responses to many pairs of items are not driven only by the three measured dimensions, but to a small extent also by another hidden factor, such as item wording or other unmeasured traits.

Items 2, 6, 8 and 10 had high skewness, kurtosis and high mean raw scores (above 4 on a scale ranging from 1 to 5), which led to very high thresholds (especially the d4 threshold between responses 4 and 5). All of these items are general statements about the nature of life and decision making7 and do not refer to specific decision-making situations in life. Judging by their content, it is easy to understand why most respondents chose extreme values of 4 or 5 when responding to these items. We flagged these as potentially problematic; with these items, respondents tend to select the highest values available, and the items thus have low discrimination ability or low item information.

Items 13, 28 and 31 did not fit an IRT model at p < 0.05; however, the actual discrepancies were small. Items 7–10 and 21–24 had small discrimination parameters (below 1.0). The discrimination parameter of item 10 was not significantly different from 0 (95% CI = [−0.055, 0.313]). This means that this item does not significantly discriminate between people with a higher and lower level of the satisficing trait.

A residual matrix inspection revealed the tendency of item 10 (Satisficing subscale) to have a high residual correlation with items from the Decision Difficulty factor (Cramer’s V > 0.12, Md = 0.14), as well as the high residual correlation of item 5 (Satisficing subscale) with items from the Alternative Search factor (Cramer’s V > 0.10, Md = 0.14). As the first part of item 5 is essentially a definition of alternative search,8 this was not surprising. We also found high correlated residuals between items 25 and 26 (Cramer’s V = 0.26) and between items 23 and 5 (Cramer’s V = 0.24). These pairs of items are essentially re-wordings of each other and artificially inflate the measured model fit. Regardless of the calculated psychometric properties, we consider it redundant to include two items that ask the same question. Other major inter-item correlations not explained by the factors were between items 29 and 30 (V = 0.20), 23 and 31 (V = 0.19), 16 and 18 (V = 0.15), 7 and 8 (V = 0.16), and 15 and 16 (V = 0.15).
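A sketch of how the local dependence statistics and residual effect sizes can be extracted from the fitted model (illustrative; see the mirt documentation for the exact layout of the returned matrix):

    ld <- residuals(fit_mi, type = "LDG2")  # pairwise LD-G2 statistics
    # One triangle of the returned matrix holds the (signed) Cramer's V effect
    # sizes, the other the G2 values; inspect, e.g., the first Satisficing items:
    round(ld[1:6, 1:6], 2)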


Table 2: The Short Maximization Inventory model fit statistics.
Model                 Invariance constraints   χ2      df    RMSEA [95% CI]          TLI     BIC        Δχ2    Δdf   p
configural            —                        154.9    84   0.031 [0.023, 0.038]    0.974   37330.1
metric                slopes                   186.9    99   0.031 [0.024, 0.038]    0.972   37243.0    14.9    15   0.456
parallel              slopes + intercepts      244.4   159   0.024 [0.018, 0.030]    0.983   36898.3    63.6    60   0.351
one-group analysis    —                        110.2    42   0.042 [0.033, 0.052]    0.978   36881.1
Note: Δχ2, Δdf and p compare each model with the preceding, less constrained model.


Table 3: Short Maximization Inventory items.
No.   (No. in MI)   Satisficing items
1     (1)    I usually try to find a couple of good options and then choose between them.
2     (3)    In life, I try to make the most of whatever path I take.
3     (4)    There are usually several good options in a decision situation.
4     (6)    Good things can happen even when things don’t go right at first.
5     (9)    I know that if I make a mistake in a decision that I can go “back to the drawing board.”
No.   (No. in MI)   Decision Difficulty items
6     (12)   I am usually worried about making a wrong decision.
7     (13)   I often wonder why decisions can’t be more easy.
8     (14)   I often put off making a difficult decision until a deadline.
9     (17)   The hardest part of making a decision is knowing I will have to leave the item I didn’t choose behind.
10    (22)   I do not agonize over decisions.
No.   (No. in MI)   Alternative Search items
11    (24)   I take time to read the whole menu when dining out.
12    (26)   I usually continue to search for an item until it reaches my expectations.
13    (27)   When shopping, I plan on spending a lot of time looking for something.
14    (29)   I find myself going to many different stores before finding the thing I want.
15    (32)   When I see something that I want, I always try to find the best deal before purchasing it.

The results presented in this section provide strong support for our original conjecture: the Maximization Inventory could benefit from having its poorly performing items removed. The newly developed Short Maximization Inventory has the potential to display psychometric properties at least as good as those of the original MI, with the added benefit of greater conciseness.

We removed the problematic items and kept the best items in terms of discrimination ability, factor loading, and correlated residuals. Based on the criteria of very low discrimination ability, we removed items 7–8, 10, 21 and 23. Item 2 was excluded based on its low difficulty and thus small item information. Other items were excluded based on dual loadings, sometimes combined with small discrimination parameters.

This led to a final solution with three factors of five items each. This shortened scale fit the data from Data Set 1 very well, M2* = 85.6, df = 42, p < 0.001; RMSEA = 0.042 with 95% CI [0.029, 0.054], TLI = 0.979, SRMSR = 0.047. We cross-validated this model on Data Set 2, where the fit was excellent as well, M2* = 69.5, df = 42, p = 0.005; RMSEA = 0.047 with 95% CI [0.026, 0.067], TLI = 0.971, SRMSR = 0.061.
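Continuing the sketch above, refitting the 15-item model and cross-validating it might look as follows (the item selection corresponds to Table 3; column names are assumed):

    smi_items <- paste0("mi_", c(1, 3, 4, 6, 9,        # Satisficing
                                 12, 13, 14, 17, 22,   # Decision Difficulty
                                 24, 26, 27, 29, 32))  # Alternative Search

    model_smi <- mirt.model("
      S   = 1-5
      DD  = 6-10
      AS  = 11-15
      COV = S*DD, S*AS, DD*AS
    ")

    fit_smi_1 <- mirt(data_set_1[, smi_items], model_smi, itemtype = "graded")
    fit_smi_2 <- mirt(data_set_2[, smi_items], model_smi, itemtype = "graded")
    M2(fit_smi_1)   # fit in the development sample
    M2(fit_smi_2)   # fit in the cross-validation sample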

We then performed a series of multigroup IRT analyses to test scale invariance. The results in Table 2 indicate that there are no significant differences between Data Set 1 and Data Set 2. Constraining the parameters to be equal across the two data sets did not significantly worsen the model. Furthermore, the more constrained model had better fit statistics (BIC10, TLI, RMSEA) than the less constrained models. Therefore, we used all the data from Data Sets 1 and 2 for subsequent analyses. We refer to this scale as the Short Maximization Inventory (SMI). A list of all 15 items is presented in Table 3.
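A sketch of the multigroup invariance sequence summarized in Table 2 (the invariance keywords follow mirt's conventions; this illustrates the approach rather than reproducing the authors' exact script):

    grp  <- rep(c("set1", "set2"), times = c(nrow(data_set_1), nrow(data_set_2)))
    both <- rbind(data_set_1[, smi_items], data_set_2[, smi_items])

    configural <- multipleGroup(both, model_smi, group = grp, itemtype = "graded")
    metric     <- multipleGroup(both, model_smi, group = grp, itemtype = "graded",
                                invariance = "slopes")
    parallel   <- multipleGroup(both, model_smi, group = grp, itemtype = "graded",
                                invariance = c("slopes", "intercepts"))

    anova(configural, metric)   # does constraining the slopes worsen the fit?
    anova(metric, parallel)     # does additionally constraining the intercepts worsen it?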

All SMI items, except for item 11, have discrimination parameters greater than 1. The IRT model parameters of the Short Maximization Inventory are presented in Table 4. Three items (2, 6 and 10) differ from the Graded Response Model significantly at p < 0.05; however, the total model fit is very good, as one can see in Table 2. Furthermore, the shortened version of the scale no longer displays significant correlation between the Alternative Search and Satisficing subscales. The construct validity is the same as for the full inventory. Correlations of latent trait estimates for the whole sample (merged Data Sets 1 and 2) between the Maximization Inventory and Short Maximization Inventory are quite high (Satisficing r = 0.944, Decision Difficulty r = 0.937 and Alternative Search r = 0.950, all p < 0.001).

We also estimated the reliability for these three scales using IRT reliability based on latent trait estimates and their associated errors of estimation, and using conventional Cronbach’s alpha to ensure comparability with previous research. Furthermore, we also used Raykov’s omega from ordinal confirmatory factor analysis,11 which provided similar results to the multidimensional IRT. Raykov’s omegas can be understood as the squared correlation between the sum of items and the latent trait. Researchers who wish to use IRT latent trait scores should use rxx estimations from Table 4. Researchers who wish to work with raw scores (e.g., sums or means of items) should use Raykov’s omegas (ω coefficients in Table 4), because Cronbach’s alpha slightly underestimates the true reliability, as it assumes tau-equivalence and an interval scale for the items.
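A sketch of these reliability estimates on the merged sample (lavaan/semTools calls are shown for the ordinal CFA omega; object and column names are assumptions):

    fit_smi_all <- mirt(both, model_smi, itemtype = "graded")            # merged sample, N = 902
    empirical_rxx(fscores(fit_smi_all, full.scores.SE = TRUE))           # rxx per latent trait

    library(lavaan)
    library(semTools)
    cfa_smi <- "
      S  =~ mi_1  + mi_3  + mi_4  + mi_6  + mi_9
      DD =~ mi_12 + mi_13 + mi_14 + mi_17 + mi_22
      AS =~ mi_24 + mi_26 + mi_27 + mi_29 + mi_32
    "
    fit_cfa <- cfa(cfa_smi, data = both, ordered = smi_items, estimator = "WLSMV")
    reliability(fit_cfa)   # omega (and alpha) for each factor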


Table 4: Short Maximization Inventory parameters of the multidimensional IRT model: discrimination parameters, thresholds, item fit, and reliabilities. N=902.
Item   aS      aDD     aAS     d1      d2      d3      d4      S-X2    df    p       Reliability
1      1.346   0       0       5.28    3.18    1.25   −1.49    89.4    81   0.245   rxx = 0.746, ω = 0.717, α = 0.695
2      1.337   0       0       5.11    3.38    1.21   −0.95   105.1    80   0.031
3      2.181   0       0       5.08    2.67    0.29   −2.90    97.1    98   0.507
4      1.146   0       0       5.53    3.74    2.05   −0.28    78.0    73   0.324
5      1.075   0       0       2.29    0.29   −1.32   −2.97   117.5   108   0.251
6      0       2.985   0       4.11    1.16   −0.49   −3.16   123.2    94   0.023   rxx = 0.855, ω = 0.794, α = 0.782
7      0       3.067   0       3.22    0.87   −0.76   −3.22    11.0    92   0.097
8      0       1.687   0       2.89    1.47    0.27   −1.66    76.9    95   0.913
9      0       1.227   0       1.70    0.25   −1.15   −2.56   106.9   105   0.429
10     0       1.055   0       2.82    1.22   −0.12   −1.85    13.0   100   0.024
11     0       0       0.939   1.82    0.50   −0.37   −1.89   115.7   101   0.151   rxx = 0.837, ω = 0.776, α = 0.773
12     0       0       2.489   4.11    1.98    0.29   −2.84    95.2    83   0.169
13     0       0       2.871   3.94    1.52   −0.35   −2.98   101.7    81   0.060
14     0       0       2.023   2.62    0.86   −0.63   −2.76     8.5    84   0.588
15     0       0       1.012   3.27    1.67    0.60   −1.20    91.2    97   0.647
Note: Factor correlations: S–DD r = −0.397, p < .001; S–AS r = 0.059; DD–AS r = 0.249, p < .001. rxx = IRT latent trait estimation reliability; ω = Raykov’s omega; α = Cronbach’s alpha.

3  Part 2: Correlation study

The purpose of these analyses was to provide evidence about the construct validity of the Short Maximization Inventory (SMI). We correlated the SMI scales with measures of constructs that should be, according to the theory, related to maximization dimensions. We also correlated the SMI scales with full MI scales to show that the short scales provide results similar to those of the original scales.

3.1  Method

3.1.1  Measures, Participants, Procedures

Maximization. The Maximization Inventory (Turner et al., 2012) consists of three subscales (number of items): Satisficing (10), Decision Difficulty (12) and Alternative Search (12). The Short Maximization Inventory, presented in Table 3, consists of three subscales (number of items): Satisficing (5), Decision Difficulty (5) and Alternative Search (5). SMI responses were obtained by extracting responses to the respective items of the MI.

Self-Efficacy. To assess self-efficacy, we used the General Self-Efficacy Scale (Schwarzer & Jerusalem, 1995) translated and validated by Křivohlavý, Schwarzer and Jerusalem (1993). This 10-item self-reported scale is intended to measure a general sense of perceived self-efficacy. Responses are indicated on a 4-point scale ranging from “not true at all” to “exactly true”. Schwarzer and Jerusalem report Cronbach’s alphas ranging from 0.76 to 0.9 over samples from 23 nations. In our sample, the Cronbach’s alpha was 0.9.

Happiness. To measure subjective happiness, we used the Subjective Happiness Scale (Lyubomirsky & Lepper, 1999) in a translation developed by Kresanová (2015). The scale consists of 4 items, with the fourth item reverse-scored. Responses are obtained on 7-point scales with anchors. Lyubomirsky and Lepper report Cronbach’s alphas ranging from 0.79 to 0.94 across 14 samples. In our sample, the Cronbach’s alpha was 0.83.

Optimism. To measure optimism, we used the Life Orientation Test – Revised (Scheier et al., 1994) as translated by Bek (2007). This ten-item scale contains four filler items that are not scored and six scored items, of which three are reverse-scored. Respondents indicated their responses on a five-point scale ranging from “Strongly agree” to “Strongly disagree”. Scheier et al. report a Cronbach’s alpha of 0.78; in our sample, Cronbach’s alpha was 0.86.

Regret. To measure regret, we used the Regret Scale (Schwartz et al., 2002). This scale consists of five items, one of which is reverse-scored. Participants respond to items on a 7-point scale (1 = completely disagree, 7 = completely agree). We developed our own translation of the scale via independent translations, back-translation, and expert committee discussion. Schwartz et al. report a Cronbach’s alpha of 0.67; in our sample, it was 0.77.

A total of 902 participants were recruited online. The sample is the same sample used in Part 1. After taking the Maximization Inventory, participants were administered the Life Orientation Test — Revised, General Self-Efficacy Scale, Subjective Happiness Scale and Regret Scale.

3.1.2  Data analysis

First, we analyzed raw scores, defined as the sums of items. Then, we performed ordinal confirmatory factor analysis (CFA).12 We performed unidimensional CFA for each scale of the Maximization Inventory to check the structure of each scale. Then we performed a multidimensional CFA for all these scales and for the full and the shortened version of MI. Reliabilities were estimated using Revelle’s omega. This measure outperforms Cronbach’s alpha as it does not assume tau-equivalent items (the same factor loadings for all items).

We used standard fit statistics with conventional cut-off values.
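A sketch of one such analysis for the Subjective Happiness Scale (item names and the data frame full_data, holding all 902 responses, are assumptions; the same pattern applies to the other scales):

    library(lavaan)
    library(psych)

    shs_model <- "SHS =~ shs_1 + shs_2 + shs_3 + shs_4"
    fit_shs   <- cfa(shs_model, data = full_data,
                     ordered = paste0("shs_", 1:4), estimator = "WLSMV")
    fitMeasures(fit_shs, c("chisq", "df", "tli", "rmsea", "srmr"))

    omega(full_data[, paste0("shs_", 1:4)], nfactors = 1)   # Revelle's omega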

3.2  Results

3.2.1  Raw scores

We investigated the relationship of the Short Maximization Inventory’s subscales to the original full-length subscales as well as to other measures. To do so, we summed responses for each (sub)scale for each participant and then used Pearson correlations to assess the relationship intensity. The descriptive statistics are reported in Table 5.
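A sketch of the raw-score computation and the correlations reported below (column names are assumed; well-being scores are the summed scales described above):

    smi_s  <- rowSums(full_data[, paste0("mi_", c(1, 3, 4, 6, 9))])
    smi_dd <- rowSums(full_data[, paste0("mi_", c(12, 13, 14, 17, 22))])
    smi_as <- rowSums(full_data[, paste0("mi_", c(24, 26, 27, 29, 32))])

    cor.test(smi_s, full_data$happiness)   # e.g., r = 0.53 reported below
    round(cor(cbind(smi_s, smi_dd, smi_as,
                    full_data[, c("happiness", "optimism", "self_efficacy", "regret")])), 2)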


Table 5: Descriptive statistics for scales used in Study 2.
                          N     Min   Max   Mean    S.D.
SMI-Satisficing           902    5    25    18.35   3.216
SMI-Decision difficulty   902    5    25    15.53   4.732
SMI-Alternative search    902    5    25    16.30   4.620
Happiness                 902    4    28    18.60   5.138
Optimism                  902    6    30    20.30   5.409
Self-Efficacy             902   10    40    29.10   5.768
Regret                    902    5    36    18.11   6.588

First, we examined the correlations of the original Maximization Inventory subscales with their corresponding shortened versions. In all three cases, the correlations are strong: Satisficing (r = 0.87, p < 0.01), Decision Difficulty (r = 0.93, p < 0.01) and Alternative Search (r = 0.94, p < 0.01). This indicates that the shortened MI scales measure the same constructs as the full scales.

The Satisficing scale of SMI was positively correlated with the indices of well-being: happiness (r = 0.53, p < 0.01), optimism (r = 0.56, p < 0.01) and self-efficacy (r = 0.61, p < 0.01). These findings are in line with the relationships reported by Turner et al. (2012), as well as with Schwartz’s (2007) and Schwartz et al.’s (2002) proposed relation of satisficing to individual well-being. Additionally, satisficing was moderately negatively related to regret (r = −0.26, p < 0.01).

The Decision Difficulty scale was negatively related to all three well-being indices: happiness (r = −0.45, p < 0.01), optimism (r = −0.44, p < 0.01) and self-efficacy (r = −0.51, p < 0.01). Decision difficulty was positively related to regret (r = 0.57, p < 0.01). Turner et al. (2012) and Rim et al. (2011) report no significant relationship between decision difficulty and happiness. This inconsistency cannot be explained by the shortening of the scale (as the full-sized Decision Difficulty subscale administered to our sample was also negatively correlated with happiness, r = −0.44, p < 0.01). We argue that the different correlations are related to the nature of the samples used (U.S. vs. Czech sample). This argument is further developed in the discussion part of this paper.

We found the Alternative Search scale of SMI to be weakly negatively related to happiness (r = −0.11, p < 0.01) and optimism (r = −0.14, p < 0.01), unrelated to self-efficacy (r = 0.03, p > 0.05) and weakly positively related to regret (r = 0.21, p < 0.01).

Table 6 provides correlations of SMI scales with each other, as well as with well-being measures. The correlations for the full-sized 34-item MI we administered are provided in brackets. It is evident that the shortened scale correlates with measures of well-being similarly to the full scale. The correlations found are similar to those reported by Turner et al. (2012), with the exception of the relationship between Decision Difficulty and Happiness, mentioned above. We did not compare the correlations with the Regret scale, as Turner et al. (2012) used the Decision-Making Style Inventory (Nygren & White, 2002) for regret assessment, whereas we used the Regret Scale (Schwartz et al., 2002).


Table 6: Correlations of SMI and MI with measures of well-being.
          SMI-S (MI-S)       SMI-DD (MI-DD)     SMI-AS (MI-AS)
SMI-S     1      (0.873)     -0.342 (-0.328)    0.044  (0.078)
SMI-DD    -0.342 (-0.220)    1      (0.928)     0.196  (0.196)
SMI-AS    0.044  (0.066)     0.193  (0.234)     1      (0.944)
SHS       0.529  (0.406)     -0.454 (-0.440)    -0.111 (-0.100)
LOT-R     0.556  (0.398)     -0.443 (-0.448)    -0.138 (-0.133)
GSES      0.612  (0.514)     -0.510 (-0.495)    -0.030 (0.008)
Regret    -0.262 (-0.202)    0.571  (0.618)     0.221  (0.194)

3.2.2  Construct validity (latent traits)

The model fit for the Maximization Inventory was presented in Part 1 (there we presented the results of the IRT model; the fit of the ordinal CFA was similar). For all the other scales except Regret (see below), the model fit the data well.

SHS: χ2(2) = 20.97, p < 0.001, TLI = 0.987, RMSEA = 0.103 with 95% CI [0.066; 0.144], SRMR = 0.015. Although the RMSEA is very high, for CFAs with small degrees of freedom it is not a reliable indicator of fit (Kenny, Kaniskan & McCoach, 2015). Reliability was good, ω = 0.832.

LOT-R: χ2(9) = 255.41, p < 0.001, TLI = 0.944, RMSEA = 0.174 (95% CI = [0.156; 0.193]), SRMR = 0.054. The same RMSEA issue holds here as holds for SHS; reliability was good, ω = 0.872.

GSES: χ2(35) = 513.55, p < 0.001, TLI = 0.939, RMSEA = 0.123 (95% CI = [0.114; 0.133]), SRMR = 0.055, with good reliability, ω = 0.901.

Regret: χ2(5) = 247.7, p < 0.001, TLI = 0.854, RMSEA = 0.232 (95% CI = [0.208; 0.257]), SRMR = 0.068. Because the fit was not good, we inspected the residual correlation matrix and discovered a high residual correlation between items 4 and 5 (r = 0.159). We therefore allowed for residual covariances between these items, which improved the fit, Δχ2(1) = 115.9, p < 0.001. The final model fit the data very well (except RMSEA; see above), χ2(4) = 67.84, p < 0.001, TLI = 0.952, RMSEA = 0.133 (95% CI = [0.106; 0.162]), SRMR = 0.034. Reliability was acceptable,13 ω = 0.732.
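A sketch of this respecification (lavaan syntax; item names are assumptions):

    regret_base <- "Regret =~ reg_1 + reg_2 + reg_3 + reg_4 + reg_5"
    regret_mod  <- paste(regret_base, "reg_4 ~~ reg_5", sep = "\n")   # residual covariance

    fit_base <- cfa(regret_base, data = full_data,
                    ordered = paste0("reg_", 1:5), estimator = "WLSMV")
    fit_mod  <- cfa(regret_mod, data = full_data,
                    ordered = paste0("reg_", 1:5), estimator = "WLSMV")
    anova(fit_base, fit_mod)   # chi-square difference test for the added covariance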

For both MI and SMI, the correlations among the latent traits (Table 7) are quite high. We performed two CFAs over all the scales, one for each version of MI. The fit of the full model with the Short Maximization Inventory was acceptable, χ2(718) = 3153.27, p < 0.001, TLI = 0.915, RMSEA = 0.061 (95% CI = [0.059; 0.064]), SRMR = 0.058. The fit with the full Maximization Inventory was poorer, χ2(1630) = 6526.63, p < 0.001, TLI = 0.883, RMSEA = 0.058 (95% CI = [0.056; 0.059]), SRMR = 0.072; the TLI was particularly low.


Table 7: Construct validity for Maximization Inventory and Short Maximization Inventory. Correlations of latent traits. N = 902.
          MI-S     MI-DD    MI-AS    SHS      LOT-R    GSES     Regret
MI-S      1        -0.402   0.129    0.648    0.677    0.759    -0.332
MI-DD     -0.428   1        0.254    -0.497   -0.504   -0.568   0.715
MI-AS     0.071    0.257    1        -0.117   -0.145   -0.004   0.214
SHS       0.692    -0.549   -0.146   1        0.787    0.623    -0.434
LOT-R     0.722    -0.541   -0.168   0.787    1        0.599    -0.461
GSES      0.780    -0.595   -0.057   0.623    0.600    1        -0.430
Regret    -0.330   0.702    0.256    -0.433   -0.460   -0.428   1
Correlations above ±.145 are significant at α = .001, and all correlations above .117 are significant at α = .01.
Below the diagonal are the results for the model with the Short Maximization Inventory; above the diagonal are the results for the original Maximization Inventory.

3.3  Conclusions

Strong correlations between the original subscales and their short versions indicate that the Short Maximization Inventory is a compact measurement tool that is equivalent to the original Maximization Inventory. Concerning correlations with well-being indices, the results we found for SMI are similar to what we, in line with Turner et al. (2012), found for the full MI: decision difficulty and alternative search are negatively related to the indices of well-being and positively related to regret. Satisficing is positively related to the indices of well-being and negatively related to regret, suggesting that satisficing is related to positive adaptation. The validity of SMI is thus supported by two pillars: the correlations found for SMI are in line with our theoretical predictions, and they replicate the correlations found for the full MI.

4  General Discussion

In this paper we pointed out several problems with the original Maximization Inventory (Turner et al., 2012). Eliminating problematic items from the MI, we developed a 15-item Short Maximization Inventory (SMI). This newly developed SMI performs well in measuring individual dimensions related to maximization. It also displays psychometric properties that are comparable to or better than those of the original MI. Finally, thanks to its brevity, SMI is less taxing on respondents.

After administering the MI to 902 participants, we found that several of its items display a ceiling effect. These items were mostly general statements that are easy to relate to and agree with (e.g., MI item 8: “All decisions have pros and cons”). Items with heavily skewed responses have low discriminatory power, as most subjects selected “Strongly Agree”.

Highly correlated residuals indicated item overlap. Overlapping items are, in effect, merely paraphrases of each other, and their presence does not improve scale performance. We found and excluded several such cases (e.g., item 25, “I will continue shopping for an item until it reaches all of my criteria,” and item 26, “I usually continue to search for an item until it reaches my expectations”).

Some items of the MI tend to load onto more than one factor (e.g., item 5: “I try to gain plenty of information before I make a decision, but then I go ahead and make it” is connected with both Satisficing and Alternative Search). We developed the Short Maximization Inventory by excluding problematic items from the original MI while retaining the items with satisfactory item discrimination, high factor loading, and low correlated residuals. SMI consists of three subscales of five items each.

In general, a scale with more items allows for finer discrimination among respondents and potentially captures very high and very low levels of the trait better. On the other hand, presenting subjects with long questionnaires may result in fewer responses and lower response quality (Galesic & Bosnjak, 2009). Therefore, MI’s size (34 items) might be discouraging for researchers who intend to use it as a supplementary method in their research alongside other scales. In developing the Short Maximization Inventory, we removed from MI the items with the lowest discrimination and the lowest factor loadings. This minimizes the loss of favorable properties associated with scale shortening. Furthermore, our analysis shows that SMI measures the same construct as MI and discriminates between respondents well. SMI allows researchers to use a measure of maximization that has good psychometric properties yet is compact and convenient to administer.

The Short Maximization Inventory model showed a good fit with the data we used to develop it, as well as with an independent sample of subjects. The scales of SMI correlate very strongly with the scales of the full MI, indicating they are measures of the same constructs.

Turner et al. (2012) provided evidence for the construct validity of MI’s three scales by correlating them with measures of well-being: happiness, optimism, and self-efficacy. With SMI, we found the same relationships between maximization dimensions and well-being that Turner et al. (2012) found with the full MI. The only exception was that we found a significant negative correlation between Decision Difficulty and happiness, while Turner et al. (2012) reported no significant relationship. However, this difference cannot be attributed to the scale reduction as, in our sample, the full 12-item Decision Difficulty scale also correlated negatively with happiness. The difference between our result and Turner et al.’s (2012) may be caused by cultural differences between the U.S. sample used in the earlier study and the Czech sample we used. According to Hofstede (2016), Czechs are significantly higher than Americans in uncertainty avoidance. High uncertainty avoidance corresponds to more negative feelings related to uncertainty and ambiguity. Therefore, Czech people who perceive their decisions to be difficult are likely to experience more negative feelings and lower happiness levels than Americans.

Based on Parker, Bruine de Bruin and Fischhoff (2007); Rim et al. (2011); Schwartz et al. (2002); and Turner et al. (2012), we expected high Satisficing scores to be associated with the positive indices of well-being, and high Decision Difficulty and Alternative Search scores to be associated with the negative indices of well-being. Our correlation analysis provides evidence for SMI’s construct validity: Satisficing displays significant positive correlations with the indices of well-being, while Alternative Search and Decision Difficulty show negative correlations with well-being. In line with Schwartz et al.’s (2002) reasoning, we find regret to be negatively related to Satisficing and positively related to Alternative Search and Decision Difficulty. That said, we do not consider SMI’s (or MI’s) Satisficing subscale to be perfect, as we reflect in the Limitations section of the discussion.

As reviewed by Cheek and Schwartz (2016), there are 11 maximization-related scales available at this time. The reason we have chosen to introduce yet another one is that we recognize the Maximization Inventory’s (Turner et al., 2012) solid psychometric properties relative to other scales, and our short version further improves on this quality. The Short Maximization Inventory displays excellent properties, as judged from both the Classical Test Theory and Item Response Theory viewpoint. Although Cheek and Schwartz (2016) offer some criticism of the Maximization Inventory, they tentatively recommend the use of its Alternative Search subscale in research. Moreover, they encourage researchers to further refine the measurement, which we have done by formulating SMI.

4.1  Limitations of the study

The primary purpose of the Confirmatory Factor Analysis is the confirmation of an already existing model, not the development of a new one. Although our use of CFA in shortening the scale can be identified as a limitation of the study, our intention was not to develop a new model but to simplify one that already existed. We thus adopted an approach similar to that used by Nenkov et al. (2008), who shortened the original Maximization Scale. Once the short scale was developed, we used CFA again with an independent data set to verify our new model in a pure, confirmation-only setting.

SMI, just like the original Maximization Inventory, does not contain a High Standards subscale. Although items relating to having high standards were originally considered when developing MI, Turner et al. (2012) did not include these items. Subsequently, a measure of high standards or the desire to choose the best is absent from SMI too. Cheek and Schwartz (2016), however, present strong arguments that the goal of choosing the best, together with the strategy of alternative search, is an essential component of maximizing. We recognize this and, following Cheek and Schwartz (2016), recommend using the 7-Item Maximizing Tendency Scale (MTS-7) developed by Dalal et al. (2015) to measure the maximizing goal of choosing the best. The MTS-7 together with SMI may provide a complex measurement of the maximization construct. However, future research should focus on the incremental validity of MTS-7 over SMI (or MI) subscales and on the existence of the single high standards factor within the maximization model.

A novel feature of MI (and consequently SMI) is the presence of the Satisficing subscale. Turner et al. (2012) argue that satisficing is not simply the lack of maximizing, but an adaptive trait of its own. Although the Satisficing subscale of both MI and SMI shows good psychometric properties, concerns have been raised about its content validity (Cheek & Schwartz, 2016), incremental validity (Moyano-Díaz et al., 2014) and reliability (Dewberry, Juanchich & Narendran, 2013). We acknowledge these concerns. Some of the Satisficing subscale items are difficult to interpret. Consider for example item 1 “I usually try to find a couple of good options and then choose between them”. Agreement with this item signifies satisficing, but what does disagreeing with this item mean? Maybe the respondent considers many options in an effort to pick the best one, or maybe he accepts the first alternative he comes across. The Satisficing subscale of MI (and SMI) is internally consistent and correlates with the indices of well-being as predicted by the theory. On the other hand, its face validity is dubious (Cheek and Schwartz, 2016, relate some of its items to uncertainty tolerance and to “make the best of the situation” approach, rather than to satisficing). That said, satisficing conceptualized as a construct distinct from maximizing may be worth studying in the future, if the concept of satisficing as anything other than “not maximizing” can itself be clarified.

SMI was not administered to participants as a separate scale. Instead, we administered the full MI and then extracted the items that compose SMI. This approach is identical to that of Nenkov et al.’s (2008) Analysis 3. Although this is not likely, item responses may have been influenced by the context of other items presented (Knowles & Condon, 2000). Related to this issue, Smith, McCarthy and Anderson (2000) note that this approach is likely to result in overestimated correlations between the short form and the full form of the scale. We acknowledge this, but we still consider our results valid; we not only report high correlations between the full MI and SMI but also find correlations with the indices of well-being similar to those reported by Turner et al. (2012) for a different dataset. Following this, a suggested direction for future research is to conduct a study using the Short Maximization Inventory as a separate scale.

Examining the test-retest reliability of SMI would provide useful information on the stability of results obtained with this scale over time. We also encourage researchers to contrast SMI results with behavioral measures associated with maximizing and satisficing to shed more light on the topic.

Data for our convergent validity investigation were collected from all subjects, for all constructs, using the same scales. This poses the risk of inflated correlations due to common-method bias (Podsakoff, MacKenzie, Lee & Podsakoff, 2003). On the other hand, a similar approach was used for assessing the convergent validity of the original MI (Turner et al., 2012), and the authors reported no issues related to common-method bias. Our aim was to demonstrate that the SMI produces correlations with the indices of well-being similar to those of other scales, not to investigate in depth the relationship between maximization and other constructs.

Administering the scales in Czech translation poses the threat of shifts in the meanings the items convey. We exercised great care to mitigate this risk by following (and exceeding) Beaton et al.’s (2000) guidelines on cross-cultural adaptation of scales. We obtained three independent translations and back-translations of the items and commissioned an expert committee to assess the translations and select the most appropriate ones. We also conducted two types of cognitive interviews and pilot-tested the translated scales. Compared to other studies using non-English measures of maximizing, we dedicated more effort to ensuring that the translation was correct, with no loss or distortion of the meaning of the items (compare, e.g., with Roets et al., 2012, who had one person translate the scale and “double-checked the final translation with other colleagues”, or Lai, 2010, who used only iterated translation and back-translation). Our correlation study results, similar to those reported by Turner et al. (2012), indicate that the translation process was successful and that our study does not suffer from significant cultural differences.

The aim of this paper was to verify the psychometric properties of MI and to provide researchers with its shorter yet well-performing version. We believe this has been accomplished. We consider the results to be robust, given our sample size of 902 (comparable to N=828 in Turner et al., 2012). To achieve a balance between our model’s fit with the data and its predictive power, we split responses randomly into two data sets. We demonstrate very good fit with both data sets.

The main contribution this paper reports on is the development of the Short Maximization Inventory (SMI). SMI contains the 15 (5+5+5) best-performing items of the Maximization Inventory (Turner et al., 2012), which has 34 (10+12+12) items. As demonstrated in this paper, SMI is an effective yet concise tool for assessing maximization as an individual trait. We expect that it, or at least its Decision Difficulty and Alternative Search subscales (given the need for further conceptual clarification of satisficing itself), will be well received by researchers who wish to investigate maximization as a supplementary measure in their research projects. This compact yet powerful tool for maximization measurement will allow researchers to expand their research scope without dramatically inflating the number of items presented to subjects. To measure the two-component construct of maximization, as it is presented by Cheek and Schwartz (2016), the Alternative Search subscale of SMI together with MTS-7 (Dalal et al., 2015) appears the most appropriate.

References

Baker, F. B. (2001). The basics of item response theory. Washington, DC: ERIC.

Beaton, D., Bombardier, C., Guillemin, F., & Ferraz, M. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186–91.

Bek, V. (2007). Optimistický postoj k životu jako kognitivní styl (Master’s thesis). Masaryk University, Brno.

Cai, L., & Hansen, M. (2013). Limited-information goodness-of-fit testing of hierarchical item factor models. British Journal of Mathematical and Statistical Psychology, 66(2), 245–276.

Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29.

Cheek, N., & Schwartz, B. (2016). On the meaning and measurement of maximization. Judgment and Decision Making, 11(2), 126–146.

Chen, W. H., & Thissen, D. (1997). Local dependence indexes for item pairs using item response theory. Journal of Educational and Behavioral Statistics, 22(3), 265–289.

Dahling, J., & Thompson, M. (2012). Detrimental relations of maximization with academic and career attitudes. Journal of Career Assessment, 21(2), 278–294.

Dalal, D. K., Diab, D. L., Zhu, X. S., & Hwang, T. (2015). Understanding the construct of maximizing tendency: a theoretical and empirical evaluation. Journal of Behavioral Decision Making, 28(5), 437–450.

Dewberry, C., Juanchich, M., & Narendran, S. (2013). Decision-making competence in everyday life: the roles of general cognitive styles, decision-making styles and personality. Personality and Individual Differences, 55(7), 783–788.

Diab, D., Gillespie, M., & Highhouse, S. (2008). Are maximizers really unhappy? The measurement of maximizing tendency. Judgment and Decision Making, 3(5), 364–370.

Djulbegovic, B., Beckstead, J. W., Elqayam, S., Reljic, T., Hozo, I., Kumar, A., … Paidas, C. (2014). Evaluation of physicians’ cognitive styles. Medical Decision Making, 34(5), 627–637.

Ďuriník, M. (2016). Translating maximization inventory into Czech language. In Š. Majtán et al. (Ed.), Aktuálne problémy podnikovej sféry 2016 Conference Proceedings (pp. 195–200). Bratislava: Ekonom.

Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opinion Quarterly, 73(2), 349–360.

Giacopelli, N. M., Simpson, K. M., Dalal, R. S., Randolph, K. L., & Holland, S. J. (2013). Maximizing as a predictor of job satisfaction and performance: A tale of three scales. Judgment and Decision Making, 8(4), 448–469.

Hofstede, G. (2016). Country comparison. Retrieved from https://geert-hofstede.com/countries.html on March 3rd, 2017.

Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507.

Kim, S., & Feldt, L. S. (2010). The estimation of the IRT reliability coefficient and its lower and upper bounds, with comparisons to CTT reliability statistics. Asia Pacific Education Review, 11(2), 179–188.

Knowles, E. S., & Condon, C. A. (2000). Does the rose still smell as sweet? Item variability across test forms and revisions. Psychological Assessment, 12(3), 245–252.

Kresanová, J. (2015). Štěstí: metody měření a agregace (Bachelor’s thesis). Masaryk University, Brno.

Křivohlavý, J., Schwarzer, R., & Jerusalem, M. (1993). Czech adaptation of the general self-efficacy scale. Retrieved from http://userpage.fu-berlin.de/~health/czec.htm on July 3rd 2015.

Lai, L. (2010). Maximizing without difficulty: A modified maximizing scale and its correlates. Judgment and Decision Making, 5(3), 164–175.

Larsen, J. T., & McKibban, A. R. (2008). Is happiness having what you want, wanting what you have, or both? Psychological Science, 19(4), 371–377.

Lyubomirsky, S., & Lepper, H. S. (1999). A measure of subjective happiness: preliminary reliability and construct validation. Social Indicators Research, 46(2), 137–155.

Maydeu-Olivares, A., & Joe, H. (2006). Limited information goodness-of-fit testing in multidimensional contingency tables. Psychometrika, 71(4), 713–732.

Miller, S. A. (2014). Assessing the sensitivity, composition, and effects of information distortion (Dissertation). The Ohio State University.

Misuraca, R., Faraci, P., Gangemi, A., Carmeci, F. A., & Miceli, S. (2015). The Decision Making Tendency Inventory: A new measure to assess maximizing, satisficing, and minimizing. Personality and Individual Differences, 85, 111–116.

Moyano-Díaz, E., Martínez-Molina, A., & Ponce, F. P. (2014). The price of gaining: maximization in decision-making, regret and life satisfaction. Judgment and Decision Making, 9(5), 500–509.

Nenkov, G. Y., Morrin, M., Ward, A., Hulland, J., & Schwartz, B. (2008). A short form of the Maximization Scale: Factor structure, reliability and validity studies. Judgment and Decision Making, 3(5), 371–388.

Nygren, T. E., & White, R. J. (2002). Assessing individual differences in decision making styles: Analytical vs. intuitive. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46(12), 953–957.

Orlando, M., & Thissen, D. (2000). Likelihood-based item-fit indices for dichotomous item response theory models. Applied Psychological Measurement, 24(1), 50–64.

Paivandy, S., Bullock, E. E., Reardon, R. C., & Kelly, F. D. (2008). The effects of decision-making style and cognitive thought patterns on negative career thoughts. Journal of Career Assessment, 16(4), 474–488.

Parker, A. M., Bruine de Bruin, W., & Fischhoff, B. (2007). Maximizers versus satisficers: Decision-making styles, competence, and outcomes. Judgment and Decision Making, 2, 342–350.

Patalano, A. L., Weizenbaum, E. L., Lolli, S. L., & Anderson, A. (2015). Maximization and search for alternatives in decision situations with and without loss of options. Journal of Behavioral Decision Making, 28(5), 411–423.

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903.

Polman, E. (2010). Why are maximizers less happy than satisficers? Because they maximize positive and negative outcomes. Journal of Behavioral Decision Making, 23(2), 179–190.

R Core Team. (2017). R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.

Rim, H. Bin. (2017). Impacts of maximizing tendencies on experience-based decisions. Psychological Reports, 120(3), 460–474.

Rim, H. Bin, Turner, B. M., Betz, N. E., & Nygren, T. E. (2011). Studies of the dimensionality, correlates, and meaning of measures of the maximizing tendency. Judgment and Decision Making, 6(6), 565–579.

Roets, A., Schwartz, B., & Guan, Y. (2012). The tyranny of choice: a cross-cultural investigation of maximizing-satisficing effects on well-being. Judgment and Decision Making, 7(6), 689–704.

Rogge, N. (2016). Love is blind: How our love for more choice costs time. Psychology & Marketing, 33(5), 358–371.

Rosseel, Y. (2012). lavaan: an R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.

Scheier, M. F., Carver, C. S., & Bridges, M. W. (1994). Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): a reevaluation of the Life Orientation Test. Journal of Personality and Social Psychology, 67(6), 1063–1078.

Schwartz, B. (2004). The Paradox of Choice: Why More is Less. New York: Harper Perennial.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178–1197.

Schwarzer, R., & Jerusalem, M. (1995). General Self-efficacy Scale. Measures in Health Psychology: A User’s Portfolio. Causal and Control Beliefs, 35–37.

Sharif, M. A., & Spiller, S. A. (2014). Indecisive consumers and opportunity cost consideration. In J. Cotte & S. Wood (Eds.), NA - Advances in Consumer Research Volume 42 (pp. 210–214). Duluth, MN: Association for Consumer Research.

Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.

Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short-form development. Psychological Assessment, 12(1), 102–111.

Turner, B. M., Rim, H. B., Betz, N. E., & Nygren, T. E. (2012). The Maximization Inventory. Judgment and Decision Making, 7(1), 48–60.

Weinhardt, J. M., Morse, B. J., Chimeli, J., & Fisher, J. (2012). An item response theory and factor analytic examination of two prominent maximizing tendency scales. Judgment and Decision Making, 7(5), 644–658.

Willis, G. B. (1999). Cognitive interviewing. A “how to” guide. Evaluation, 1(1), 1–37.


* Faculty of Economics and Administration, Masaryk University; Macquarie Graduate School of Management
# Faculty of Social Studies and Faculty of Economics and Administration, Masaryk University
$ Faculty of Social Studies, Masaryk University

We wish to thank the editor and the reviewers for the insightful and valuable comments they provided. This paper is part of the Masaryk University Specific Research Project MUNI/A/1021/2015. While finishing this paper, Michal Ďuriník was a holder of Macquarie University Research Excellence Scholarship.

Copyright: © 2018. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

1. Consider two people who both have high standards: one is a maximizer, the other one is a satisficer. The maximizer tries to find and evaluate all options available to make sure he selects the best one. The satisficer stops the search upon finding the first option that meets his (high) standards.
2. As noted by the editor, there are specific scenarios in which an active search is not possible, yet the goal of maximization may still be relevant. When selecting from job candidates, one usually does not search actively, but simply waits for applications to arrive. A maximizer will wait until he is reasonably sure that no better candidate will apply. A satisficer will accept the first candidate that meets the criteria.
3. Test reliability in Item Response Theory is usually estimated using the equation r = VAR(EAP)/[VAR(EAP) + MSE], where VAR(EAP) is the variance of the expected a-posteriori latent trait estimates and MSE is the mean error variance of these estimates. The resulting reliability thus decreases with the mean error variance and increases with the variance of the estimated latent traits.
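As a hedged illustration only (not the authors’ actual script), this quantity can be computed from a fitted unidimensional model in the mirt package (Chalmers, 2012); mod is a hypothetical fitted-model object:

    library(mirt)
    # EAP latent trait estimates and their standard errors for every respondent
    sc    <- fscores(mod, method = "EAP", full.scores = TRUE, full.scores.SE = TRUE)
    theta <- sc[, "F1"]                        # expected a-posteriori estimates
    mse   <- mean(sc[, "SE_F1"]^2)             # mean error variance of the estimates
    rel   <- var(theta) / (var(theta) + mse)   # r = VAR(EAP) / [VAR(EAP) + MSE]
    # mirt's empirical_rxx(sc) returns the same estimate in a single call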
4. The M2* procedure provides an asymptotic chi-squared statistic of model fit, which can be used directly or to compute RMSEA (root mean square error of approximation), interpreted in the same way as in a confirmatory factor analysis: the RMSEA of a well-fitting model approaches 0. If the M2* statistic is also computed for the null model (in which all item discrimination parameters are fixed to 0), incremental fit indices such as CFI or TLI can be computed as well; values close to 1 (e.g., above 0.90) are usually considered good. The last fit statistic we used is the SRMSR (standardized root mean squared residual), which can be interpreted as the square root of the mean squared difference between model-predicted and observed item correlations (similarly to the SRMR statistic in factor analysis). The SRMSR approaches zero in a well-fitting model.
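For illustration (again a sketch assuming a hypothetical fitted mirt model mod, not the authors’ code), these statistics can be obtained from a single call to mirt’s M2 function:

    library(mirt)
    fit <- M2(mod)   # limited-information (M2*) fit statistics
    fit              # reports M2, df, p, RMSEA, SRMSR, TLI and CFI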
5. Note: M2* is the value of the chi-squared model-fit test with the appropriate number of degrees of freedom.
6. In this paper, we use Turner et al.’s (2012; Table 3) numbering of items. The Satisficing subscale consists of items 1-10, the Decision Difficulty subscale consists of items 11-22, and the Alternative Search subscale consists of items 23-34.
7. Item 2: “At some point you need to make a decision about things.”

Item 6: “Good things can happen even when things don’t go right at first.”

Item 8: “All decisions have pros and cons.”

Item 10: “I accept that life often has uncertainty.”

8. Item 5: “I try to gain plenty of information before I make a decision, but then I go ahead and make it.”
9. Consider, for example, the similarity of item 25 (“I will continue shopping for an item until it reaches all of my criteria”) with item 26 (“I usually continue to search for an item until it reaches my expectations”).
10. The Bayesian information criterion (BIC) is based on the likelihood function and can be used to compare nested models (models with different levels of invariance are nested). A lower value indicates a better-fitting model.
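A minimal sketch of such a comparison with the mirt package, assuming an item-response matrix dat and a grouping variable grp (both hypothetical names), might look as follows:

    library(mirt)
    m_free <- multipleGroup(dat, 1, group = grp)   # all item parameters free across groups
    m_inv  <- multipleGroup(dat, 1, group = grp,
                            invariance = c("slopes", "intercepts",
                                           "free_means", "free_var"))
    anova(m_free, m_inv)   # the more constrained (invariant) model is preferred if its BIC is lower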
11. We used the WLSMV estimator (diagonally weighted least squares to estimate the model parameters, with the full weight matrix used to compute robust standard errors) on the polychoric correlation matrix in the lavaan package (Rosseel, 2012).
12. CFA was performed in the lavaan package (Rosseel, 2012) with WLSMV estimation based on polychoric correlation matrices.
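For example, a sketch of such an analysis for the three SMI subscales, with placeholder item names (s1–s5, d1–d5, a1–a5, not the published item labels) and a data frame dat assumed to contain only the 15 item responses:

    library(lavaan)
    model <- '
      Satisficing        =~ s1 + s2 + s3 + s4 + s5
      DecisionDifficulty =~ d1 + d2 + d3 + d4 + d5
      AlternativeSearch  =~ a1 + a2 + a3 + a4 + a5
    '
    # declaring the items as ordered makes lavaan work from polychoric correlations
    fit <- cfa(model, data = dat, ordered = colnames(dat), estimator = "WLSMV")
    summary(fit, fit.measures = TRUE)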
13. Note that Revelle’s omega accounts properly for residual correlations, which therefore do not bias the reliability estimate.
