Judgment and Decision Making, Vol. 15, No. 1, January 2020, pp. 82-92

The effect of incentive structure on search in the secretary problem

Yu-Chin Hsiao*   Simon Kemp#

We tested the effectiveness of performance-based incentives using three incentive structures (commission base, best only, and flat fee) and two levels of context (no context and house selling) in an experiment in which participants made decisions in a variant of the secretary problem. Key measures of performance were the amount of search and the number of rounds in which the very best (optimal) offer was chosen. We found that a commission-based proportional incentive did not produce better performance than a flat payment on any of the performance measures considered. However, the other performance-based incentive, the best only incentive, increased the length of participants' searches and led to more optimal offers being chosen. These results held both when there was no context and when the context was selling a house.


Keywords: sequential decision-making, secretary problem, incentive, context

1  Introduction

In this paper we present research that shows how different performance-based incentive structures affect performance on a variant of the secretary problem. In particular, we examined the effect of a constant (flat) fee for participation, a commission-based incentive (earnings proportional to the chosen prices), and a best only incentive that was paid only if the participant made the very best (optimal) choice possible in that round. The key dependent variables were the length of the search and the number of optimal choices made.

Little attention has been paid to the structure of the performance-based incentives used in laboratory experiments. Yet using inappropriate incentive structures in experiments runs the risk of impairing performance (e.g., Ariely, Gneezy, Loewenstein & Mazar, 2009; Ariely, Bracha & Meier, 2009) or producing bad decisions (e.g., Cole, Kanz & Klapper, 2015). Not knowing the effect of incentive structure on the intended performance measures can be particularly problematic when addressing research questions: participants who perform poorly because of an inappropriate incentive structure may be mistakenly judged to be performing poorly because of the experimental conditions under study. Ultimately, understanding the effectiveness of different incentive structures is an important design issue.

This study also investigates the interactive effect of context and incentive structures. Framing the task with a meaningful context is commonly used in the laboratory to facilitate understanding of the instructions. Does context influence the effectiveness of different incentive structures?

1.1  The effectiveness of monetary incentive on performance

The presence of a monetary incentive can signal that a task is unattractive or undesirable (Frey & Oberholzer-Gee, 1997) and change how people perceive the task (Cole et al., 2015). On occasion, extrinsic motivation (for example, a monetary incentive) may crowd out intrinsic motivation (for example, a sense of accomplishment), so that the incentive has a reduced effect, or no effect, on performance (e.g., Frey, 1997; Gneezy, Meier & Rey-Biel, 2011; Gneezy & Rustichini, 2000a, 2000b).

Monetary incentives are generally found to have a beneficial effect on performance in mundane, effort-dominant tasks that people lack intrinsic motivation to perform; for example, clerical tasks (Riedel, Nebeker & Cooper, 1988) and item recognition and recall tasks (e.g., Kahneman & Peavler, 1969; Libby & Lipe, 1992). As participants are likely to derive no satisfaction or enjoyment from doing such a task, the monetary incentive provides extrinsic motivation to perform. Even here, however, a monetary incentive is not always effective in motivating performance, because an increase in effort does not necessarily lead to improved performance (e.g., Fryer, 2011).

Even in tasks that require basic cognitive skills, e.g., attention and memory, or tasks requiring some creative or motor skills, people sometimes perform worse with a higher payment than with a lower one (Ariely et al., 2009). Monetary incentives may elevate the level of arousal and lead people to choke under pressure. According to the Yerkes-Dodson law (Yerkes & Dodson, 1908), performance often improves with increased arousal, but excessive arousal can lead to a decrement in performance. People in the two performance-based incentive conditions of the experiment described below might experience a higher level of arousal than people receiving a flat fee. This higher level of arousal could lead either to better or to worse performance, although our expectation, given that the incentives were not large, was that the effect was more likely to be positive than negative.

1.2  The secretary problem

Many real-life decision-making situations are sequential, and such decisions often need to be made immediately and cannot be revisited (for example, finding a partner, or buying or selling houses). This type of sequential decision-making situation often displays the features of the secretary problem (see Ferguson, 1989, and Freeman, 1983, for historical reviews). The basic form of the secretary problem, often referred to as the classical secretary problem, is specified in the following way. A known number, n, of candidates is presented in random order. The decision-maker must either accept or reject the presented candidate immediately, and the decision cannot be recalled. (That is, the decision-maker cannot withdraw or revise any previously made decision.) A positive payoff is earned only if the decision-maker chooses the best candidate overall.

The optimal strategy for this classical version of the secretary problem (henceforth the classical strategy) maximizes the probability of finding the best candidate. The classical strategy states that the decision-maker should reject the first n/e candidates (a proportion approaching 1/e ≈ 0.37 as n approaches infinity) and then accept the first candidate who is better than all of the previously rejected candidates, where n is the total number of candidates in the candidate pool and e is Euler's number, approximately equal to 2.718 (Gilbert & Mosteller, 1966, p. 40). This strategy yields an approximately 37% chance of finding the best candidate, depending on the size of n (see Lindley, 1961, and Gilbert & Mosteller, 1966, for detailed proofs). Note that the strategy can still fail, either because the best candidate appears among the first 37% that are rejected, or because a candidate who beats all those rejected, but is not the overall best, is accepted before the best candidate appears. People who are not familiar with the classical strategy face at least two major difficulties in solving the problem. First, the distribution of candidate quality is unknown, and the decision-maker does not know which candidate may potentially be the best, or a better, candidate to accept. Second, an immediate decision must be made without recall. If the decision-maker realizes the best candidate has been rejected, she cannot go back and accept the previously rejected candidate.
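To make the cutoff rule concrete, the following is a minimal Monte Carlo sketch (ours, not part of the original study): with, for example, n = 100 candidates and a cutoff of round(n/e) = 37, the rule selects the overall best candidate on roughly 37% of random sequences.

    import math
    import random

    def cutoff_rule(values, k):
        """Reject the first k candidates, then accept the first later candidate
        that beats every candidate seen so far (take the last one if none does)."""
        best_seen = max(values[:k]) if k > 0 else float("-inf")
        for i in range(k, len(values)):
            if values[i] > best_seen:
                return i
        return len(values) - 1

    def success_rate(n, k, iterations=200_000):
        """Proportion of random sequences in which the cutoff rule picks the best."""
        hits = 0
        for _ in range(iterations):
            values = [random.random() for _ in range(n)]
            hits += values[cutoff_rule(values, k)] == max(values)
        return hits / iterations

    n = 100
    print(success_rate(n, round(n / math.e)))  # roughly 0.37, as the classical result predicts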

Researchers have explored many different features and variations of the secretary problem. A number of assumptions have been relaxed and their implications investigated both theoretically (e.g., Gilbert & Mosteller, 1966; Lindley, 1961; Moriguti, 1993; Tamaki, 1979; Yeo, 1998) and experimentally (e.g., Bearden, Murphy & Rapoport, 2005; Bearden, Rapoport & Murphy, 2006; Seale & Rapoport, 1997, 2000; Zwick, Rapoport, Lo & Muthukrishnan, 2003).

A common empirical result for the secretary problem is that people tend to stop their search too soon (e.g., Seale & Rapoport, 1997, 2000). Experimental researchers have also found that people do not generally perform better when a monetary reward is offered (e.g., Hey, 1987; Campbell & Lee, 2006).1 Evidence from a computer simulation indicated that people's experimental search behavior coincided better with the classical strategy when a 1% search cost was assumed (Seale & Rapoport, 1997). Hsiao (2018, Chapter Two) examined the effect of time cost on search behavior in the secretary problem and indeed found that people shortened their search when the time cost was higher. Other processing costs, such as cognitive load, may also influence search behavior and lead to shorter search. The incentive structure can potentially help offset these processing costs and motivate performance.

Different secretary problem situations in life often have different incentive structures. Sports tournaments often feature a winner-takes-all structure. Real estate agents and car salespeople often receive payoffs proportional to the sale price; these are commission based. An interesting question is whether performance changes when the same incentive structure is applied in different contexts. Contextual instructions are known to change behavior in experimental research: they help participants understand the task and reduce confusion, especially in tasks that require reasoning and cognitive ability to perform (see Alekseev, Charness & Gneezy, 2017, for a review of when and why contextual instructions matter). Dual-processing theory, which posits system one (intuitive/heuristic) and system two (analytic/executive) processes, has been proposed to explain why context produces different decisions (e.g., Kahneman, 2003; Evans, 2008); context effects can arise because different decision-making schemas belonging to system one are activated. Hence we varied the context of the experiment: half the participants performed with no context (they were simply presented with numbers) and the other half were asked to imagine they were selling a house.

1.3  Decision strategies

People unfamiliar with the secretary problem might employ at least three different strategies in the experiment, besides a variant of the classical strategy. First, they might choose randomly. In our experimental setting, this would yield only a 5% chance of finding the optimal offer (there are 20 offers in each round), so that under the best only incentive the participant would receive no payoff 95% of the time. This would not be an effective strategy under either the commission base or the best only incentive, where the payoffs are based on performance. Second, they might choose any price above their own reservation price. This strategy is most likely to be effective when the distribution of the price offers is available to the participants, unlike in our study: when information about the distribution of candidate quality becomes available, the chance of finding the best candidate rises dramatically, to approximately 58% as n approaches infinity (Gilbert & Mosteller, 1966). Third, they might attempt to refine a strategy by trial and error.

We conducted a computer simulation of the effect of stopping at different points in a sequence of 20 offers. Details of the simulation are given in Appendix A. The key result was that the chance of finding the optimal offer is quite sensitive to how long the search continues and when it is terminated, whereas the average price obtained from accepting an offer is only slightly affected by stopping position. Thus the simulation indicated that, compared with a commission-based incentive, the best only incentive was likely to affect stopping position more than it would affect any measure based on the amount the participants receive. Hence the results that follow focus mostly on the stopping position and the number of optimal choices made rather than on average earnings.

1.4  Research questions

Based on the preceding sections, we formed the following research questions:

Research question 1:

Will people search longer and find more optimal offers when they receive a monetary reward only when the optimal offer is found (best only) than when they receive a monetary reward with every offer (commission base)?

Research question 2:

Will people search longer and find more optimal offers when their payoff is proportional to the price obtained (commission base) than when they receive a flat fee payment?

Economic experiments demonstrate that context affects incentivized behavior. Alekseev et al. (2017) surveyed the literature and concluded that context often, but not invariably, improves performance, and that improvement is more likely if the task requires sophisticated reasoning. They did not include secretary problem studies in their survey, but the problem seems to qualify as one requiring sophisticated reasoning. The effect of context when earnings are proportional to the chosen prices (commission base) is explored in Hsiao, Kemp and Servátka (2020); that experiment found that framing the secretary problem as selling houses leads to better decisions than a context-free frame.

Research question 3:

Will people search longer and find more optimal offers with a house selling context than without?

2  Method

2.1  Participants

A total of 178 undergraduate students from the University of Canterbury participated individually in the experiment. All participants received two 100-level course credits for participating (a kind of show-up payment). In addition, they could receive different cash incentives; the average actual pay for each condition is summarized in Appendix B. Ages ranged from 18 to 40 years, with the median in the 18–21 year band. The actual time spent on the sequential search task is reported in Appendix C.

2.2  Experimental design and procedures

The experiment had a 2 × 3 between-subjects design, with two contexts (house selling and no context) and three monetary incentive structures (commission base, best only, and flat fee). Each participant completed the entire experiment in a single condition combination. The number of participants in each condition is presented in Appendix B. In the house selling context conditions, the task was framed as selling 10 houses, one house per round. A description of each house, consisting of the floor area, the number of bedrooms, the suburb, and the year the house was built, was presented at the beginning of the round, prior to any price offer.

In each round, a participant could review up to 20 price offers for the given house. The price offers were presented one at a time. Once a price offer was presented, the participant decided whether to accept or reject it; once the decision was made, there was no recall. If the participant had not accepted an offer prior to the final (20th) offer, the participant was forced to accept the final offer regardless of its value. Similar procedures were employed in the no context conditions, except that the participants were asked to accept "numbers", and "rounds" were referred to with no mention of houses.

The actual price offer sequences used in the experiment are presented in Appendix D. Appendix E shows the expected stopping positions if the classical strategy was used. To allow comparisons between participants, the same 10 random sequences generated prior to the experiment were employed for each participant in each session. (For a similar use of cardinal values in the secretary problem, see Teodorescu, Sang & Todd, 2018.) All conditions had two practice rounds prior to the 10 paid rounds.

The participants in the commission base incentive with house selling context received the following instructions about their payoffs.

The payoffs will be denoted in experimental currency units (ECUs). 735 ECUs = 1 NZD.

Your ECUs will be converted into NZD at this rate, and you will be paid in NZD when you leave the lab. The more ECUs you earn, the more NZD you earn.

Your payoffs are determined as follows:

Total ECUs you earn = Accepted price offer for House 1 + Accepted price offer for House 2 + ….+ Accepted price offer for House 10.

The participants in the best only incentive and house selling context received the following instructions about their payoffs.

You will earn NZD 4.60 if you choose the highest price offer for each house.

Your payoffs are determined as follows:

Total NZD you earn = number of houses for which you have selected the highest price offer * NZD 4.60

The participants in the flat fee incentive with house selling context simply received NZD 9.50 for their participation. They were nevertheless instructed to find the highest price they could.

Similar instructions for all three incentive structures were used in the no context conditions, except that the instructions used “number” and “round” instead of “price offer” and “house”.

The conversion rate in the commission base incentive and the cash payoff for selecting an optimal offer in the best only incentive were based on previous findings (Hsiao, 2018, Chapter Two), so that all conditions were intended to yield roughly the same average payoff of about NZD 10 regardless of the incentive structure. In Hsiao (2018, Chapter Two), participants in the condition with no search cost had an average total chosen price of 7339 ECUs and found 2.17 optimal offers on average. Using these results as a starting point, we set a cash payoff of NZD 4.60 for finding an optimal offer (NZD 10 ÷ 2.17) in the best only condition and a conversion rate of 735 ECUs to 1 NZD (7339 ECUs ÷ NZD 10) in the commission base condition, both intended to yield an average payoff of NZD 10.

Upon arriving at the lab, the participants were randomly assigned to a cubicle, where they read the instructions at their own pace. Any questions were answered in private. The participants were also requested to complete a short research exercise, in which they were asked what they thought the purpose of the study was and how they thought the results might be applied. They also completed a regret questionnaire (Schwartz et al., 2002). The results from these exercises turned out not to be very informative or relevant to the rest of the study and are not reported below. The participants were paid individually in private when all the tasks were completed.

3  Results

Three dependent variables are examined in this section. First, and most important, the position in the sequence where the participant accepted the offers (henceforth stopping position) was evaluated for each round. Second, we calculated the number of rounds in which the optimal offer was selected (henceforth optimal offer count). Third, we conducted a limited amount of analysis with the total sum of the 10 chosen prices in ECUs (henceforth total chosen price). (As remarked earlier, the simulation indicated that this is not a sensitive measure of performance.)

First, we test the research questions for the performance measures of overall participant decision-making obtained from the experiment. Second, we examine to what extent the effect of incentive structure on stopping position generalizes over rounds. Third, we report a path analysis that examined the relationship between the incentive structures and performance measures.


Table 1: Descriptive statistics of the position in the sequence at which the offer was accepted, averaged across the 10 rounds (Panel A), and the number of rounds in which the optimal offer was selected (Panel B), for each condition.

Panel A. Stopping position

Condition             Average   Median   S.D.   Range (min–max)
House selling
  Commission base        8.3       6      6.2       1–20
  Best only              9.4       8      6.9       1–20
  Flat fee               7.6       5      6         1–20
No context
  Commission base        8.6       6      7.1       1–20
  Best only             10.1       8      7.2       1–20
  Flat fee               9.1       8      6.4       1–20
Sequence optimal*       11.1                        3–20
Classical optimal**     13.6

Panel B. Optimal offer count (rounds)

Condition             Average   Median   S.D.   Range (min–max)
House selling
  Commission base        2         2      1.2       0–5
  Best only              2.9       3      1.5       0–6
  Flat fee               1.6       2      1.2       0–4
No context
  Commission base        1.9       2      1.2       0–4
  Best only              2.5       3      1.5       0–5
  Flat fee               1.6       2      0.8       0–3
Sequence optimal*       10
Classical optimal**      4

* Sequence optimal refers to the actual optimal offers in the sequences used in the experiment. ** The result predicted by the classical strategy, which prescribes an information set of 7 offers; see Appendix E for details of the prediction. The classical optimal is a useful benchmark for the best only incentive.

3.1  Performance

3.1.1  Stopping position

As shown in Table 1, Panel A, the participants in the best only incentive searched the longest under both the house selling context and no context. Analysis of variance showed a significant effect of incentive structure (F(2, 172) = 4.55, MSerror = 8.08, p = 0.01, partial η2 = 0.05), and Tukey HSD post hoc tests showed a significant (p < 0.05) difference between the commission base and best only incentives (p = 0.04, Research question 1), as well as between the best only and flat fee incentives (p = 0.02), but no significant difference between the commission base and flat fee incentives (p = 0.98, Research question 2). Participants stopped at a significantly later position in the sequence under no context (M = 9.3) than under the house selling context (M = 8.4) (F(1, 172) = 4.08, p = 0.05, partial η2 = 0.02). There was no statistically significant (p = 0.52) interactive effect of context and incentive structure.
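For readers who want to reproduce this kind of analysis, the following is a minimal sketch (not the authors' analysis script) of a 2 × 3 between-subjects ANOVA on stopping position with Tukey HSD post hoc comparisons, written in Python with the statsmodels package; the synthetic stand-in data and column names are illustrative only.

    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Synthetic stand-in data: one row per participant with the mean stopping
    # position across the 10 rounds (values are illustrative, not the study's data).
    rng = np.random.default_rng(0)
    rows = []
    for inc in ["commission", "best_only", "flat_fee"]:
        for ctx in ["house", "none"]:
            for _ in range(30):  # roughly the cell sizes reported in Appendix B
                shift = 1.5 if inc == "best_only" else 0.0
                rows.append({"incentive": inc, "context": ctx,
                             "stop": rng.normal(8.5 + shift, 3.0)})
    df = pd.DataFrame(rows)

    # 2 x 3 between-subjects ANOVA with the incentive-by-context interaction.
    model = ols("stop ~ C(incentive) * C(context)", data=df).fit()
    print(anova_lm(model, typ=2))

    # Tukey HSD post hoc comparisons among the three incentive structures.
    print(pairwise_tukeyhsd(df["stop"], df["incentive"]))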


Table 2: Kendall's tau correlation coefficients for the positions in the sequence at which the offer was accepted in each round. In Panel A, the best only incentive was dummy coded as 1 and the other two incentives as 0. In Panel B, commission base was dummy coded as 1 and flat fee as 0. (Coef. shows the correlation coefficient for stopping position in each round.)

Round           1      2      3      4       5      6      7       8      9      10      Average position

Panel A. Best vs. not best
Coef.         0.04   0.16   0.14   0.21   −0.05   0.007  0.15    0.22   0.16    0.32     0.19
p             0.53   0.02   0.04   0.002   0.43   0.92   0.03    0.002  0.021  <0.001    0.003

Panel B. Commission base vs. flat fee
Coef.        −0.17  −0.03  −0.06  −0.05    0.2   −0.14   0.25    0.08  −0.01    0.06     0.01
p             0.04   0.71   0.46   0.58    0.02   0.11   0.002   0.38   0.93    0.46     0.87

3.1.2  Optimal offer count

Participants assigned the best only incentive selected on average the highest number of optimal offers (M = 2.7), compared with the commission base (M = 2.0) and flat fee (M = 1.6) incentives (F(2, 172) = 12.45, MSerror = 1.51, p < 0.001, partial η2 = 0.13). Tukey HSD post hoc tests confirmed significant differences between the best only and commission base incentives (p = 0.002, Research question 1) and between the best only and flat fee incentives (p < 0.001). There was no significant difference between the commission base and flat fee incentives (p = 0.31, Research question 2). There were no other statistically significant (p < 0.05) main or interactive effects. See Table 1, Panel B, for descriptive results for each condition.

Taken over both contexts, 30.6% of the participants in the best only incentive performed as well as or better than the classical strategy in finding the optimal offer (by finding 4 or more optimal offers), and 10.2% of them outperformed it (by finding 5 or more optimal offers); 5.2% of the commission base participants performed as well as the classical strategy and 1.9% performed better. Only 3.2% of the flat fee participants performed as well as the classical strategy and none outperformed it.

3.1.3  Total chosen price

The house selling context returned a higher total chosen price (M = 7160.1 ECUs) than no context (M = 6986.1 ECUs; F(1, 172) = 4.23, MSerror = 3.18 × 10^5, p = 0.01, partial η2 = 0.02, Research question 3). However, there was no significant (p < 0.05) effect of the incentive structure, nor was there a significant interactive effect.

3.2  Differences between rounds

To examine whether the longer search found in the best only incentive was general across rounds, Kendall’s tau correlation analysis was used to examine the relationship between the stopping position and the incentives in each round. Table 2, Panel A summarizes the results comparing the best only with the other two incentives. Table 2, Panel B compares the commission base and flat fee incentives, and excludes the best only.

Note, first, that the commission base versus flat fee comparison shows little difference across the rounds. Overall, the commission base incentive led to longer search in four rounds (two of them statistically significant) and the flat fee incentive to longer search in six (one statistically significant).

However, the best only incentive led to longer search in nine rounds (all but round 5), and the effect was statistically significant in seven of them. A simple binomial test puts the chance of obtaining nine or more rounds in the same direction at p = 0.011. Thus the significant effect of the best only incentive on stopping position found in the previous section does not appear to have arisen from unusual sequences in a few rounds.
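Both checks in this subsection are straightforward to reproduce. The sketch below (ours, with synthetic data for the per-round correlation) shows a Kendall's tau of the kind reported in Table 2 and the one-sided binomial calculation for nine or more of ten rounds falling in the same direction, which equals 11/1024 ≈ 0.011.

    from math import comb
    import numpy as np
    from scipy.stats import kendalltau

    # Kendall's tau between a best-only dummy (1 = best only, 0 = other incentives)
    # and stopping position, for one hypothetical round (synthetic data).
    rng = np.random.default_rng(2)
    best_only = rng.integers(0, 2, 178)
    stop = rng.integers(1, 21, 178) + 2 * best_only  # later stops under best only
    tau, p_tau = kendalltau(best_only, stop)
    print(f"tau = {tau:.2f}, p = {p_tau:.3f}")

    # One-sided binomial check: probability of 9 or more of 10 rounds going in the
    # same direction when each direction is equally likely under the null.
    p = sum(comb(10, k) for k in (9, 10)) / 2 ** 10
    print(round(p, 3))  # 0.011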


Figure 1: Path diagram of the chosen price and the number of rounds in which the optimal offer was selected. The beta coefficient is shown for each path, and an asterisk (*) indicates that the path is significant at the 0.05 level. Commission base and best only were each dummy coded as 1 for that incentive structure and 0 otherwise. Context was coded as no context = 1 and house selling = 0.

3.3  Path analysis

Stopping position could be seen as a mediating variable, as well as a dependent variable. To investigate the role of stopping position in determining the chosen price and optimal offer count while also considering the effect of incentive structure and context, we performed a path analysis.

The path analysis results are summarized in Figure 1. Later stopping positions led to both higher chosen prices and a higher optimal offer count. There is an indirect effect (that is, one mediated by stopping position) of the best only incentive on both chosen price and optimal offer count; the mediation is partial for the effect of the best only incentive on optimal offer count. Context has opposing direct and indirect effects. The direct effect shows that, holding stopping position constant, the house selling context produces higher chosen prices and optimal offer counts. However, no context produces later stopping positions, which creates an opposing mediated effect.
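As an illustration of this mediation structure (a sketch only; the paper does not specify the software used for the path analysis), the direct and indirect effects can be estimated with two regressions: one predicting stopping position from the incentive dummies and context, and one predicting the outcome from stopping position plus the same predictors. The product of the relevant standardized coefficients gives the mediated (indirect) effect. The data below are synthetic stand-ins, coded as in Figure 1.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def z(s):
        """Standardize a pandas Series (mean 0, SD 1) so coefficients are betas."""
        return (s - s.mean()) / s.std()

    # Synthetic stand-in data: dummies for best only and commission base (flat fee
    # is the omitted category); context coded as no context = 1, house selling = 0.
    rng = np.random.default_rng(1)
    n = 178
    incentive = rng.choice(["best_only", "commission", "flat_fee"], size=n)
    df = pd.DataFrame({
        "best_only": (incentive == "best_only").astype(int),
        "commission": (incentive == "commission").astype(int),
        "no_context": rng.integers(0, 2, n),
    })
    df["stop"] = 8 + 1.5 * df["best_only"] + 0.8 * df["no_context"] + rng.normal(0, 3, n)
    df["optimal_count"] = 1.5 + 0.3 * df["stop"] + rng.normal(0, 1, n)

    # Path a: incentive dummies and context -> stopping position.
    Xa = sm.add_constant(df[["best_only", "commission", "no_context"]].apply(z))
    a = sm.OLS(z(df["stop"]), Xa).fit()

    # Path b: stopping position (plus the same predictors) -> optimal offer count.
    Xb = sm.add_constant(df[["stop", "best_only", "commission", "no_context"]].apply(z))
    b = sm.OLS(z(df["optimal_count"]), Xb).fit()

    # Indirect (mediated) effect of best only = a-path * b-path; the direct effect
    # is the best_only coefficient in the second regression.
    print("indirect effect of best only:", a.params["best_only"] * b.params["stop"])
    print("direct effect of best only:  ", b.params["best_only"])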

4  Discussion

This study investigated the effect of different incentive structures on a common sequential search task that requires some cognitive ability to perform well. The main results of the experiment were that the best only incentive lengthened searches and increased the number of optimal offers found. The commission base incentive did not differ from a flat fee in its effect on these variables. As could be predicted from the simulation, the incentive structure had no marked effect on the total chosen price over the different rounds. However, total chosen prices were on average a little higher when a house selling context was introduced.

Overall, the participants in this study did not appear to adopt a variant of the classical strategy or a variant of the reservation price strategy consistently. The main behavioral difference found between the best only and the commission base/flat fee incentives was the willingness to conduct longer searches. Perhaps the effect of the incentive structures can be understood in terms of satisficing theory: Simon (1956) suggested a bounded rationality approach to sequential search decisions. Instead of optimizing the expected outcome as suggested by traditional economic theory, people set an aspiration level and are satisfied once that level is achieved. Such an approach allows decision-makers to meet a variety of needs in situations where the optimal strategy is unknown to them; that is, there is no need for a utility function to be postulated. For example, when a rat forages for food, it learns to choose time-conserving paths that lead to sufficient food for survival rather than paths that might yield the maximum amount of food but risk survival in the process. In this study, participants in the flat fee and commission base conditions may have been satisficing: trying to get a "good enough" result (one that does not take too much time and effort) instead of investing time and cognitive effort in figuring out the optimal offer, which they might or might not obtain. The participants in the best only condition, however, were encouraged to optimize, by being more careful and conducting (relatively) longer searches, even at the risk of over-searching and missing the optimal offers.

This study found no interactive effects of the incentive structures with the context. People in the house selling context conducted less search than, and found a similar number of optimal offers to, people in the no context conditions. However, the path analysis found that the house selling context had a significant direct and positive effect on the chosen price. This potentially confirms a schema hypothesis (e.g., Hsiao et al., 2020), according to which we form a fuller mental model of how to make sequential search decisions from experiencing various sequential decisions in daily life, such as finding a parking space or a partner. Such a schema could be activated when a similar situation arises, for example selling houses, and result in an effective decision-making strategy being applied even without any previous experience of buying or selling a house. Hsiao et al. (2020) provide evidence that the schema may be activated solely by a house selling frame and that additional descriptive information (e.g., the suburb the house is located in, the year it was built, etc.) is not necessary. Incentive structures, like the descriptive house information, may then be surplus information that plays no part in activating the decision-making schema, which would explain why no interactive effects of the incentive structures with the context were found.

There are limitations to this experiment. First, participants in the flat fee condition received NZD 9.50 rather than NZD 10. The original design aimed to avoid participants in the flat fee condition receiving a higher average payoff than those in the other two incentives, whose average payoff was set to be approximately NZD 10; however, the predictions turned out to be slightly astray. Second, the payoffs of the best only incentive were presented in NZD directly, while in the commission base condition they were presented in ECUs first and then converted to NZD. However, Drichoutis, Lusk and Nayga (2015) reported that the use of experimental currency units affected behavior only when a 1-to-1 conversion rate to real money was imposed. In practice, it is not easy to see how this difficulty could have been avoided: either the house selling context would have had to employ unrealistically tiny commission sums or the numbers would have had to differ between the two contexts. Finally, the underlying mechanisms of how the best only condition produces more optimal offers, and of why similar performance was found in the commission base and flat fee conditions, remain unclear, which limits the conclusions that can be drawn from this study. We view these as promising areas for future research.

To conclude, the experiment showed, first, that a commission-based proportional incentive did not produce better performance than a flat payment on any of the dependent variables considered. However, paying only for the best did lead to longer searches and to more frequently obtaining the very best price. It is plausible that the effect of the best only incentive arose because the structure itself led participants to optimize rather than satisfice, and thus to search more even when doing so was risky, allowing them to obtain the best price more often. A more general implication is that not all performance-based incentive structures motivate performance.

5  References

Alekseev, A., Charness, G., & Gneezy, U. (2017). Experimental methods: When and why contextual instructions are important. Journal of Economic Behavior and Organization, 134, 48–59. http://dx.doi.org/10.1016/j.jebo.2016.12.005

Ariely, D., Bracha, A., & Meier, S. (2009). Doing good or doing well? Image motivation and monetary incentives in behaving prosocially. The American Economic Review, 99, 544–555. http://dx.doi.org/10.1257/aer.99.1.544

Ariely, D., Gneezy, U., Loewenstein, G., & Mazar, N. (2009). Large stakes and big mistakes. Review of Economic Studies, 76, 451–469. http://dx.doi.org/10.1111/j.1467-937X.2009.00534.x

Bearden, J. N., Murphy, R. O., & Rapoport, A. (2005). A multi-attribute extension of the secretary problem: Theory and experiments. Journal of Mathematical Psychology, 49, 410–422. http://dx.doi.org/10.1016/j.jmp.2005.08.002

Bearden, J. N., Rapoport, A., & Murphy, R. O. (2006). Sequential observation and selection with rank-dependent payoffs: An experimental study. Management Science, 52(9), 1437–1449. http://dx.doi.org/10.1287/mnsc.1060.0535

Campbell, J., & Lee, M. D. (2006). The effect of feedback and financial reward on human performance solving 'secretary' problems. In Proceedings of the Annual Meeting of the Cognitive Science Society, 28(28). Retrieved from http://www.socsci.uci.edu/~mdlee/seclearn_3.pdf

Cole, S., Kanz, M., & Klapper, L. (2015). Incentivizing calculated risk-taking: evidence from an experiment with commercial bank loan officers. Journal of Finance, 70, 537–575. http://dx.doi.org/10.1111/jofi.12233

Drichoutis, A.C., Lusk, J. L., & Nayga, R. M. (2015). The veil of experimental currency units in second price auctions. Journal of the Economic Science Association, 1, 182–196. http://dx.doi.org/10.1007/s40881-015-0014-2

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgement, and social cognition. Annual Review of Psychology, 59, 255–278. http://dx.doi.org/10.1146/annurev.psych.59.103006.093629

Ferguson, T. S. (1989). Who solved the secretary problem? Statistical Science, 4(3), 282–296. http://dx.doi.org/10.1214/ss/1177012493

Freeman, P. R. (1983). The secretary problem and its extensions: A review. International Statistical Review, 51(2), 189–206. http://dx.doi.org/10.2307/1402748

Frey, B. S. (1997). Not just for the money: An economic theory of personal motivation. Brookfield, Vt: Edward Elgar Pub.

Frey, B. S., & Oberholzer-Gee, F. (1997). The cost of price incentives: an empirical analysis of motivation crowding-out. American Economic Review, 87, 746–55. Retrieved from http://www.jstor.org/stable/2951373

Fryer, Jr., R. G. (2011). Financial incentives and student achievement: evidence from randomized trials. The Quarterly Journal of Economics, 126(4), 1755–1798. http://dx.doi.org/10.1093/qje/qjr045

Gilbert, J. P., & Mosteller, F. (1966). Recognizing the maximum of a sequence. Journal of the American Statistical Association, 61(313), 35–73. http://dx.doi.org/10.1080/01621459.1966.10502008

Gneezy, U., Meier, S., & Rey-Biel, P. (2011). When and why incentives (don’t) work to modify behavior. Journal of Economic Perspectives, 25, 191–210. http://dx.doi.org/10.1257/jep.25.4.191

Gneezy, U., & Rustichini, A. (2000a). Pay enough or don't pay at all. The Quarterly Journal of Economics, 115(3), 791–810. http://dx.doi.org/10.1162/003355300554917

Gneezy, U., & Rustichini, A. (2000b). A fine is a price. The Journal of Legal Studies, 29(1), 1–17. http://dx.doi.org/10.1086/468061

Hey, J. D. (1982). Search for rules for search. Journal of Economic Behavior and Organization, 3, 65–81.

Hey, J. D. (1987). Still searching. Journal of Economic Behavior and Organization, 8, 137–144. http://dx.doi.org/10.1016/0167-2681(87)90026-6

Hsiao, Y. (2018). An experimental investigation of the secretary problem: Factors affecting sequential search behavior (Doctoral dissertation). Retrieved from https://drive.google.com/file/d/1nVBHe42QrKV_z_qtl6LW5HWtDtB-9OkE/view

Hsiao, Y., Kemp, S., & Servátka, M. (2020). On the importance of context in sequential search. Southern Economic Journal (early view). https://doi.org/10.1002/soej.12422

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58, 697–720. http://dx.doi.org/10.1037/0003-066X.58.9.697

Kahneman, D., & Peavler, W. S. (1969). Incentive effects and pupillary changes in association learning. Journal of Experimental Psychology, 79, 312–318. http://dx.doi.org/10.1037/h0026912

Libby, R. & Lipe, M. G. (1992). Incentives, efforts and the cognitive processes involved in accounting-related judgements. Journal of Accounting Research, 30, 249–273. http://dx.doi.org/10.2307/2491126

Lindley, D. (1961). Dynamic programming and decision theory. Journal of the Royal Statistical Society. Series C (Applied Statistics), 10, 39–51. http://dx.doi.org/10.2307/2985407

Moriguti, S. (1993). Basic theory of selection by relative rank with cost. Journal of the Operations Research Society of Japan, 36, 46–61

Riedel, J. A., Nebeker, D. M., & Cooper, B. L. (1988). The influence of monetary incentive on goal choice, goal commitment and task performance. Organizational Behavior and Human Decision Processes, 42, 155–180. http://dx.doi.org/10.1016/0749-5978(88)90010-6

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K. & Lehman, D. R. (2002). Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178-1197. http://dx.doi.org/10.1037/0022-3514.83.5.1178

Seale, D. A., & Rapoport, A. (1997). Sequential decision making with relative ranks: An experimental investigation of the "secretary problem". Organizational Behavior and Human Decision Processes, 69, 221–236. http://dx.doi.org/10.1006/obhd.1997.2683

Seale, D. A., & Rapoport, A. (2000). Optimal stopping behavior with relative ranks: the secretary problem with unknown population size. Journal of Behavioral Decision Making, 13(4), 391–411.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138. http://dx.doi.org/10.1037/h0042769

Tamaki, M. (1979). A secretary problem with double choices. Journal of the Operations Research Society of Japan, 22, 257–264. https://pdfs.semanticscholar.org/6ab6/4613367117868f0e78a6af058a789b708284.pdf

Teodorescu, K., Sang, K., & Todd, P. (2018). Post-decision search in repeated and variable environments. Judgment and Decision Making, 13(5), 484–500.

Yeo, G. F. (1998). Interview costs in the secretary problem. Australian & New Zealand Journal of Statistics, 40(2), 215–219. http://dx.doi.org/10.1111/1467-842X.00024

Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18(5), 459–482. http://dx.doi.org/10.1002/cne.920180503

Zwick, R., Rapoport, A., Lo, A. K. C., & Muthukrishnan, A. V. (2003). Consumer sequential search: Not enough or too much? Marketing Science, 22, 503–519. Retrieved from http://www.jstor.org/stable/4129735



Appendix A: Simulation



To derive a prediction of the impact of incentive structures on search behavior and performance outcomes, we conducted a simulation that evaluates the performance of different variants of the classical strategy. Each variant is a cutoff rule: it prescribes how many offers to reject before accepting the first subsequent offer that is higher than any seen so far. The simulation compares 20 such decision strategies (as there was a maximum of 20 offers), which between them cover all possible stopping positions (i.e., a strategy can stop the search by accepting the nth offer in an iteration, where 1 ≤ n ≤ 20). Each simulation iteration generated a set of 20 random offers using the mean and standard deviation for each round from the values used in the experiment. The simulation was run separately for each of the 10 rounds, with 1.2 million iterations per round.

To compare the decision strategies across incentive structures, we evaluated each of the 20 strategies on two measures: the total chosen price it yields (in experimental currency units, ECUs) and the frequency (in %) with which it finds the optimal (highest) offer. The chosen price statistic indicates which decision strategy yields the highest payoff; the optimal offer count statistic shows which decision strategy finds the greatest number of optimal offers.
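The following is a minimal sketch of this kind of simulation (ours, not the original code). The decision strategies are cutoff rules, the offer distribution here is a placeholder Gaussian, and the iteration count is reduced; the actual simulation used each round's mean and standard deviation from Appendix D and 1.2 million iterations per round.

    import random

    def simulate_round(mean, sd, n_offers=20, iterations=20_000):
        """For each cutoff k (reject the first k offers, then accept the first offer
        exceeding all seen so far, or the last offer if forced), estimate the mean
        accepted price and the frequency of accepting the optimal (highest) offer."""
        price_sum = [0.0] * n_offers
        optimal_hits = [0] * n_offers
        for _ in range(iterations):
            offers = [random.gauss(mean, sd) for _ in range(n_offers)]
            best = max(offers)
            for k in range(n_offers):
                best_seen = max(offers[:k]) if k > 0 else float("-inf")
                accepted = offers[-1]  # forced acceptance of the final offer
                for offer in offers[k:]:
                    if offer > best_seen:
                        accepted = offer
                        break
                price_sum[k] += accepted
                optimal_hits[k] += accepted == best
        return ([s / iterations for s in price_sum],
                [100 * h / iterations for h in optimal_hits])

    # Placeholder distribution; the study drew offers using each round's actual
    # mean and standard deviation (Appendix D).
    mean_price, pct_optimal = simulate_round(mean=500, sd=200)
    for k, (price, pct) in enumerate(zip(mean_price, pct_optimal)):
        print(f"reject first {k:2d}: mean accepted price {price:6.1f}, optimal found {pct:4.1f}%")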

The results are shown in Figures A1 and A2. Perhaps the key feature to emerge from the simulation is that the optimal offer count is much more sensitive to the actual stopping position than is the chosen price, and hence than the value of any commission based on total price.


Figure A1. The frequency of finding the optimal offer and the stopping position for all decision strategies.


Figure A2. The chosen price and the stopping position for all decision strategies.



Appendix B


Table B1: Participant data for the condition groups.

Condition             Number of participants   Average cash payoff (NZD)
House selling
  Commission base              26                        9.9
  Best only                    30                       13.2
  Flat fee                     31                        9.5
No context
  Commission base              31                        9.4
  Best only                    29                       11.4
  Flat fee                     31                        9.5



Appendix C


Table C1: Time taken (minutes).

Condition             Average   Median   S.D.   Range (min–max)
House selling
  Commission base        6.6      6.3     2.9      1.7–16.5
  Best only              7.3      7       2.3      4.2–13.2
  Flat fee               5.7      4.9     1.9      2.7–9.8
No context
  Commission base        4.5      4.2     1.7      1.3–9.2
  Best only              5        5.1     1.8      1.6–9.0
  Flat fee               4.7      4.4     1.5      2.4–8.7

Table C1 shows the actual time taken to complete the search task (in minutes). Averaged across the house selling and no context conditions, participants in the best only incentive spent longer making decisions (M = 6.2) than those in the commission base (M = 5.5) and flat fee (M = 5.2) incentives.

A 2 × 3 factorial ANOVA found a significant main effect of the incentive structures on the average time taken (F(2, 172) = 3.70, MSerror = 4.20, p = 0.03, partial η2 = 0.04). Tukey HSD post hoc tests showed a significant difference between the best only and flat fee incentives (p = 0.02), but no significant difference between the commission base and best only (p = 0.13) or the commission base and flat fee (p = 0.73) incentives. Participants also spent significantly less time under no context (M = 4.7) than under the house selling context (M = 6.5) (F(1, 172) = 35.05, p < 0.001, partial η2 = 0.17). There was no statistically significant (p = 0.17) interactive effect of context and incentive structure.



Appendix D


Table D1. The actual price offer sequences used in the experiment.
Round 12345678910
Offer           
1 388739310420292494522252789341
2 488803290637264225252709829459
3 683221637727344272562966996453
4 321729372561266994255885241625
5 625159619643396602370737799504
6 744150207663445987292449722387
7 2792994555682665235339101088250
8 848818400636241683237250876308
9 2765852514223701400262933503492
10 6788757083364841574343491650455
11 4081304524142641413220450890353
12 4357955164791861844603941264588
13 6794814203325781081294899645438
14 46526074942445585353721740408
15 3935254105461892732975051179481
16 3974293247245651182452608250467
17 58862214411271305284827840418
18 358459480267235661436712272273
19 644748463357350785581838449554
20 49537461773337389197541105553



Appendix E


Table E1: Predicted stopping position for each round if the classical strategy is applied. Note: For 20 offers, the classical strategy indicates participants should reject the first 7 offers and, from offer 8 onwards, accept the first offer that exceeds any seen so far.

Round   Stopping position   Best?
  1             8            Yes
  2             8            No
  3            10            Yes
  4            20            Yes
  5            10            No
  6             9            No
  7            19            Yes
  8            20            No
  9            12            No
 10            20            No
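As a check on the note above (ours, not part of the original paper), the standard success-probability formula for cutoff rules (Gilbert & Mosteller, 1966) confirms that rejecting the first 7 of 20 offers maximizes the probability of selecting the best offer, at roughly 0.38.

    from fractions import Fraction

    def p_best(n, k):
        """Probability that the cutoff rule (reject the first k offers, then accept
        the first offer better than all seen so far) selects the best of n offers."""
        if k == 0:
            return Fraction(1, n)  # equivalent to simply accepting the first offer
        return Fraction(k, n) * sum(Fraction(1, j - 1) for j in range(k + 1, n + 1))

    n = 20
    best_k = max(range(n), key=lambda k: p_best(n, k))
    print(best_k, float(p_best(n, best_k)))  # 7 and roughly 0.384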


* University of Canterbury & Macquarie Graduate School of Management. Correspondence concerning this article should be addressed to Yu-Chin Hsiao, Psychology Department, University of Canterbury, Private Bag 4800, Christchurch, New Zealand. Email: hsiao.annie@gmail.com.

# University of Canterbury.

This paper is based on a chapter from Yu-Chin Hsiao’s dissertation written jointly at the University of Canterbury and Macquarie Graduate School of Management (Hsiao, 2018). Financial support was provided by the University of Canterbury, College of Business and Economics and Macquarie Graduate School of Management. We thank Matt Ward and colleagues who provided insight and expertise that greatly assisted the research.

Copyright: © 2020. The authors license this article under the terms of the Creative Commons Attribution 3.0 License.

1
Hey (1987) investigated the secretary problem, varying the ability to recall rejected offers, the availability of the distribution of the offers, and the presence of a monetary incentive. The payoff was the accepted offer minus the search cost. The participants also received a bonus regardless of their performance, ranging from £1 to £3, and the final payment was based on one randomly chosen round out of five. The results were compared with a previous experiment (Hey, 1982) in which no financial incentive was available. Performance was very similar, suggesting that the financial incentives incorporated in the later experiment had a relatively small impact on search behavior.

Campbell and Lee (2006) investigated a variant of the secretary problem that manipulated feedback and financial incentive. In their experiment, the participants were informed about the distribution of offers before making decisions and were required to choose the maximum value. Participants in the no-financial-incentive conditions were told to find as many correct answers as possible. For participants in the financial incentive conditions, the financial rewards were (partly) based on a quota-piece-rate scheme: participants received $5 regardless of their performance, and an additional $5 reward for every 12 correct responses once they had answered 40% of the problems correctly. Campbell and Lee found that people performed similarly in finding the correct answers with or without a monetary incentive.

