
March 15, 2005

Worried about AMA interviews?

Filed in Jobs
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

EVERYTHING YOU EVER WANTED TO KNOW ABOUT THE AMA INTERVIEWS FOR THE ACADEMIC MARKETING JOB MARKET


by Dan Goldstein March 2005

WHY AM I WRITING THIS?
I’ve seen the Marketing job market turn happy grad students into quivering masses of fear. I want to share my experiences and provide a bit of advice to make the whole process less mysterious.

WHAT DO I KNOW?
I’ve been on the AMA job market twice and the Psychology market once. I’ve had about 40 AMA interviews, as well as numerous campus visits, face-to-face interviews, offers, and rejections. I’m an outsider to Marketing who went on the market older and with more experience than the average rookie (35 years of age, with 8 years of research scientist, postdoc, visiting scholar, and industry positions). I’ve hired many people for many academic posts, so I know both sides.

HOW TO SET UP AMA INTERVIEWS
First, at least a couple of months before the conference, find out where it will be held. It’s called the Summer Educator’s Conference. Strange name, I know. Get a room at the conference hotel, preferably on the floor where the express elevator meets the local elevator for the upper floors. You’ll be hanging out on this floor waiting to change elevators anyway, so you might as well start there.

Next, get your advisor / sponsor to write a cover letter encouraging people to meet with you at AMA. It helps if this person is in Marketing. Get 1 or 2 other letters of recommendation, a CV, and some choice pubs. Put them in an envelope and mail them to a friend of your sponsor at the desired school. It should look like the letter is coming from your sponsor, even though you are doing the actual assembly and mailing. Repeat this process a bunch of times. It’s a good idea to hit a school with 2 packets, 3 if you suspect they’re a little disorganized. Certainly send one to the recruiting coordinator (schools often mail hiring announcements to your department’s secretary, which is how you’ll learn who is recruiting) and one to your sponsor’s friend. Mail to schools regardless of whether they are advertising a position or not. This is academia: nobody knows anything. This means you may be sending 50 or more packets.

THEN WHAT?
Wait to get calls or emails from schools wishing to set up AMA interviews with you. These calls may come in as late as one week before the conference. Some schools will not invite you for totally unknown reasons. You may get interviews from top-10 schools and be rejected by the 30th-ranked one. Don’t sweat it. Again, this is a land of total and absolute nonsense that you’re entering into. Also, know that just because you get an interview doesn’t mean they have a job. Sometimes schools don’t know until the last minute if they’ll have funding for a post. Still, you’ll want to meet with them anyway. After the AMA, you’ll hopefully get “fly-outs,” that is, offers to come and visit the campus and give a talk. This means you’ve made the top five or so. Most offers go down in December, but some come later. There’s a second market that happens after all the schools realize they’ve made offers to the same person. Of course, some schools get wise to this and don’t make offers to amazing people who they assume would never come. Perhaps we need some kind of point market to work out this part of the system.

THE “IT’S ALL ABOUT FRIENDSHIP” RULE
Keep in mind that you will leave this process with 1 or 0 jobs. Therefore, when talking to a person, the most likely outcome is that they will not be your colleague in the future. So you’d better work hard to make them your friend. You’ll need friends to collaborate, to get tenure, to get grants, and to go on the market again if you’re not happy with what you get.

HOW DO THE ACTUAL AMA INTERVIEWS GO?
At the pre-arranged time you will knock on the hotel room door. You will be let into a suite (p=.4) or a normal hotel room (p=.5, but see below). In the latter case, there will be people you once imagined as dignified sitting on beds. The other people in the room may not look at you when you walk in because they will be looking for a precious few seconds at your CV.

THE SEAT OF HONOR
There will be an armchair. Someone will motion towards the armchair, smile, and say, “You get the seat of honor!” This will happen at every school, every time, for three days. I promise.

THE TIME COURSE
There will be two minutes of pleasant chit-chat. They will propose that you talk first and they talk next. There will be a little table next to the chair on which you will put your flip book of slides. You will present for 30 minutes, taking their questions as they come. They will be very nice. When done, they will ask you if you have anything to ask them. You of course do not. You hate this question. You make something up. Don’t worry, they too have a spiel, and all you need to do is find a way to get them started on it. By the time they are done, it’s time for you to leave. The whole experience will feel like it went rather well.

PREDICTING IF YOU WILL GET A FLY-OUT
It’s impossible to tell from how it seems to have gone whether they will give you a fly-out or not. Again, this is total and absolute madness. They might not invite you because you were too bad (and they don’t want you), or because you were too good (and they think they don’t stand a chance of getting you).

DO INTERVIEWS DEVIATE FROM THAT MODEL?
Yes.

Sometimes instead of a hotel room, they will have a private meeting room (p=.075). Sometimes they will have a private meeting room with fruit, coffee, and bottled water (p=.025). Sometimes, they will fall asleep while you are speaking (p=.05). Sometimes they will be rude to you (p=.025). Sometimes, if it’s the end of the day, they will drink alcohol with you (p=.18, given that it’s the end of the day).
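For what it’s worth, the room-type probabilities quoted in this post (the p=.4 and p=.5 from the earlier section plus the two private-room cases here) cover the whole sample space. A quick sanity check, assuming the four room types are mutually exclusive and exhaustive:

```python
# Room-type probabilities gathered from the two sections above.
# Assumes the four outcomes are mutually exclusive and exhaustive
# (falling asleep, rudeness, and alcohol are behaviors, not room types).
room_probs = {
    "suite": 0.40,
    "normal hotel room": 0.50,
    "private meeting room": 0.075,
    "private meeting room with fruit": 0.025,
}

total = sum(room_probs.values())
print(f"Total probability: {total:.3f}")  # -> Total probability: 1.000
```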

HOW YOU THINK THE PROCESS WORKS
The committee has read your CV and cover letter and looked at your pubs. They know your topic and can instantly appreciate that what you are doing is important. They know the value of each journal you have published in and each prize you’ve won. They know your advisor and the strengths she or he instills into each student. They ignore what they’re supposed to ignore and assume everything they’re supposed to assume. They’ll discount the interview and fly you out based on your record.

HOW THE PROCESS REALLY WORKS
The interviewers will have looked at your CV for about one minute a couple months ago, and for a few seconds as you walked in the room. They will never have read your entire cover letter, and they will have forgotten most of what they did read. They couldn’t care less about your advisor and will get offended that you didn’t cite their advisor. They’ll pay attention to everything they’re supposed to ignore and assume nothing except what you repeat five times. Flouting 50 years of research in judgment and decision-making, they’ll discount your CV and fly you out based on your interview.

TWO WAYS TO GIVE YOUR SPIEL
1) The plow. You start at the first slide and go through them until the last slide. Stop when interrupted and get back on track.

2) The volley. Keep the slides closed and just talk with the people about your topic. Get them to converse with you, to ask you questions, to ask for clarifications. When you need to show them something, open up the presentation and show them just that slide.

I did the plow the first year and the volley the second year. I got four times more fly-outs the second year. Econometricians are now trying to determine if there was causality.

HOW TO ACT
Make no mistake, you are an actor auditioning for a part. You have to be like Santa Claus bringing the energy into the room with you. There will be none awaiting your arrival, I promise you. These people are tired. They’ve been listening to people in a stuffy hotel room from dawn till dusk for days. If you do an average job, you lose: You have to be two standard deviations above the mean to get a fly-out. So audition for the part, and make yourself stand out. If you want to learn how actors audition, read Audition by Michael Shurtleff.

HAVE A QUIRK
One of the biggest risks facing you is that you will be forgotten. Make sure the interviewers know something unusual about you. My quirk is that I’ve worked all over the world as a theater director for over 10 years. It’s got nothing to do with my research, but I can’t tell you the number of people who bring up this odd little fact when I do campus visits.

DON’T GIVE UP
Never think it’s hopeless. Just because you’re not two SDs above the mean at the school of your dreams doesn’t mean you’re not the dream candidate of another perfectly good school. The students are competing for schools and the schools are competing for students. If you strike out, you can just try again next year. I know a person in Psychology who got 70 rejections in one year. I know a person in Marketing who was told he didn’t place in the top 60 candidates at the 20th-ranked school. The subsequent year, both people got hired by top-5 departments. One of them is ridiculously famous!

RUMORS
Don’t gossip. All gossip can mess with your chances. Gossip that you are doing well can hurt you because schools will be afraid to invite you if they think you won’t come. Gossip that you are doing poorly can hurt you because schools that like you will be afraid to invite you if they think no one else does. Sometimes people will ask a prof at your school if you would come to their school, and the prof will then ask you. To heck with that. Just say that if they want to talk to you, they should deal with you directly.

The danger of rumors can be summed up by the following story. At ACR in 2003, I was having a beer with someone who confessed, “You know, my friend X at school Y told me that they want to hire you, but they’re afraid your wife won’t move to Z.” I was single.

ADDENDUM
Have your own advice to add? Want more detail on specific parts of the process? Let me know. dan at dangoldstein dot com.

March 14, 2005

Trust has a substitute?

Filed in Research News

TRUST BUT VERIFY: MONITORING IN INTERDEPENDENT RELATIONSHIPS


Effective organizations depend upon employees who will rely upon each other even when they do not trust each other. How then can managers promote trustworthy behavior to maintain the effectiveness of their organizations? Managers, employees, and customers routinely rely upon others to choose trustworthy actions: managers trust that employees will complete work, employees expect that they will be paid, and customers expect goods and services to be delivered on time. Trust reduces transaction costs, improves the efficiency of economic transactions, and enables managers to negotiate more efficiently and lead more effectively.

A trusting environment, though, is not necessarily the norm. In many organizational settings, including negotiations and accounting, people routinely engage in untrustworthy and unethical behavior. The effect of this kind of behavior on trust has been demonstrated in numerous studies, which identify various individual and contextual factors that influence trust. One such study, employing repeated prisoner’s dilemma and ultimatum games, found that responders were more likely to reject offers proposed by those who had used deception in the past.

What then can a manager do when trust is low? Monitoring is one approach. For example, some managers randomly administer drug tests, conduct audits, and even listen in on phone calls. Between 1990 and 1992, over 70,000 US companies purchased surveillance software at a cost of more than $500 million (Aiello, 1993). But popular as monitoring is, how do different monitoring systems relate to the actions people choose? A recent article by Maurice E. Schweitzer and Teck H. Ho examines the dynamics of monitoring. They find that frequent monitoring generally increases trust-like behavior: when monitoring is anticipated, trust-like behavior increases, so monitoring serves as a substitute for trust. However, the study also finds that people are particularly untrustworthy in rounds where they anticipate no monitoring. In short, monitoring is difficult to turn off once initiated.

ABSTRACT:

“For organizations to be effective, their employees need to rely upon each other even when they do not trust each other. One tool managers can use to promote trust-like behavior is monitoring. In this article we report results from a laboratory study that describes the relationship between monitoring and trust behavior. We randomly and anonymously paired participants (n=210) with the same partner, and had them make 15 rounds of trust game decisions. We find predictable main effects (e.g., frequent monitoring increases trust behavior) as well as interesting strategic behavior. Specifically, we find that anticipated monitoring schemes (i.e., when participants know before they make a decision that they either will or will not be monitored) significantly increase trust behavior in monitored rounds, but decrease trust behavior overall. Participants in our study also reacted to information they learned about their counterpart differently as a function of whether or not monitoring was anticipated. Participants were less trusting when they observed trustworthy behavior in an anticipated monitoring period than when they observed trustworthy behavior in an unanticipated monitoring period. In many cases, participants in our study systematically anticipated their counterpart’s untrustworthy behavior. We discuss implications of these results for models of trust and offer managerial prescriptions.”

QUOTES:

“While prior work has argued that trust is an essential ingredient for managerial effectiveness (Atwater, 1988), many relationships within organizations lack trust. For example, managers within large organizations with high turnover may not have sufficient time to develop trust among all of their employees. Even in the absence of actual trust, however, managers need to induce trust-like behavior. In this article we conceptualize monitoring as a substitute for trust, and we demonstrate that monitoring systems significantly influence trust-like behavior.”

“We conceptualize monitoring as a tool to produce trust-like behavior, and we consider ways in which monitoring changes incentives for engaging in trust behavior. We adopt a functional view of trust by assuming individuals calculate the costs and benefits of engaging in trust behavior in a manner consistent with Ajzen’s (1985, 1987) theory of planned behavior. In particular, we focus on the role of monitoring systems in altering the expected costs and benefits of choosing trust-like actions. This conceptualization matches our experimental design, because participants in our study are paid for their outcomes and remain anonymous to their counterpart. There is no way for participants to find out who their partners are, and as a result, participants in our study face a well-defined “shadow of the future” defined by the future rounds of a repeated trust game.”

“Consider monitoring regimes that range from one extreme, complete monitoring, to another extreme, no monitoring. Under complete monitoring, participants have an incentive to exhibit trust-like behavior, especially in early rounds, because their counterpart will observe their actions and can reciprocate in future rounds. On the other hand, under no monitoring, participants have little incentive to engage in trust-like behavior, because their counterpart cannot observe their actions. In this case, the expected benefits of choosing untrustworthy actions exceed the expected benefits of choosing trust-like actions. If players are self-interested, opportunism is likely to prevail. The more frequent the monitoring, the greater the expected benefits of choosing trust-like actions. Consequently, we hypothesize that monitoring frequency will be positively correlated with trust-like behavior. With more frequent monitoring, untrustworthy behavior is more likely to be detected and subsequently punished or reciprocated. As a result, if players are calculative and maximize their total payoff for the entire interaction, they will exhibit more trust-like behavior when they experience more frequent monitoring.”
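The cost–benefit calculus described in this passage can be illustrated with a toy model. All payoff numbers below are invented for illustration and are not taken from the Schweitzer and Ho paper: suppose an untrustworthy action yields a temptation payoff T if it goes unobserved, but a punishment payoff P if it is observed and reciprocated, while a trust-like action yields a steady payoff R. As monitoring frequency m rises, defection stops paying:

```python
# Toy expected-payoff sketch of the monitoring argument.
# T, P, R are hypothetical payoffs, not values from Schweitzer & Ho.
T = 10.0  # temptation payoff from an undetected untrustworthy action
P = 2.0   # payoff when an untrustworthy action is observed and punished
R = 6.0   # steady payoff from choosing trust-like behavior

def expected_defection_payoff(m):
    """Expected payoff of an untrustworthy action under monitoring frequency m."""
    return (1 - m) * T + m * P

for m in [0.0, 0.25, 0.5, 0.75, 1.0]:
    choice = "defect" if expected_defection_payoff(m) > R else "trust"
    print(f"m={m:.2f}: E[defect]={expected_defection_payoff(m):.2f} vs R={R:.2f} -> {choice}")
```

With these made-up numbers, defection dominates below a monitoring frequency of 0.5 and trust-like behavior dominates above it, mirroring the hypothesized positive correlation between monitoring frequency and trust-like behavior.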

“[A]nticipated monitoring schemes decrease trust-like behavior in periods of no monitoring by harming trust development. Players who observe others choosing trustworthy actions in anticipated monitoring periods may attribute the trustworthy behavior they observe to the monitoring scheme rather than the trustworthiness of the individual. As a result, players who observe trustworthy behavior in anticipated monitoring periods may be less likely to assume that their counterpart will choose trustworthy actions when they are not monitored. As a result, trustworthy actions are less diagnostic of true trustworthiness and less effective in building trust when the observed trust behavior occurs in an anticipated period of monitoring. For both of these reasons, we hypothesize that anticipated monitoring will decrease trust-like behavior in non-monitored rounds.”

“First, we find that frequent monitoring increases overall trust-like behavior. Second, we find that anticipated monitoring harms overall trust-like behavior, but significantly increases trust-like behavior for periods in which monitoring is anticipated; in our study, Even players were particularly trustworthy in anticipated monitoring rounds, but were particularly untrustworthy when they anticipated no monitoring.”

ABOUT THE AUTHORS:

Maurice E. Schweitzer


Maurice E. Schweitzer is Assistant Professor of Operations and Information Management at The Wharton School, University of Pennsylvania. He received his PhD from the University of Pennsylvania in 1993. His research interests include deception and trust, negotiations, and behavioral decision research. Currently he is working on projects involving the influence of emotions on trust, trust recovery, envy and unethical behavior, and insurance fraud.

Maurice E. Schweitzer Home Page at Wharton

Teck H. Ho


Teck H. Ho is the William Halford Jr. Family Professor of Marketing and Executive Director of the Berkeley Experimental Social Sciences Laboratory (Xlab). He received his PhD in Decision Sciences from The Wharton School, University of Pennsylvania, in 1993. His current projects involve B2B contract design, strategic IQ, and trust building.

Teck H. Ho Homepage at The University of California at Berkeley.

March 9, 2005

Honesty, no longer the best policy?

Filed in Research News

THE DIRT ON COMING CLEAN: PERVERSE EFFECTS OF DISCLOSING CONFLICTS OF INTEREST


Conflicts of interest have been at the heart of many business scandals. Investment bankers who get paid more when their clients trade more, or doctors who get paid more when their patients require more care, are both examples of advisors with a conflict of interest. The usual proposed remedy is full disclosure of each party’s conflict of interest. A recent article out of Carnegie Mellon University by Daylian Cain, Don Moore, and George Loewenstein investigates to what extent the receivers of advice use this knowledge to counteract biases in the advice, and how disclosure of conflicts affects the advice that is given. The authors argue that such disclosure may not remedy conflicts of interest and could possibly aggravate them.

ABSTRACT

“Conflicts of interest can lead experts to give biased advice. While disclosure has been proposed as a potential solution to this, we show that disclosure can have perverse effects, and might even increase bias. Disclosure may increase bias because advisors feel morally licensed and strategically encouraged to exaggerate their advice even further from the truth. As for those receiving the advice, proper use of the disclosure depends on understanding how the conflict of interest biased the advice and how that advice impacted them. Because people lack this understanding, disclosure can fail to solve the problems created by conflicts of interest.”

QUOTES

“Perhaps, however, the benefits of disclosure should not be accepted quite so quickly. For disclosure to be effective, the recipient of advice must understand how the conflict of interest has influenced the advisor and must be able to correct for that biasing influence. In many important situations, however, this understanding and ability may be woefully lacking. For example, imagine a patient whose physician advises, “Your life is in danger unless you take medication X,” but who also discloses, “The medication’s manufacturer sponsors my research.” Should the patient take the medication? If not, what other medication? How much should the patient be willing to pay to obtain a second opinion? How should the two opinions be weighed against each other? The typical patient may be hard-pressed to answer such questions.”

“And what is the impact of disclosure on providers of advice? In the example just given, is it possible that the physician’s behavior might be affected by disclosure? For example, might the physician be more likely to exaggerate the danger of not taking the medication in order to neutralize the anticipated “warning” effect of the disclosure?”

“First, estimating the impact of a conflict of interest on an advice giver is an extraordinarily difficult problem that requires both economic and psychological insight. To properly estimate the degree to which a particular advisor is biased by a conflict of interest, one would want to know the extent to which the advisor embraces professional norms or is instead corrupt. One would also want to know how tempting the advisor finds the incentives for providing biased advice, and one would want to have an accurate mental model of how such incentives can bias advice. However, prior research suggests that most people have an incorrect understanding of the psychological mechanisms that transform conflicts of interest into biased advice.”

“Research on what has been called the “failure of evidentiary discreditation” shows that when the evidence on which beliefs were revised is totally discredited, those beliefs do not revert to their original states but show a persistent effect of the discredited evidence (Skurnik, Moskowitz, and Johnson 2002; Ross, Lepper, and Hubbard 1975). Furthermore, attempts to willfully suppress undesired thoughts can lead to ironic rebound effects, in some cases even increasing the spontaneous use of undesired knowledge (Wegner 1994).”

“Disclosure, at least in the context of the admittedly stylized experiment discussed in this paper, benefited the providers of information but not its recipients. To the extent that a similar effect occurs outside the experimental laboratory, disclosure would supplement existing benefits already skewed toward information providers. In particular, disclosure can reduce legal liability and can often forestall more substantial institutional change. We do not believe that this is a general result—that is, that disclosure always benefits providers and hurts recipients of advice—but it should challenge the belief that disclosure is a reliable and effective remedy for the problems caused by conflicts of interest.”

EXPERIMENTAL METHOD

“The estimation task involved estimating the values of jars of coins. Estimators were paid according to the accuracy of their estimates, and advisors were paid, depending on the experimental condition, on the basis of either how accurate or how high (relative to actual values) the estimators’ estimates were.

In each round, advisors took turns at closely examining a jar of coins and then completed an advisor’s report. Each advisor’s report contained the advisor’s suggestion of the value of the jar in question and provided a space in which the estimator would respond with an estimate of the jar’s worth.

After seeing the reports, the estimators saw the jar in question — but only from a distance of about 3 feet and only for about 10 seconds: the experimenter held the jar in front of estimators, turning the jar while walking along a line across the room and back. Estimators then attempted to estimate the value of the coins in the jar.

The amount of money in each of the six jars (M, N, P, R, S, and T) was determined somewhat arbitrarily to lie between $10 and $30, and advisors were informed of this range. Estimators were told that advisors had information about the range of actual values but were not given this range of values themselves. In fact, the values of the jars were M = $10.01, N = $19.83, P = $15.58, R = $27.06, S = $24.00, and T = $12.50. In the first three rounds, neither estimators nor advisors received feedback about their actual payoffs or about actual jar values. In each of the last three rounds, however, after advisors had given their advice and estimators had made their estimates, each advisor was shown the estimate of the estimator to whom their advice was given on the previous jar and, for each of the feedback rounds, the actual value of the jar in question was announced to everyone at the end of the round. Since payoff schedules were provided to all participants at the beginning of the experiment, feedback allowed both advisors and estimators to calculate how much money they had made in the previous round before continuing on to the next round. While estimators did not see the advisor’s instructions, advisors saw a copy of the estimator’s instructions and thus could also use feedback to calculate their estimator’s payoffs.

Both estimators and advisors were paid on the basis of the estimator’s estimates. Estimators were always paid on the basis of the accuracy of their own estimates. Advisors’ remuneration depended on the condition to which they were assigned. In the “accurate” condition, each advisor was paid according to how accurate the estimator’s estimate was, and this was disclosed prominently on the advisor’s report immediately under the advisor’s suggestion. (“Note: The advisor is paid based on how accurate the estimator is in estimating the worth of the jar of coins.”) In the “high/undisclosed” and “high/disclosed” conditions, each advisor was paid on the basis of how high the estimator’s estimate was. This conflict of interest was not disclosed in the high/undisclosed condition but was prominently disclosed in the high/disclosed condition, immediately under the advisor’s suggestion. (“Note: The advisor is paid based on how high the estimator is in estimating the worth of the jar of coins.”) In addition to being remunerated on the basis of their estimators’ estimates, all advisors had an additional opportunity to earn money: after they had completed the report for each jar, advisors were asked to give their own personal best estimates of the true value of the coins in the jar and were rewarded on the basis of accuracy.”
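The jar values reported in the quoted method section are easy to sanity-check against the stated $10–$30 range (values transcribed from the passage above):

```python
# Jar values as reported in the quoted experimental method.
jar_values = {"M": 10.01, "N": 19.83, "P": 15.58, "R": 27.06, "S": 24.00, "T": 12.50}

# Every jar should lie within the range the advisors were told about.
assert all(10 <= v <= 30 for v in jar_values.values()), "a jar falls outside the stated range"
print(f"min=${min(jar_values.values()):.2f}, max=${max(jar_values.values()):.2f}")
# -> min=$10.01, max=$27.06
```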

[Figure: Advice provided for each jar, by condition]

ABOUT THE AUTHORS

Daylian Cain


“I am a 4th year doctoral student at Carnegie Mellon University’s Tepper School of Business, (Pittsburgh, PA) where I concentrate on Organizational Behavior & Theory. My main area of interest is decision making — both the normative aspects (as informed by Ethics and Economics) and the descriptive aspects (as informed by Social Psychology and Behavioral Decision Research). I also earned an academic background in Philosophy, where I was interested in theories of rationality and morality. These combined interests lead me to conduct research and lecture on topics such as managerial decision making, business ethics, and negotiations. ”

Daylian Cain Home Page

Don Moore


Don Moore is an Assistant Professor in the Organizational Behavior and Theory group at the Tepper School of Business at Carnegie Mellon University. “The human mind, for all its miraculous power, does not come with an instruction manual. This ignorance has tremendous consequences, not only for the way we make decisions as individuals, but also for interactions (including negotiations), and for the organizations we create. My work seeks to fill important gaps in our understanding of ourselves and document the implications of these discoveries on social, organizational, and market outcomes.”

Don Moore’s Home Page

George Loewenstein


George Loewenstein is a professor in the Department of Social and Decision Sciences at Carnegie Mellon University. “My primary research focus is on intertemporal choice–decisions involving trade-offs between costs and benefits occurring at different points in time. Because most decisions have consequences that are distributed over time, the applications of intertemporal choice are numerous (e.g. saving behavior, consumer choice, labor supply).

In the past, formal analyses of intertemporal choice in economics and other social science disciplines have been dominated by a single model–the discounted utility model. I try to identify deficiencies with this model, explain these deficiencies in psychological terms and propose alternative models.”

George Loewenstein Home Page

March 1, 2005

Duncan Luce

Filed in Profiles

DECISION SCIENCE RESEARCHER PROFILE: DUNCAN LUCE AWARDED 2003 NATIONAL MEDAL OF SCIENCE


R. Duncan Luce is a Distinguished Research Professor of Cognitive Science and Economics at UC Irvine. At the age of 79, Luce, in recognition of his work in the behavioral and social sciences, has been named one of eight U.S. scientists and engineers to receive the 2003 National Medal of Science, the highest scientific honor in the United States. On March 14, 2005, President George W. Bush will present them with the honor at the White House. Why it is awarded two years after 2003 is a mystery to us.

The National Medal of Science honors individuals both for pioneering scientific research that has led to a better understanding of the world, and for contributing to innovations and technologies that give the United States a global economic edge. The award was established in 1959 and is administered by the National Science Foundation.

Luce received his PhD in mathematics from the Massachusetts Institute of Technology in 1950. He feels there is an inherent link between math and psychology. “Given the fact that people manage to live together in a fairly reasonable way most of the time, there have to be behavioral regularities,” he says. “Mathematical behavioral science attempts to formulate such regularities.”

Professor Luce’s contributions to mathematical psychology have had a great influence on how the field examines decision making and sensory psychology. Luce combines mathematical theory and experiments in an effort to understand features of individual behavior and orientation to the world. His method includes the development of formal mathematical models that have, in turn, helped shape contemporary economics.

As a child, Duncan Luce liked painting pictures and was fascinated with airplanes. His parents swayed him against an artistic career and an astigmatism kept him from becoming a military pilot. But Luce has been flying high for years and continues to create new ideas in his field. -from Luce’s homepage at UC Irvine

Luce served as advisor to many successful graduate students, including Elke Weber, Director of the Columbia University Center for the Decision Sciences and past President of the Society for Judgment and Decision Making.

RESEARCH INTERESTS:

“The representational theory of measurement concerns the types of data that can be summarized in some numerical way. Much general theory has been developed (Foundations of Measurement, vols. 1, 2, 3, 1971, 1989, 1990, with D.H. Krantz, P. Suppes, & A. Tversky) and still is being actively explored. Although some of my efforts lie in this general area, over the last seven years I have mostly been applying some of these ideas to individual decision making, where the numerical measures are called utility and subjective probability or weights. Recently I have been examining the kinds of behavioral laws that link riskless utility and risky utility. Accompanying the theoretical work is an empirical program in which these plausible behavioral properties are tested in computer-based, laboratory experiments. There are many tricky and interesting questions of how best to evaluate these properties. The results to date exhibit highly regular patterns that are nicely summarized by numerical models.” -from Professor Luce’s Homepage

RECENT BOOKS

(2000) Utility of Gains and Losses: Measurement-Theoretical and Experimental Approaches. Mahwah, NJ: Lawrence Erlbaum Associates.

(1997) Choice, Decision, and Measurement: Essays in Honor of R. Duncan Luce

(1993) Sound & Hearing: A Conceptual Introduction. Hillsdale, NJ: Erlbaum.

(1991) Response Times: Their Role in Inferring Elementary Mental Organization. New York: Oxford University Press.

(1990) With D.H. Krantz, P. Suppes & A. Tversky. Foundations of Measurement, Vol. III, Academic Press.

February 28, 2005

ACR 2005 Call For Papers

Filed in Conferences
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

2005 NORTH AMERICAN CONFERENCE OF THE ASSOCIATION FOR CONSUMER RESEARCH CALL FOR PAPERS

The 2005 North American Conference of the Association for Consumer Research will be held at the Crowne Plaza Hotel located on the Riverwalk in San Antonio, Texas, from Thursday September 29 through Sunday October 2, 2005.

sanan_pm.jpg

Submission and Decision Deadlines
Submissions for competitive papers, working papers, roundtables, and special topic session proposals must be received no later than Friday, March 18, 2005. Notification of acceptance in these four categories will be made by Friday July 15, 2005. The entry deadline for film festival submissions is July 1, 2005. Notification of accepted films will be August 1, 2005.

To give as many people as possible the opportunity to participate in ACR 2005, note the requirement that each ACR participant may present in Special Topic and/or Competitive Paper sessions no more than twice during the conference.

General Submission Requirements and Procedures
Except for film festival submissions, all submissions, reviewing, and notification regarding ACR 2005 will be conducted electronically through the web site. The ACR web site (http://www.acrwebsite.org/) will contain a link to the 2005 conference site, which will be updated to accept all the required information through an interface that eliminates the need for e-mail submissions. The 2005 ACR conference web site will be available for submissions between Monday, February 7 and midnight PST of the deadline, Friday, March 18, 2005.

More details below. This information was culled from http://www.acrwebsite.org/conference/info.asp
Continue reading ACR 2005 Call For Papers

February 13, 2005

Computer-savvy postdoc wanted @ Columbia

Filed in Jobs
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

POSTDOCTORAL FELLOWSHIP – COLUMBIA UNIVERSITY CENTER FOR THE DECISION SCIENCES

ADDENDUM: This post has been filled, 4/2005.

Columbia University’s Center for the Decision Sciences anticipates hiring a computer-savvy postdoctoral Associate Director for a period of one to two years, with a starting date of August 2005. The Associate Director will carry out research, coordinate a year-long speaker series, administer the Center and run the CDS Online Virtual Laboratory server.

The Center for the Decision Sciences at Columbia University is directed by Professors Eric Johnson, David Krantz, and Elke Weber and includes researchers from psychology, marketing, management, medicine, law and beyond. Please visit the website for more information: http://cebiz.org/cds

This position is open to candidates with excellent computer skills and training in cognitive psychology or related disciplines who have recently earned their Ph.D., or who are expecting their degree in 2005 on a topic relevant to the psychology of decision making.

The candidate should be comfortable running a Linux Web server as well as coding HTML and dynamic scripting languages such as PHP and JavaScript. Experience with SQL, databases, SAS and lightweight UNIX systems administration and security is very much recommended but not essential.

One of the current Center foci is a large Preferences as Memory project, which looks at the role of psychological memory processes in the formation of preferences, inferences, and choice. Experience and interest in the psychology of memory would be a great asset.

To apply, please send a CV, two letters of recommendation, up to 3 reprints, and a cover letter describing research interests. In your letter, please describe computer skills, (memory) research expertise, and experience carrying out experimental research.

Review of applications will continue until the position is filled.

Applications should be sent to

Daniel Goldstein, Ph.D. Associate Director
Columbia University, Center for the Decision Sciences
420 W. 118th #805A MC3355, New York, NY 10027

Columbia University is an Affirmative Action, Equal Opportunity Employer.

January 23, 2005

Egon Brunswik

Filed in Profiles
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

DECISION SCIENCE RESEARCHER PROFILE: EGON BRUNSWIK

Egon Brunswik (1903-1955), a pioneer of cognitive psychology, devoted much of his work to the extension and elaboration of the fundamental point that psychology should focus as much on the properties of the organism’s environment as it does on the organism itself. The environment that an organism comes into contact with is uncertain and probabilistic. Adaptation to a probabilistic world requires an organism to adopt a probability-based strategy for survival. The organism must learn to rely upon uncertain information about the world. His “probabilistic functionalism” was the first behavior system to be founded on Probabilism, which today increasingly attracts the attention of learning, thinking, decision process, perception, and communication theorists.

In 1921 he graduated from the Theresianische Akademie after receiving training in mathematics, science, classics, and history. He then studied engineering and passed the state examinations, but afterward enrolled as a student of psychology at the University of Vienna. Here he became an assistant in Karl Buehler’s Psychological Institute (among his student colleagues were Paul F. Lazarsfeld and Konrad Lorenz) and received a Ph.D. in 1927. While a graduate student in psychology, he also passed the state examination for Gymnasium teachers in mathematics and physics.

His historical as well as theoretical analysis also led him to criticize orthodox methods of experimental design (particularly the “rule of one variable”) and to suggest methods for avoiding what he believed to be an unfortunate artificiality inherent in classical experimental procedures. His main field of empirical research was perception, but he also brought his probabilistic approach to bear on problems of interpersonal perception, thinking, learning, and clinical psychology.

Brunswik’s life did not come to a happy end. He committed suicide in 1955.

-from The Brunswik Society homepage

Egon Brunswik redefined the fundamental task of psychology (a focus on distal-distal relations) based on a thorough study of the history of psychology as a science, came up with a basic theoretical framework (the lens model and probabilistic functionalism), and devised an original methodology of psychological research (representative design of experiments). His papers on perception (introducing the concept of the ecological validity of perceptual cues) predated the coming of “ecological psychology”. Building on his ideas, the social judgment theory and cognitive continuum theory of Kenneth R. Hammond (University of Colorado) and others have inspired a broad scope of research on perception, cognitive conflict resolution, decision making, and policy formation and analysis that blooms on an international scale today. Thousands of empirical studies and techniques using computer-assisted counseling have been developed since.

-from the University of Chicago Press.
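The lens model mentioned above can be made concrete with a small simulation: the criterion (the distal variable) and the judge’s estimate are each modeled as noisy linear functions of the same proximal cues, and “achievement” is the correlation between judgment and criterion. This is only an illustrative sketch; the cue weights and noise levels below are invented, not taken from any Brunswikian study.

```python
# Minimal lens-model sketch: environment and judge both weight the
# same cues (differently), both sides are probabilistic, and
# achievement is the judgment-criterion correlation.
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Each case exposes three cues; weights and noise are hypothetical.
cases = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
criterion = [0.6 * c[0] + 0.3 * c[1] + 0.1 * c[2] + random.gauss(0, 0.5)
             for c in cases]
judgment = [0.5 * c[0] + 0.4 * c[1] + 0.1 * c[2] + random.gauss(0, 0.5)
            for c in cases]

achievement = corr(judgment, criterion)  # Brunswik's achievement index
assert 0 < achievement < 1
```

Because the judge’s cue weights roughly track the environment’s, achievement is high but imperfect, which is exactly the probabilistic adaptation Brunswik emphasized.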

QUOTE:

“Perception, then, emerges as that relatively primitive, partly autonomous, institutionalized, ratiomorphic subsystem of cognition which achieves prompt and richly detailed orientation habitually concerning the vitally relevant, mostly distal aspects of the environment on the basis of mutually vicarious, relatively restricted and stereotyped, insufficient evidence in uncertainty-geared interaction and compromise, seemingly following the highest probability for smallness of error at the expense of the highest frequency of precision. That’s a simplification. Perception is standing on the sidewalk, watching all the girls go by.”

–Frank Rosenblatt, quoting Egon Brunswik’s “Perception and the Representative Design of Psychological Experiments,” and The New Yorker, December 19, 1959, in “Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms”, 1962.

WORKS BY BRUNSWIK:

*1934 Wahrnehmung und Gegenstandswelt: Grundlegung einer Psychologie vom Gegenstand her. Leipzig: Deuticke.
*1937 Psychology as a Science of Objective Relations. Philosophy of Science 4:227-260.
*1943 Organismic Achievement and Environmental Probability. Psychological Review 50:255-272.
*(1947) 1956 Perception and the Representative Design of Psychological Experiments. 2d ed., rev. & enl. Berkeley: Univ. of Calif. Press.
*1952 The Conceptual Framework of Psychology. Univ. of Chicago Press.
*1955 Representative Design and Probabilistic Theory in a Functional Psychology. Psychological Review 62:193-217.

January 13, 2005

Are the two birds in the bush a better choice than the one in your hand?

Filed in Research News
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

SEPARATE NEURAL SYSTEMS VALUE IMMEDIATE AND DELAYED MONETARY REWARDS

Bird-shadow-04.gif

People seem to alternate between indulging in what is immediately available and deciding patiently, knowing that patience often wins in the long run. If you offer someone $10 today or $11 tomorrow, they will likely choose the $10 today. But if you offer someone $10 in a year or $11 in a year and a day, they will probably choose the $11. The relative values of options may be discounted according to how long one expects to wait until payoff. To date, little research has investigated the source of this tension between short-term and long-term preferences. A recent study in Science by Samuel McClure, David Laibson, George Loewenstein, and Jonathan Cohen suggests that the discrepancy arises because two separate neural systems value immediate and delayed monetary rewards.
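The $10/$11 reversal above falls out of quasi-hyperbolic (“beta-delta”) discounting, the model behind the study’s beta/delta terminology. A minimal sketch follows; the parameter values are illustrative assumptions, not estimates from the paper.

```python
# Quasi-hyperbolic discounting: immediate rewards escape the extra
# beta penalty that all delayed rewards incur. Parameters are
# illustrative, not fitted values.

def discounted_value(amount, delay_days, beta=0.7, delta=0.9997):
    """Present value of `amount` received after `delay_days` days."""
    if delay_days == 0:
        return amount
    return beta * (delta ** delay_days) * amount

# $10 today vs. $11 tomorrow: the immediate reward wins.
assert discounted_value(10, 0) > discounted_value(11, 1)

# $10 in 365 days vs. $11 in 366 days: the larger reward wins,
# a preference reversal that pure exponential discounting (beta = 1)
# cannot produce, since both options would shrink by the same factor.
assert discounted_value(11, 366) > discounted_value(10, 365)
```

With beta = 1 the model reduces to standard exponential discounting and the choice between the two pairs would always agree; the beta < 1 penalty on all delayed rewards is what generates the impatience for “today”.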

ABSTRACT:
“When humans are offered the choice between rewards available at different points in time, the relative values of the options are discounted according to their expected delays until delivery. Using functional magnetic resonance imaging, we examined the neural correlates of time discounting while subjects made a series of choices between monetary reward options that varied by delay to delivery. We demonstrate that two separate systems are involved in such decisions. Parts of the limbic system associated with the midbrain dopamine system, including paralimbic cortex, are preferentially activated by decisions involving immediately available rewards. In contrast, regions of the lateral prefrontal cortex and posterior parietal cortex are engaged uniformly by intertemporal choices irrespective of delay. Furthermore, the relative engagement of the two systems is directly associated with subjects’ choices, with greater relative fronto-parietal activity when subjects choose longer-term options.”

QUOTES:
“In Aesop’s classic fable, the ant and the grasshopper are used to illustrate two familiar, but disparate, approaches to human intertemporal decision making. The grasshopper luxuriates during a warm summer day, inattentive to the future. The ant, in contrast, stores food for the upcoming winter. Human decision makers seem to be torn between an impulse to act like the indulgent grasshopper and an awareness that the patient ant often gets ahead in the long run. This research is unified by the idea that consumers behave impatiently today but prefer/plan to act patiently in the future.”

“Impulsive preference reversals are believed to be indicative of disproportionate valuation of rewards available in the immediate future. Some authors have argued that such dynamic inconsistency in preference is driven by a single decision-making system that generates the temporal inconsistency, while other authors have argued that the inconsistency is driven by an interaction between two different decision-making systems. Specifically, we hypothesize that short-run impatience is driven by the limbic system, which responds preferentially to immediate rewards and is less sensitive to the value of future rewards, whereas long-run patience is mediated by the lateral prefrontal cortex and associated structures, which are able to evaluate trade-offs between abstract rewards, including rewards in the more distant future.”

Fig-1-temporal-unequal-rewa.gif

Brain regions that are preferentially activated for choices in which money is available immediately (beta areas). (A) A random effects general linear model analysis revealed five regions that are significantly more activated by choices with immediate rewards, implying d = 0 (at P < 0.001, uncorrected; five contiguous voxels). These regions include the ventral striatum (VStr), medial orbitofrontal cortex (MOFC), medial prefrontal cortex (MPFC), posterior cingulate cortex (PCC), and left posterior hippocampus (table S1). (B) Mean event-related time courses of beta areas (dashed line indicates the time of choice; error bars are SEM; n = 14 subjects). BOLD signal changes in the VStr, MOFC, MPFC, and PCC are all significantly greater when choices involve money available today (d = 0, red traces) versus when the earliest choice can be obtained only after a 2-week or 1-month delay (d = 2 weeks and d = 1 month, green and blue traces, respectively).

Fig-2-temporal-unequality-.gif

Brain regions that are active while making choices independent of the delay (d) until the first available reward (delta areas). (A) A random effects general linear model analysis revealed eight regions that are uniformly activated by all decision epochs (at P < …).

Fig-3-Temporal-unequal-rewa.gif

Differences in brain activity while making easy versus difficult decisions separate delta areas associated with decision making from those associated with non-decision-related aspects of task performance. (A) Difficult decisions were defined as those for which the difference in dollar amounts was between 5% and 25%. (B) Response times (RT) were significantly longer for difficult choices than for easy choices (P < …).

fig-4-temporal-unequal-rewa.gif

Greater activity in delta than beta areas is associated with the choice of later larger rewards. To assess overall activity among beta and delta areas and to make appropriate comparisons, we first normalized the percent signal change (using a z-score correction) within each area and each subject, so that the contribution of each brain area was determined relative to its own range of signal variation. Normalized signal change scores were then averaged across areas and subjects separately for the beta and delta areas (as identified in Figs. 1 and 2). The average change scores are plotted for each system and each choice outcome. Relative activity in beta and delta brain regions correlates with subjects’ choices for decisions involving money available today. There was a significant interaction between area and choice (P < …).

Samuel McClure

Samuel McClure Homepage

David Laibson

David Laibson is a professor of Economics at Harvard University. He received his PhD in economics from the Massachusetts Institute of Technology in 1994. His primary fields of interest are macroeconomics (particularly consumption and savings), intertemporal choice, psychology, and experimental economics.

David Laibson Homepage

George Loewenstein

George Loewenstein has been a professor at Carnegie Mellon University since 1990.
He received his PhD with distinction in economics from Yale University in 1985. “My primary research focus is on intertemporal choice–decisions involving trade-offs between costs and benefits occurring at different points in time. Because most decisions have consequences that are distributed over time, the applications of intertemporal choice are numerous (e.g. saving behavior, consumer choice, labor supply).” From George Loewenstein Homepage

Jonathan D. Cohen

Jonathan Cohen has joint appointments in Psychiatry in the Western Psychiatric Institute and Clinic at the University of Pittsburgh and Psychology at Carnegie Mellon University. He received his PhD in cognitive psychology from Carnegie Mellon University in 1990. “Research in my laboratory focuses on the neurobiological mechanisms underlying cognitive control, and their disturbance in psychiatric disorders such as schizophrenia and depression. Cognitive control is the ability to guide attention, thought and action in accord with goals or intentions. One of the fundamental mysteries of neuroscience is how this capacity for coordinated, purposeful behavior arises from the distributed activity of many billions of neurons in the brain.” From the Jonathan Cohen Homepage

December 20, 2004

Society for Consumer Psychology 2005 Winter Conference

Filed in Conferences
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

SCP 2005 WINTER CONFERENCE

scp-logo.gif

The upcoming Society for Consumer Psychology (SCP) winter conference will be held from Thursday, February 24th to Monday, February 28th, 2005, at the TradeWinds Island Grand Resort in St. Pete Beach, Florida.

“The Society for Consumer Psychology represents the interests of behavioral scientists in the fields of psychology, marketing, advertising, communication, consumer behavior, and other related areas. Some members of the Society are mainly interested in generating applied knowledge to solve specific marketing related problems, while others focus on generating basic knowledge to contribute to theoretical and conceptual foundations of consumer psychology. The Society encourages all members to share their knowledge and contribute to the discipline of consumer psychology as a whole through contributions in conferences, journal articles, and book chapters”. -From the Society for Consumer Psychology Home Page

To register for the winter conference, go to the SCP home page and click on “Winter 2005 SCP Conference Online Registration” or click here to link directly. A preliminary program for the SCP winter conference is available here or at the SCP home page.

Conference Chairs:

Anne Brumbaugh, Wake Forest University Home Page
Geraldine R. Henderson, University of Texas at Austin Home Page

Conference Co-Chairs:

Amar Cheema, Washington University in St. Louis Home Page
Scott A. Hawkins, University of Toronto Home Page
Joydeep Srivastava, University of Maryland Home Page

December 13, 2004

How can somebody make a decision without all the facts? Well, there’s actually no other way.

Filed in Books
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

BOUNDED RATIONALITY: THE ADAPTIVE TOOLBOX by Gerd Gigerenzer and Reinhard Selten.

Bounded-Rationality.gif

How do real people make decisions in an uncertain world? In the book Bounded Rationality: The Adaptive Toolbox, Gigerenzer and Selten (et al.) investigate the constraints that limited information and time place upon human logic and reasoning in the decision-making process. The authors view bounded rationality neither as the optimization of limited resources under constraints nor as a study of the failings of human reasoning capability.

QUOTES:
“Visions of rationality do not respect disciplinary boundaries. Economics, psychology, animal biology, artificial intelligence, anthropology, and philosophy struggle with models of sound judgment, inference, and decision making. These models evolve over time, just as the idea of rationality has a history, a present, and a future (Daston 1988). Over the last centuries, models of rationality have changed when they conflicted with actual behavior, yet, at the same time, they provided prescriptions for behavior. This double role--to describe and prescribe--does not map easily onto a sharp divide between descriptive and normative models, which plays down the actual exchange between the psychological and the rational (Gigerenzer et al. 1989). Herbert Simon’s notion of bounded rationality was proposed in the mid-1950s to connect, rather than to oppose, the rational and the psychological (Simon 1956). The aim of this book is to contribute to the process of coevolution, by inserting more psychology into rationality, and vice versa.”

“In a complex and uncertain world humans and animals make decisions under the constraints of limited knowledge, resources, and time. Yet models of rational decision making in economics, cognitive science, biology and other fields largely ignore these real constraints and instead assume agents with perfect information and unlimited time. About forty years ago Herbert Simon challenged this view with his notion of “bounded rationality”. Today, bounded rationality has become a fashionable term used for disparate views of reasoning.”
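One well-known member of the adaptive toolbox discussed in this literature is Gigerenzer’s “Take The Best” heuristic: when comparing two options, check cues in order of validity and decide on the first cue that discriminates, ignoring everything else. The sketch below is illustrative only; the cue names and values are invented, not drawn from the book.

```python
# "Take The Best": a fast-and-frugal heuristic that uses limited
# information, stopping at the first cue that discriminates.
# All cue data below are hypothetical.

def take_the_best(a, b, cues):
    """cues: list of (name, lookup) pairs ordered from most to least
    valid; lookup maps each option to 1 (cue present) or 0 (absent)."""
    for name, lookup in cues:
        if lookup[a] != lookup[b]:
            return a if lookup[a] > lookup[b] else b
    return None  # no cue discriminates: the decision maker must guess

# Toy task: which of two (hypothetical) cities is larger?
cues = [
    ("has_sports_team", {"CityA": 1, "CityB": 1}),  # ties: keep looking
    ("is_capital",      {"CityA": 0, "CityB": 1}),  # decides here
    ("on_river",        {"CityA": 1, "CityB": 0}),  # never examined
]
assert take_the_best("CityA", "CityB", cues) == "CityB"
```

Note that the third cue is never consulted: the heuristic trades exhaustive information use for speed, which is exactly the kind of real-world constraint the quoted passage describes.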

ABOUT THE AUTHORS:

Gerd Gigerenzer

Gigerenzer.gif

Gerd Gigerenzer is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences.

Reinhard Selten

selten.gif

Reinhard Selten received his PhD in mathematics at the University of Frankfurt am Main. Reinhard Selten is a Fellow of the Econometric Society, President of the European Economic Association, an Honorary Member of the American Economic Association, a Member of the Nordrhein-Westfälische Akademie der Wissenschaften, and a Foreign Honorary Member of the American Academy of Arts and Sciences. He is also a member of the Honora Patrona Komitato of the Universala Esperanto Asocio. His main areas of interest are Game Theory and its applications, as well as Experimental Economics and the Theory of Bounded Rationality. In 1994 he won the Nobel Memorial Prize in Economics, together with John C. Harsanyi and John F. Nash.