From: susupply@aol.com (SUSUPPLY)
Newsgroups: sci.econ
Subject: Beware of geeks bearing gifts, says Landsberg
Date: 12 Feb 2001 19:02:39 GMT
Organization: AOL http://www.aol.com
Message-ID: <20010212140239.02919.00000522@ng-fx1.aol.com>

http://www.reason.com/0103/fe.sl.stuffing.html

<< Enter Vernon Smith. Forty years ago, as a young assistant professor of economics at Purdue University, Smith championed the then-unheard-of notion that economists who theorize about human behavior could learn something useful by actually observing some human behavior, preferably in controlled experimental settings. Today, Smith presides over the renowned Economic Science Laboratory at the University of Arizona, where he and his colleagues recently set out to study altruism -- and ended up discovering something dark and disturbing about human nature instead.

<< Here's one of Smith's experiments: Two total strangers are placed in separate rooms. They never meet, they never learn each other's names, and they come and go by separate entrances. One of them is selected randomly to receive 10 one-dollar bills and an envelope. He can put any number of bills in the envelope and send it by messenger to the other subject. Then everyone takes his money and goes home.

<< Simple economics predicts that no money ever goes in the envelope. And that prediction is borne out about two-thirds of the time. The remainder of the time, the prediction is still not far off. When there's anything in the envelope, it's most often a single dollar bill.

[snip]

Now we come to the dark and unsettling part. James Cox, one of Smith's colleagues at the University of Arizona, has been running a variant of this experiment where subjects know that everything they put in the envelope will get tripled by the experimenter before it's sent to the other room. If they give up a dollar, the other guy gets three. If they give up 10, he gets 30.
In the Cox experiment, even with elaborate anonymity procedures, subjects gave up a lot more money. In fact, virtually all of the subjects put at least a dollar in the envelope, and instead of $1.08, the average envelope contained $3.63 (so the other guy got $10.89 on average). In other words, subjects give more generously when they can get a bigger bang for their buck.

You might think that's a pretty heartening result. It looks a lot like altruism -- a willingness to make sacrifices as long as the gain to others substantially exceeds the loss to oneself. It's as if the subjects were saying, "I'm willing to give away money as long as I can make the world a richer place in the process."

But that's not what they're saying, and this isn't altruism. Altruism means personally paying for the privilege of enriching a total stranger. That's not what these people are doing at all. Instead, they're paying for the privilege of taking money away from one total stranger -- namely the taxpayer who's funding the experiment (through the University of Arizona and the National Science Foundation) -- and giving it to another total stranger who happens to be in the next room. There's no sense in which that makes the world a richer place. And the subjects do all this without knowing anything at all about either stranger or having any reason to believe that one is more deserving than the other.

In the words of University of Rochester economist Mark Bils, "That's a pretty ugly instinct. It scares me to think I'm living in the same world with these people." It's not like they're taking from the rich to give to the poor; they're just randomly taking from some people so they can give to others. It's hard to imagine their motive, unless they just plain enjoy the capricious exercise of power, bestowing good fortune on some and bad fortune on others without any need for a rhyme or reason. In a world where people get a kick out of being arbitrary, no property right is ever safe.
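The arithmetic of the tripled envelope can be sketched in a few lines (a minimal illustration: the $10 endowment and 3x multiplier come from the article, while the function name and structure are my own):

```python
def cox_payoffs(sent, endowment=10.0, multiplier=3):
    """Payoffs in the tripled-envelope variant of the dictator game.

    The sender keeps whatever is not sent; the experimenter triples
    what goes into the envelope before it reaches the receiver.
    """
    assert 0 <= sent <= endowment
    return endowment - sent, multiplier * sent

# The article's averages: a $3.63 envelope means $10.89 delivered.
kept, received = cox_payoffs(3.63)
print(round(kept, 2), round(received, 2))
```

Note that every dollar "given" costs the sender one dollar while the extra two come from the experimenter's budget -- which is exactly the taxpayer objection the article goes on to make.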
Taken at face value, the Cox experiments suggest that the reason we have a redistributive tax system is not because people want to help the poor or the unfortunate or the incapacitated; it's because people enjoy moving other people's money around just to make mischief.

Now you might say: "Wait a minute. These are good people. They are trying to do the world some good. They're just not conscious of the fact that the money they transfer has to come from somewhere." As Bils points out, that's even scarier. These subjects are mostly university students, and they don't realize that when you give away money, it has to come from somewhere? And we allow these people to vote? >>

COMMENTS:

On Mon, 12 Feb 2001 19:35:20 GMT, Ron Hardin wrote:
>>Why do you believe there's anybody in the next room in these
>>experiments?

Brian Paul wrote:
>
>For that matter, why believe that there is a next room?
>

Hi,

A very interesting experiment. And an even more interesting interpretation of the result.

The people putting money into the envelope must have THOUGHT that there was a next room with someone in it who would get the money, or else why would they put any money in the envelope?

My guess: they never considered that the money in the experiment actually came FROM anyplace or anyone. It was "just there" to be divided. Perhaps another example of the lack of understanding of economics in US education? To be added to the 3-part section "Economic and Civic Education in the US" at:

http://www.geocities.com/capitolhill/4834/edu.htm

On the other hand, maybe the subjects were smarter than we think. Maybe they decided that they (and the subject in the other room) could put that money to better use than those conducting the experiment. Or than the University of Arizona. So that the world would be better off by transferring as much money as possible to those who would make better use of it. And maybe they were right?
,,,,,,,
_______________ooo___(_O O_)___ooo_______________
(_) jim blair (jeblair@facstaff.wisc.edu)
Madison Wisconsin USA.
This message was brought to you using biodegradable binary bits, and 100% recycled bandwidth.
For a good time call: http://www.geocities.com/capitolhill/4834

And PS: From the standpoint of those conducting the experiment, it would make more sense (and cents) if there was no "other room" with another subject in it. There need not be one as long as the subject given the money and the envelope THINKS there is. Their response would be the same either way. But the researcher would need only half the number of subjects, would be putting out less money for the experiment, and would be better able to maintain the desired anonymity.

ANOTHER ECON/PSY GAME:

From a post by William F. Hummel:

Subject: Money and Irrationality
Date: Mon, 28 Jan 2002 19:42:03 GMT
From: William F Hummel
Organization: Road Runner
Newsgroups: sci.econ

>As economic actors, people are as likely to be governed by their
>emotions as by reason, by prejudices as by careful cost-benefit
>analysis. Their rationality is bounded by limits on their time,
>intelligence and the information at their disposal.

The January 2002 issue of Scientific American has an article titled "The Economics of Fair Play" by Karl Sigmund, describing what they call "the Ultimatum game."

Imagine that you are asked to make one YES or NO decision. You are offered X dollars. X may be one or 10 or 100 or whatever. Your only decision is to take the offer (YES) or to reject it (NO). If you decide YES, you get the money. If you say NO, you don't. Period. If people are motivated by self-interest, the reply will be YES, no matter the size of X.

Now modify the game, but in a way that does not change it for you. Now if you say YES, you get the X dollars, but someone else in the study whom you don't know (and whose identity you will never be told) will get Y dollars. Y may be greater or less than X.
If you say NO, then neither of you gets anything. Period.

Will people say YES to $10 if they know that the other person will get $20? Or $50? Or will they say NO to the $10 to stop that other guy from getting more than they get?

The Ultimatum Game is a variation on the above. That "other guy" is given W dollars and told to offer you X (whatever he decides). If you say YES, you get X and he keeps W-X. But if you say NO, you both get nothing. It is a one-time offer, no discussion: just your YES or NO.

Question: when this Ultimatum Game is played thousands of times, does the decision maker say YES to any offer, as would be the case if personal gain were the only motive? The surprise (to me) answer is that most people will say NO to an X that is less than 40% of W. They would pass up the chance to gain X if it means the other guy gets "too much".

I see several ways to interpret this result. First, that economic gain is not the only motive most people have. If it were, they would never say NO. The NOs can be seen in different ways. Maybe people want to punish anyone who is trying to give them less than they think they deserve? Maybe they don't want someone else to get more than they get?

I remember the joke about the difference between the American farmer and the Eastern European (Russian or Polish?) farmer. Farmer #1 has poor crops and sick cows while farmer #2 has good crops and healthy cows. A genie appears to farmer #1 and says he will grant one wish. In America, farmer #1 wishes for things to be as good for him as they are for farmer #2. But in Eastern Europe, farmer #1 wishes that things for farmer #2 would be as bad as they are for him.

But a more favorable explanation is given by evolutionary biologists. Why did humanity evolve to reject gains? There must be some advantage to the species in rejecting personal gain to punish those who would make a division of goods that others judge to be unfair.

So does this experiment show that most people are illogical?
Or that short-term personal gain does not have as much long-term survival value as a community-wide sense of fairness?

REPLY:

>Jim Blair wrote:
>>....
>>
>> So does this experiment show that most people are illogical?
>>
>> Or that short term personal gain does not have as much long term
>> survival value as a community wide sense of fairness?

Mike Wooding wrote:
>
> Thanks for the fascinating post.

Hi,

:-)

>....The question that
> arises in my mind when you begin speculating about
> "evolutionary" gains (or losses?) is that it must
> not be a one-time choice, but rather a game that's
> played frequently and ...

Yes, of course. If the situation were repeated, the 2 players would soon reach an understanding of how to maximize the gain of both (probably a 50-50 split). Given that it is a one-time-only event for each person, YES is the only rational reply, no matter the split.

One conclusion the author reached is that no matter that the players were TOLD this was a one-time-only event for them, most people did not really incorporate that fact into their response.

>....Does the nature of the
> game change upon repetition?

Yes.

>....If so, then perhaps
> the responses (even in the one-time version) are
> learned? E.g. When I was a child I could get my
> way by refusing to co-operate and being a brat?
>
>--
> Mike.Wooding@NexWatch.cOm (Mike Wooding)

Even if you learn something from a "one time only" event, you will never have the opportunity to apply what you learned :-(

,,,,,,,
_______________ooo___(_O O_)___ooo_______________
(_) jim blair (jeblair@facstaff.wisc.edu)
Madison Wisconsin USA.
This message was brought to you using biodegradable binary bits, and 100% recycled bandwidth.
For a good time call: http://www.geocities.com/capitolhill/4834

AND this from: http://www.ags.uci.edu/~jalex/egt.html

The ultimatum game

The ultimatum game is another two-player noncooperative game where two players attempt to divide a good, again, say a cake, between them.
However, we assume that one player (the proposer) has sole possession of the cake and offers a certain amount of the cake to the second player (the receiver), keeping the rest for himself. The second player has only two choices: take the offer or leave it. If player two takes the offer, each player receives the amount of cake due. If player two chooses to leave it, each player receives nothing.

Compared to the Nash bargaining game, the ultimatum game has a significantly larger strategy space. Each strategy has two components, prescribing what demand the player will make as a proposer and what demands the player will accept as a receiver. If the cake divides into N pieces and we forbid purely altruistic behavior (demanding nothing) and completely greedy behavior (demanding everything), the game has 2^(N-1)*(N-1) possible strategies. Most treatments of the ultimatum game consider only a small subset of the possible strategies.

According to von Neumann-Morgenstern game theory, if the good can divide into infinitely many pieces, an infinite number of Nash equilibria exist. When talking about the ultimatum game, though, it proves fruitful to use another solution concept, that of subgame perfection. We say an equilibrium is subgame perfect if the strategies present in that equilibrium are also in equilibrium when restricted to any subgame.

Consider a population of players who all make fair offers (half of the cake) and only accept fair offers, a strategy typically called "Fairman." Although this strategy is a Nash equilibrium (no player can do better by changing her strategy), it is not subgame perfect: in a mixed population containing players of all strategies, Fairman does not do as well as the strategy which makes a fair offer but accepts any offer. Consequently, if one thinks a credible equilibrium of a game must be subgame perfect, the number of credible equilibria shrinks.
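The strategy count quoted above can be checked by brute-force enumeration (a sketch under the quoted assumptions -- demands of 0 and N forbidden; the function name is my own):

```python
from itertools import combinations

def ultimatum_strategies(n):
    """All (proposer demand, acceptance set) pairs for an n-piece cake,
    forbidding demands of 0 (pure altruism) and n (pure greed).

    A strategy says how much to demand as proposer, and which proposer
    demands to accept when playing as receiver.
    """
    demands = list(range(1, n))                          # n-1 legal demands
    acceptance_sets = [frozenset(s)
                       for k in range(len(demands) + 1)
                       for s in combinations(demands, k)]  # 2^(n-1) subsets
    return [(d, acc) for d in demands for acc in acceptance_sets]

# Matches the text's count of 2^(N-1) * (N-1) possible strategies.
for n in range(2, 8):
    assert len(ultimatum_strategies(n)) == 2**(n - 1) * (n - 1)
```

Even for a modest N the space grows fast -- for N = 7 there are already 384 strategies -- which is why most treatments restrict attention to a small subset.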
If players act to maximize expected utility, then proposers should demand the entire cake minus epsilon (if the cake is infinitely divisible) or N-1 pieces (if the cake has N pieces). Receivers, on the other hand, should accept any nonzero offer.
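For the integer-pieces case, that subgame-perfect outcome falls out of a short backward induction (a minimal sketch; the function names are my own):

```python
def spe_demand(n):
    """Subgame-perfect proposer demand for a cake of n pieces.

    Backward induction: rejecting yields the receiver zero, so a
    payoff-maximizing receiver accepts any offer of at least one piece.
    Knowing this, the proposer demands the most the receiver tolerates.
    """
    def receiver_accepts(offer):
        return offer > 0          # anything beats nothing

    return max(d for d in range(1, n) if receiver_accepts(n - d))

# As the text says: demand N-1 pieces, leaving the receiver a single piece.
print(spe_demand(10))   # prints 9
```

The experimental results above are striking precisely because real responders do not behave like this receiver: they routinely reject the one-piece offer that this calculation predicts they should accept.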