Comments on Falkenblog: Envy Solves The Allais Paradox

Blixa (2011-10-07):
I was going to suggest that the wealth preference may be logarithmic rather than linear: say, $10 million has twice as much utility as $1 million, $100 million three times as much, and so on.

The interesting thing, which I only realised when trying to calculate the expected values, is that log(0) = -∞; that is, the "nothing" case has infinitely negative utility (compatible with common sense: something is usually much better than nothing).

The standard utility function then works: 1B carries infinitely more risk than 1A, hence everybody picks 1A. 2A and 2B both have infinitely negative expected utilities, but you can still rank them, with 2A < 2B, hence everybody picks the less negative option, 2B. That holds for all risk-aversion coefficients.

Tom Cooper (2011-10-06):
Eric wrote, "I would make those choices too, and haven't met anyone who wouldn't."

Yes you have!

Eric Falkenstein (2011-10-05):
I agree there are intuitive reconciliations, but when you mathematize them they fall flat.
It's interesting that a relative status function works.

Rajat (2011-10-05):
Eric, I am fully on board with your views on risk and return in investment, but I think zby is on to something, and I don't follow your response.

I don't need to rely on envy to generate preferences for 1A and 2B. When I think of what $1 million would do to my life, I think of paying off all my debts and putting something towards retirement. It wouldn't mean I would never have to work again, but it would certainly eliminate any financial headaches I might have. So, in experiment 1, taking even a 1% chance of (effectively) losing $1 million gives me pause. (By the way, I might take gamble 1B, as I am reasonably financially comfortable and don't mind working, but I can see why most people would choose 1A.)

On the other hand, in experiment 2 the probability of my getting anything at all is relatively low, so I don't regard myself as 'having' $1 million to 'lose'. In that case, for a small drop in probability, I have a small but significant chance of getting five times more, which would really change my life. So, like most people, I would take it.

I think the way I have put it is how most people think, and invoking the thought experiment of "what would my neighbour do" is a bit contrived. Tell me if this scenario (experiment 3) helps clarify the issue: what if gamble 3A is a 99% chance of $1 million and a 1% chance of zero, while gamble 3B is a 98% chance of $5 million and a 2% chance of zero?
I reckon most people would take 3B, because once you accept a loss of certainty over that $1 million, more 'normal' attitudes to risk and return can operate, until you get to much smaller probabilities of winning, when speculative urges take hold.

Mr. Nosuch (2011-10-04):
I think the reason people make these choices isn't envy but stupidity. Humans, without a great deal of training, don't estimate probabilities correctly at all. It's a cognitive illusion.

Hell, trained scientists screw up probability too:

http://www.badscience.net/2011/10/what-if-academics-were-as-dumb-as-quacks-with-statistics/

Anonymous (2011-10-04):
I would choose 1A.
One way to look at this is to assume 100 independent outcomes: in 89 of them, the result is likely to be the same, $1 million, under either gamble. So focus only on the other 11 outcomes. The choice is now a 100% chance of $1 million in A, versus a 9.1% chance of zero and a 90.9% chance of $5 million in B. The probability-weighted value of the latter is $4.54 million, almost 5x, so I would take my chances. Now if the number were $100 million or $500 million, that would be different: a loss would mean suicide.

Mercury (2011-10-04):
Both experiments 1 and 2 are basically personality tests. Choosing "B" in either instance probably won't change the outcome 99 times out of 100, and experience from one's own life likely demonstrates that a great many (perhaps a huge majority) of otherwise very rational people would go for "B".

Look at it this way: you're at a party, and a couple of plain but pretty girls are signaling "sure thing" when the subject of going back to your place comes up. Then there is Gisele Bundchen's twin sister standing in the corner, who (this is an academic thought experiment!) is also signaling "sure thing"... minus a small risk that something is getting lost in translation.

Where are you going to place your marker?

McMath (2011-10-04):
You would seriously pick 1A over 1B?
What planet are you on?

Eric Falkenstein (2011-10-04):
zby: I agree with your intuition, but if you take the probabilities as accurate, this doesn't work. Basically, you are stating that nothing can have a probability of less than 1 or 2%, because 'we know' this means 20% in practice... that runs into problems.

JohnH: I think you'd have to generalize that, and ask whether it makes sense when applied to other issues. I doubt it would. As with the Ellsberg thing, you have to think of compound probabilities as identical to simple probabilities; the sampling N times is just confusing you.

zby (2011-10-04):
I think it goes something like this: in the first case I can get $1 million for sure, or gamble to get $5 million, and I would prefer to be sure. In the second case I have to gamble anyway, and for a small decrease in probability I get a chance at five times more; the decrease seems like something I can ignore because it is so small, so why not try for that 5x score? I think this is a pretty common way of thinking.
This algorithm might seem non-optimal in terms of any utility function if we assume the probabilities are exactly what we are told, but that is a very artificial situation, and our minds are not necessarily evolved for it (just as they are not evolved to make rational choices about the consumption of certain foods).

John Hall (2011-10-03):
This is even less of a paradox than the Ellsberg paradox.

My answer would be similar to yours, but I wouldn't really need to invoke envy.

If I have preferences over different parts of the distribution, for instance over the upside and downside conditional Value at Risk, then that will shape how the decisions are made. So let's say my preference is mu + lambda_d*CVaR_d + lambda_u*CVaR_u. At the 1% level, CVaR_d is 1mn for 1A and 0mn elsewhere, and CVaR_u is 1mn for the As and 5mn for the Bs. To prefer 1A to 1B requires lambda_d > 0.44 + 4*lambda_u; to prefer 2B to 2A requires lambda_u > -0.39/4. In other words, you will almost always pick 2B at the 1% level (you would have to hate extreme positives, which is rare), and there are many cases where people will prefer 1A to 1B. (This result also holds for CVaR_u up to 10%, but is less true for CVaR_d beyond 1%.)

B. A. (2011-10-03):
To succeed in a given environment, one needs many things; let's call them A, B, C, D… I think it's fair to assume that more of A can partially compensate for a lack of B, C, etc., and the relationship probably has a lot of negative convexity.
Having a bit more IQ or money, for example, can help compensate for a lack of other desirable traits and push one up the pecking order, but probably at a decreasing rate. A bit more of what you really lack, if it were obtainable, would be a much more efficient way to get an improvement (I remember a footballer who, some 20 years back, paid a fortune for a hair implant).

Therefore the utility of A is a function of how much A you have, but also of how much B, C, D, etc. you have. Trying to find a "true" utility function for each of A, B, and C can be pretty tough.

I think it is fair to assume that our psychological adaptations are smarter than our theories, and somehow account for the above. Say I feel "above par" on A, B, and C and below par on D. It is easier to be generous with "low-delta" A, B, or C, given the little perceived loss of utility, or even in the hope of some subtle trade-off for something more useful, like the goodwill of others. Envy is maybe something that starts kicking in when we incur a loss on things we perceive as "high delta"; if that loss is serious enough, envy pushes us to do something about it, even at high risk to ourselves, which I guess makes sense.
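Blixa's log-utility argument at the top of the thread is easy to check numerically. A minimal sketch, assuming the standard Allais payoffs the commenters are using ($1 million versus $5 million, with the 89/10/1 and 11/89, 10/90 splits):

```python
import math

# The standard Allais gambles (payout in dollars -> probability).
# These exact splits are an assumption, but they match the 89/10/1
# decomposition and 1%-chance-of-zero figures quoted in the thread.
gambles = {
    "1A": {1_000_000: 1.00},
    "1B": {1_000_000: 0.89, 5_000_000: 0.10, 0: 0.01},
    "2A": {1_000_000: 0.11, 0: 0.89},
    "2B": {5_000_000: 0.10, 0: 0.90},
}

def expected_value(g):
    """Plain probability-weighted payout."""
    return sum(x * p for x, p in g.items())

def expected_log_utility(g):
    """Blixa's rule: u(x) = log(x), with log(0) treated as -infinity."""
    return sum(p * (math.log(x) if x > 0 else -math.inf)
               for x, p in g.items())

for name, g in gambles.items():
    print(f"{name}: EV = {expected_value(g):>12,.0f}  "
          f"E[log u] = {expected_log_utility(g):.3f}")
```

This confirms that 1B beats 1A on raw expected value ($1.39M vs. $1M) yet loses under the log rule, since its 1% chance of zero drags E[log u] to negative infinity. Note, though, that 2A and 2B both come out at negative infinity, so Blixa's ranking of 2B over 2A needs an extra tie-break, such as comparing utilities conditional on a nonzero payout (log 5,000,000 > log 1,000,000).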