Monday, October 03, 2011

Envy Solves The Allais Paradox

Maurice Allais won the Nobel prize for stuff that is never really read anymore, but his curious Allais paradox has endured because it's both simple and baffling.

It arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:
Experiment 1:
  Gamble 1A: $1 million, 100% chance
  Gamble 1B: $1 million, 89% chance; nothing, 1% chance; $5 million, 10% chance

Experiment 2:
  Gamble 2A: nothing, 89% chance; $1 million, 11% chance
  Gamble 2B: nothing, 90% chance; $5 million, 10% chance
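
For reference, the expected values: 1A pays $1 million for sure; 1B pays 0.89 x $1M + 0.10 x $5M = $1.39 million; 2A pays 0.11 x $1M = $0.11 million; 2B pays 0.10 x $5M = $0.50 million. So the 'B' gamble has the higher expected value in both experiments.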

Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Personally, I would make those choices too, and haven't met anyone who wouldn't.

This is inconsistent with expected utility theory. According to expected utility theory, the person should choose either (1A and 2A) or (1B and 2B).
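
To see why, write out the expected utilities over the three relevant payoffs ($0, $1M, $5M). Preferring 1A to 1B means

U($1M) > 0.89*U($1M) + 0.01*U($0) + 0.10*U($5M)

i.e., 0.11*U($1M) > 0.01*U($0) + 0.10*U($5M). Add 0.89*U($0) to both sides and you get

0.11*U($1M) + 0.89*U($0) > 0.90*U($0) + 0.10*U($5M)

which says exactly that 2A is preferred to 2B. So an expected utility maximizer who picks 1A must pick 2A, whatever the shape of U.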

The problem comes from basic utility theory, because if you plug in a utility function like

U(W) = -exp(-aW) or U(W) = W^(1-a)/(1-a)

where W is your wealth and 'a' is your risk aversion coefficient, it never works; that is, neither function can generate the preference for 1A and 2B. The Wikipedia page on this outlines the simple proof, which is irrefutable. Something is wrong, and there have been several solutions, all ad hoc. For example, Kahneman and Tversky's prospect theory allows one to weight outcomes arbitrarily, and so can accommodate this, but the weightings that solve one paradox imply nonsensical outcomes elsewhere, such as simultaneously preferring gambling and insurance, which was the initial motivation for prospect theory (see here).
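
If you'd rather check this numerically than algebraically, here is a minimal sketch (Python; the $100k starting wealth matches the figure used below, and the risk-aversion grid is arbitrary):

```python
import numpy as np

# Gambles as (probability, payoff in $ millions) pairs.
gambles = {
    "1A": [(1.00, 1.0)],
    "1B": [(0.89, 1.0), (0.01, 0.0), (0.10, 5.0)],
    "2A": [(0.89, 0.0), (0.11, 1.0)],
    "2B": [(0.90, 0.0), (0.10, 5.0)],
}

W0 = 0.1  # starting wealth in $ millions ($100k)

def eu(gamble, u):
    """Expected utility of final wealth under utility function u."""
    return sum(p * u(W0 + x) for p, x in gamble)

for a in [0.5, 1.0, 3.0, 10.0, 30.0]:
    u_exp = lambda w: -np.exp(-a * w)  # exponential (CARA)
    u_crra = (lambda w: np.log(w)) if a == 1.0 else (lambda w: w**(1 - a) / (1 - a))  # CRRA
    for name, u in [("exp ", u_exp), ("crra", u_crra)]:
        pick1 = "1A" if eu(gambles["1A"], u) > eu(gambles["1B"], u) else "1B"
        pick2 = "2A" if eu(gambles["2A"], u) > eu(gambles["2B"], u) else "2B"
        print(f"a={a:5.1f} {name}: picks {pick1} and {pick2}")
# The letters always match -- (1B,2B) at low 'a', (1A,2A) at high 'a' --
# never the observed (1A,2B).
```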

Allais highlighted that the problem was probably the independence axiom of von Neumann-Morgenstern utility functions, which is basically the axiom implying we don't care about our peers, just our own wealth. For fun, I tried applying a relative utility function to the gambles in the Allais paradox (back to the envy meme). The key is that decision makers imagine the gamble in a world where their neighbor is presented with the same option, so you have to imagine the neighbor having been given this absurdly generous gamble as well, and contemplate the relative squalor or richness in the various states. Basically, you apply U(x/y), where y is the wealth of your neighbor (who is offered the same gamble), as opposed to U(x), where your utility is independent of your neighbor.

If you assume you and your neighbor each start with $100k in wealth, then at a sufficient level of risk aversion (>3.02) with exponential utility, your preferences are for 1A and 2B! This holds whether you average utility over the states or take a 'maximin' approach that maximizes the minimum utility among all the states. At really low risk aversion you prefer the higher expected value choice 'B' in both gambles, and with sufficiently large risk aversion you prefer choice 'A' in both gambles.
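
I haven't reproduced the spreadsheet here, so below is a minimal sketch under one concrete set of assumptions that the spreadsheet may well make differently: you evaluate each gamble against a neighbor who took the other gamble, the two draws are independent, and relative utility is U(x/y) = -exp(-a*x/y). Under that coupling the maximin rule picks 1A and 2B at any risk aversion, and expected relative utility flips to 1A and 2B above some cutoff in 'a' (a different cutoff than the 3.02 above, since it depends on exactly how the two outcomes are coupled):

```python
import numpy as np
from itertools import product

# Gambles as (probability, payoff in $ millions) pairs.
g = {
    "1A": [(1.00, 1.0)],
    "1B": [(0.89, 1.0), (0.01, 0.0), (0.10, 5.0)],
    "2A": [(0.89, 0.0), (0.11, 1.0)],
    "2B": [(0.90, 0.0), (0.10, 5.0)],
}

W0 = 0.1  # both you and your neighbor start with $100k ($ millions)

def rel_eu(mine, theirs, a):
    """Expected relative utility U(x/y) = -exp(-a*x/y), with my gamble
    and my neighbor's gamble drawn independently."""
    return sum(p * q * -np.exp(-a * (W0 + x) / (W0 + y))
               for (p, x), (q, y) in product(mine, theirs))

def rel_maximin(mine, theirs):
    """Worst-case relative position x/y over all joint states."""
    return min((W0 + x) / (W0 + y)
               for (_, x), (_, y) in product(mine, theirs))

for a in [1, 5, 20, 100]:
    pick1 = "1A" if rel_eu(g["1A"], g["1B"], a) > rel_eu(g["1B"], g["1A"], a) else "1B"
    pick2 = "2A" if rel_eu(g["2A"], g["2B"], a) > rel_eu(g["2B"], g["2A"], a) else "2B"
    print(f"a={a:3d}: expected relative utility picks {pick1} and {pick2}")

print("maximin picks",
      "1A" if rel_maximin(g["1A"], g["1B"]) > rel_maximin(g["1B"], g["1A"]) else "1B",
      "and",
      "2A" if rel_maximin(g["2A"], g["2B"]) > rel_maximin(g["2B"], g["2A"]) else "2B")
```

The point isn't the particular cutoff; it's that once the neighbor enters the utility function, the (1A, 2B) pattern becomes attainable at all.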

It's not really important what the precise numbers are; what matters is that this paradox has a solution within relative utility that is not as ad hoc as prospect theory. Exponential utility is very common because expected utility for normally distributed wealth then has a closed-form solution, which is convenient and an innocuous simplification for exposition in many applications. However, it implies 'constant absolute risk aversion', where everyone treats $1 of wealth variance the same; that is undesirable, because then rich and poor would allocate the same dollar amount to risky assets, which we don't find realistic. Thus most researchers prefer 'constant relative risk aversion' utility functions like W^(1-a)/(1-a), and there the trick does not work--same paradox as before.
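
For reference, the closed form alluded to above: if wealth W is normally distributed with mean m and variance s^2, then E[-exp(-aW)] = -exp(-a*m + a^2*s^2/2), so ranking gambles by expected exponential utility is the same as ranking them by m - (a/2)*s^2. That mean-variance equivalence is why the exponential form is so convenient.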

So, it's not as if relative utility neatly provides an insanely robust solution to a prominent decision-theoretic paradox, but it does provide a reasonable one. The exponential utility function is pretty common, and I suspect this result holds across many other specifications. Further, it suggests Maurice Allais's initial intuition was correct: the wrong assumption was that utility is independent of what others have.

You can download my Excel sheet with the example worked out here.

12 comments:

B. A. said...

to succeed in a given environment, one needs many things. let's call them A, B, C, D… I think it's fair to assume that more of A can partially compensate for a lack of B, C etc, and the relationship probably has a lot of negative convexity. like having a bit more IQ or money can help compensate for the lack of other desirable traits and push one up in the pecking order, but probably at a decreasing rate. a bit more of what you really lack, if it were obtainable, would be a much more efficient way to get an improvement (I remember this footballer who paid a fortune some 20 years back to get a hair implant)
therefore utility for A is a function of how much A you have, but also how much B, C, D etc. trying to find a "true" utility function for each of A, B and C can be pretty tough.
I think it is fair to assume that our psychological adaptations are smarter than our theories, and somehow account for the above. say I feel "above par" on A, B and C and below par on D. it is easier to be generous on "low delta" A, B or C due to little perceived loss of utility, or even in the hope of some subtle kind of trade-off for something more useful, like the goodwill of others. envy is maybe something that starts kicking in when incurring a loss on things we perceive as "high delta". if that loss is serious enough, envy pushes us to do something about it even at a high risk to ourselves, which I guess makes sense.

John Hall said...

This is even less of a paradox than the Ellsberg paradox.

My answer would be similar to yours, but I wouldn't really need to invoke envy.

If I have preferences on different parts of the distribution, for instance on the upside and downside conditional Value at Risk, then that will impact how the decisions are made. So let's say my preference is mu+lambda_d*CVaR_d+lambda_u*CVaR_u. At the 1% level, CVaR_d is 1mn for 1A and 0mn elsewhere, and CVaR_u is 1mn for the As and 5mn for the Bs. To prefer 1A to 1B, lambda_d>0.39+4*lambda_u. To prefer 2B to 2A, lambda_u>-0.39/4. In other words, you will almost always pick 2B at the 1% level (you have to hate extreme positives, which is sort of rare). And there are many cases where people will prefer 1A to 1B (also, this result holds for CVaR_u up to 10%, but is less true as CVaR_d goes beyond 1%).
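
(A quick check of those thresholds, using the functional V = mu + lambda_d*CVaR_d + lambda_u*CVaR_u and the tail values from the comment above; the particular lambda point is just an illustration:)

```python
# Payoff stats in $ millions: mean, 1% lower-tail CVaR, 1% upper-tail CVaR.
mu     = {"1A": 1.00, "1B": 1.39, "2A": 0.11, "2B": 0.50}
cvar_d = {"1A": 1.0,  "1B": 0.0,  "2A": 0.0,  "2B": 0.0}
cvar_u = {"1A": 1.0,  "1B": 5.0,  "2A": 1.0,  "2B": 5.0}

def V(g, ld, lu):
    """The commenter's preference functional."""
    return mu[g] + ld * cvar_d[g] + lu * cvar_u[g]

# Prefer 1A to 1B:  1 + ld + lu > 1.39 + 5*lu  =>  ld > 0.39 + 4*lu
# Prefer 2B to 2A:  0.50 + 5*lu > 0.11 + lu    =>  lu > -0.39/4
ld, lu = 0.5, 0.0  # one point satisfying both inequalities
assert V("1A", ld, lu) > V("1B", ld, lu)
assert V("2B", ld, lu) > V("2A", ld, lu)
```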

zby said...

I think it goes something like this: in the first case I can get $1 million for sure, or gamble to get $5 million - here I would prefer to be sure. In the second case I have to gamble anyway, and for a small decrease in probability I can have a chance at getting 5 times more - the decrease seems like something I can ignore because it is so small, so why not try for the 5-times score? I think this is a pretty common way of thinking. The algorithm might seem non-optimal in terms of any utility function if we assume the probabilities are exactly what we are told - but that is a very artificial situation, so our minds are not necessarily evolved for it (just as they are not evolved to make rational choices about consuming certain foods).

Eric Falkenstein said...

zby: I agree with your intuition, but if you take the probabilities as accurate, this doesn't work. That is, you are basically stating that nothing can have a probability less than 1 or 2%, because 'we know' such a number means 20% in practice...that runs into problems.

JohnH: I think you'd have to generalize that, and ask if it makes sense when applied to other issues. I doubt it would. As per the Ellsberg thing, you have to think about compound probabilities as being identical to simple probabilities; the sampling N times is just confusing you.

McMath said...

You would seriously pick 1A over 1B? What planet are you on?

Mercury said...

Both Experiments 1 and 2 are basically personality tests. Choosing "B" in either instance probably won't change the outcome 99 times out of 100, and experience from one's own life likely demonstrates that a great many (perhaps a huge majority) of otherwise very rational people would go for "B".

Look at it this way - You're at a party and there are a couple of plain but pretty looking girls that are signaling "sure thing" when the subject of going back to your place comes up. Then there is Gisele Bundchen's twin sister standing in the corner who (this is an academic thought experiment!) is also signaling "sure thing"....minus a small risk that something might be getting lost in translation.

Where are you going to place your marker?

Anonymous said...

I would choose 1A.
One way to look at this is to imagine 100 equally likely outcomes - in 89 of them, the result is the same either way: $1MM. So focus only on the other 11 outcomes: the choice is now a 100% chance of $1MM in A, versus a 9.1% chance of zero and a 90.9% chance of $5MM in B. The probability-weighted value of that is $4.54MM - almost 5x. I would take my chances. Now if the number were $100MM or $500MM, that would be different - if I lost, it would result in a suicide.

Mr. Nosuch said...

I think the reason people make these choices isn't envy, but stupidity. Humans, without a great deal of training, don't estimate probabilities correctly at all. It's a cognitive illusion.

Hell, trained scientists screw up probability too:

http://www.badscience.net/2011/10/what-if-academics-were-as-dumb-as-quacks-with-statistics/

Rajat said...

Eric, I am fully on board with your views on risk and return in investment, but I think zby is on to something and I don't follow your response.

I don't need to rely on envy to generate preferences for 1A and 2B. When I think of what $1 million would do to my life, I think of paying all my debts and putting something towards retirement. It wouldn't mean I would never have to work again, but it would certainly eliminate any financial headaches I might have. So, in experiment 1, taking even a 1% chance of (effectively) losing $1 million gives me pause. (BTW, I might take gamble 1B as I am reasonably financially comfortable and don't mind working, but I can see why most people would choose 1A.)

On the other hand, with experiment 2, the probability of me getting anything at all is relatively low, so I don't regard myself as 'having' $1 million to 'lose'. In that case, for a small drop in probability, I have a small but significant chance of getting 5 times more, which would really change my life. So, like most people, I would take it.

I think the way I have put it is how most people think, and invoking this thought experiment of "what would my neighbour do" seems a bit contrived. Tell me if this scenario (experiment 3) helps clarify the issue: what if gamble 3A is a 99% chance of $1 million and a 1% chance of zero, and gamble 3B is a 98% chance of $5 million and a 2% chance of zero? I reckon most people would take 3B, because once you accept any loss of certainty over that $1 million, more 'normal' attitudes toward risk and return take over - until you get to much smaller probabilities of winning, when speculative urges take hold.

Eric Falkenstein said...

I agree there are intuitive reconciliations, but when you mathematize them they fall flat. It's interesting that a relative status function works.

Tom Cooper said...

Eric wrote "I would make those choices too, and haven't met anyone who wouldn't. "

Yes you have!

Blixa said...

I was going to suggest that the wealth preference may be logarithmic, that is, say, 10 million has twice as much utility as 1 million, 100 million three times, etc., rather than linear.

The interesting thing then, and I only realised it when trying to calculate the expected values, is that LOG(0) = -oo, that is the "nothing" case has infinitely negative utility (compatible with common sense: something is usually much better than nothing).

The standard utility function then works: 1B has infinitely more risk than 1A, hence everybody picking the latter. 2A and 2B both have infinitely negative expected utilities, but you can still rank them in that 2A < 2B, hence everybody picking the least infinitely negative utility, 2B. That holds for all risk aversion coefficients.