Wednesday, April 23, 2008

Existence Theorems

Many arguments are predicated on existence proofs. That is, someone proves that a solution exists, and then assumes the possible is probable, or at least that it counters the alternative. But I really can't think of anything interesting where only one side has possible results; rather, the debates are about probabilities: what's the best way to manage the Fed, tax rates, welfare, etc. It's not whether something will or won't happen, but how much, or to what degree. You can prove that under certain conditions free trade makes a country worse off (infant industries), or that higher prices raise demand (Giffen goods), or that saving is bad for an economy (the fallacy of composition). But these are all generally, empirically, untrue. Possible, however. Intelligence tells us what is logically implied given assumptions (eg, mathematics), but probabilities are generally an empirical matter, often deriving from meta-knowledge, ie, common sense. Thus, intelligence is only weakly correlated with common sense, because logic can't prove priorities, which are based on probabilities.

Take, for instance, the famous proof that if the market proxy is inefficient, with only very small deviations between the measured market portfolio (ie, the proxy, like the S&P500) and the true market portfolio, you can get a zero relationship between beta and cross-sectional returns. After Fama and French (1992) put a fork in the CAPM, Roll and Ross (1994) and Kandel and Stambaugh (1996) resurrected the Roll critique, and tried to address to what degree an inefficient proxy can generate a zero beta-return correlation (now acknowledged as fact). That is, is it possible that returns are uncorrelated with betas measured against the S&P500 or whatever index is being used, even though they line up perfectly with betas against the 'true market' index? In Roll and Ross's words, if you mismeasure the expected return of the market proxy by only 0.22% (easily 10% of one standard deviation away), it could imply a measured zero correlation between beta and returns.

Sounds impressive: proof that no evidence against the CAPM is meaningful, because it could all be due to an unavoidable deviation of the proxy from the true market portfolio. But there are several problems with these results. First, Stambaugh (1982) himself documented that inferences are not sensitive to the error in the proxy when it is viewed as a measure of the market portfolio, and thus, while a theoretical possibility, this is not an empirical problem. Shanken (1987) found that as long as the correlation between the proxy and the true market was above 70%, rejecting the CAPM using the measured market portfolio would also imply rejecting it for the true market portfolio. So if you think the market index, any one of them, is highly correlated with the 'true market', the CAPM is testable as a practical matter.

Secondly, to generate such a result with an 'almost' efficient index proxy, one needs many negative correlations among assets, and lots of assets with 100 times the volatility of other assets. In other words, a very efficient, but not perfectly efficient, market proxy can make beta meaningless, but only in a fairy-tale world where many stocks have 100 times the volatility of other stocks, and correlations between assets are frequently negative. In reality, annualized stock volatilities range from about 10% to 300%, most between 15% and 70%. Betas, prospectively, are never negative, and correlations between equities are never negative unless the equity is a closed-end fund that employs a short strategy.

Thus, it is possible that the absence of a correlation between beta and returns is due to an inefficient market proxy. But it is extremely unlikely, and given realistic restrictions on the correlations and relative volatilities of assets, so improbable as to be irrelevant.
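To see why, here is a minimal Monte Carlo sketch (my own illustration, not Roll and Ross's construction), with made-up but realistic parameters: all betas positive, idiosyncratic volatilities between 15% and 70%, and a proxy that mismeasures the true market. The beta-return relation survives the proxy error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_months = 500, 1200

true_betas = rng.uniform(0.5, 1.5, n_assets)               # all positive
idio_vol = rng.uniform(0.15, 0.70, n_assets) / 12 ** 0.5   # monthly idiosyncratic vol

mkt = rng.normal(0.005, 0.045, n_months)                   # true market factor
rets = np.outer(mkt, true_betas) + rng.normal(0.0, idio_vol, (n_months, n_assets))

proxy = mkt + rng.normal(0.0, 0.005, n_months)             # slightly 'wrong' index

# Betas measured against the imperfect proxy
beta_hat = ((rets - rets.mean(0)) * (proxy - proxy.mean())[:, None]).mean(0) / proxy.var()

# Cross-sectional regression of average returns on measured betas
slope, _ = np.polyfit(beta_hat, rets.mean(0), 1)
print(f"beta-return slope: {slope:.4f}; realized market premium: {mkt.mean():.4f}")
# With realistic parameters the slope stays close to the market premium;
# killing it requires the negative correlations and 100x volatility
# spreads described above.
```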

But these 'proofs' are common. For example, in Stephen Wolfram's book A New Kind of Science, he argues that some simple recursive rules can generate patterns that look very complex and ordered, like fractals. Thus, he argues, perhaps all laws of physics, chemistry, and biology (ergo, a New science) have such simple rules as their fundamental essence. Possible. But how many such recursive processes are out there generating patterns from which to select? Does generating pretty pictures, 'homologous' to the ordered complexity we see in physical laws, imply that such a simple rule underlies them? It could. But given he is talking about rules operating at the sub-quark level we could never observe, I don't see the point, because the search space for such rules is so large we wouldn't find the right one given a million monkeys and a googolplex of time.
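His flagship example is the elementary cellular automaton Rule 30, and it is easy to replicate: the sketch below (my own toy version, not Wolfram's code) applies a three-cell lookup rule to a row of cells and prints a pattern that looks far more intricate than the rule that made it.

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbors. A single rule table, eight cases, nothing more.
WIDTH, STEPS = 64, 32
cells = [0] * WIDTH
cells[WIDTH // 2] = 1                       # one black cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (cells[i - 1], cells[i], cells[(i + 1) % WIDTH]) in
        {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)}    # Rule 30's 'on' cases
        for i in range(WIDTH)
    ]
```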

Similarly, my favorite flâneur, Nassim Taleb, argues that because unexpected things like the market crash of 1987, or Google, or Harry Potter, were unexpected, a wise investment strategy is to allocate, say, 15% of one's portfolio to things with vague but unlimited upside. But what is the denominator in this expected return? Sure, we include conspicuous winners like Google in the numerator, but how many hare-brained investments were tried that didn't generate Google-type returns? Thousands? Hundreds of thousands? My Uncle Frank has a new investment idea each year, and maybe one day he'll hit it big, but I'm not holding my breath.
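The arithmetic is unforgiving. A back-of-envelope sketch (all numbers made up for illustration) shows how the expected return of such a long-shot sleeve hinges entirely on the base rate of winners, the denominator we never observe:

```python
# Expected multiple on $1 in a sleeve of long-shot bets: winners pay
# 1000x (hypothetical), losers go to zero. Everything rides on P(win).
payoff_multiple = 1000
for base_rate in (1 / 100, 1 / 1_000, 1 / 100_000):
    expected = base_rate * payoff_multiple          # losers contribute 0
    print(f"P(win) = {base_rate:>9.5%} -> expected multiple: {expected:.3f}")
# At 1-in-100 the sleeve is a bonanza; at 1-in-100,000 it is a slow bleed.
```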

And then there is evolution. [Feel free to ignore this evolution stuff. I myself thought such people were idiots 12 months ago, but as mentioned, I had an epiphany reading about those who think we are avatars in a giant World of Warcraft computer game run by a hyper-developed intelligence. I have no proof, nor expect any. But I still don't believe cellular machinery arose merely by natural processes, a whimsical belief with little relevance.] Sure, the bacterial flagellum involves a constellation of proteins and cellular processes that is improbable, but given the advantage of such a function (propulsion), isn't it probable that, given enough time, something like this would be found? Well, it depends on how large the space of such constellations of proteins is, as every mutation in DNA's four-letter code (A, C, G, T) potentially affects one amino acid, which may affect the ultimate protein, its address, the gene-expression regulators, etc. How many successive changes are needed to rearrange a collection of 'homologous' proteins used for different purposes into a workable flagellum, and how large is the space of possible paths of changes that end up in evolutionary dead ends (as almost all mutations do)?
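For a rough sense of scale, here is some toy state-space arithmetic (placeholder numbers, not real biology): even a single modest protein lives in a sequence space that dwarfs any plausible number of evolutionary trials.

```python
import math

L = 300                                   # hypothetical protein length
log_space = L * math.log10(20)            # 20 amino acids per position
print(f"sequences of length {L}: ~10^{log_space:.0f}")

# A generous, made-up upper bound on total mutation events ever tried:
log_trials = 40
print(f"fraction of the space explored: ~10^{log_trials - log_space:.0f}")
```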

Like an inefficient market proxy generating a zero beta-return relation, a proof of existence implies there's a chance, but common sense about the context tells me these solutions are not correct, because they are incredibly improbable given the state space.

3 comments:

Anonymous said...

yes, but evolution has a precise explanation for why our common sense thinks it's so improbable, while the other camp...not so much.

in mauboussin's (spell?) words "what have you learned in the last 30 seconds?" this is how much has passed since we started thinking in terms of probabilities, if the history of homo sapiens were a 24 hr clock. (or smth in that neighborhood. quoting from memory here). we're just not wired that way. yet.

anyway, large probability on top of large probability for a few billion cycles can lead to a highly improbable end result.

Anonymous said...

On what basis can one say something appears improbable? Perhaps I am not thinking about this the right way, but isn't it just a point of view? Let's say I look at a billion coin flips. I find all of them heads. Is that improbable? What if I find the first half heads and the second half tails? What if I see a HTHT pattern? Are these all improbable? By this logic it would seem any specific pattern is just as unlikely as any other: one in 2^(a billion). Seeing all heads would make someone question ex-post whether this is a fair coin or whether god is behind the coin flips.
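A toy calculation makes the point (30 flips, so the numbers are printable):

```python
from math import comb

n = 30
p_specific = 0.5 ** n                 # ANY exact sequence: all heads, HTHT..., etc.
p_near_half = sum(comb(n, k) for k in range(13, 18)) * 0.5 ** n
print(f"any one exact sequence: {p_specific:.2e}")
print(f"a sequence with 13-17 heads: {p_near_half:.2f}")
# 'All heads' is suspicious not because that sequence is rarer than any
# other, but because it belongs to a tiny, simple-to-describe class.
```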

Also, we don't even have an intuition of all other possibilities. You are thinking of the possible worlds as we know them or as we can imagine them. What about other potential worlds that we cannot even imagine? Saying something is 'probable' in this context seems meaningless to me.

Of course all of what I say applies to the origin of the universe and of life. Wrt evolution you are talking about a time frame of a few bn yrs. On top of that you also have selection (natural, sexual, etc.) which would really speed things up.
Suppose you combine random letters from the alphabet to try to spell 'evolution'. The odds of hitting it right by pure chance would be about 1 in 26^9, roughly 5.4E12. But suppose that if you pick the right letter in the word by chance, you keep it for the next iteration. With such selection, what do you think the odds would be?
Andrew Lo uses a similar analogy wrt the mkts.
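That's essentially Dawkins' 'weasel' algorithm, and it's easy to see in a few lines (my own toy sketch): selection keeps correct letters and redraws the rest, collapsing a ~5.4E12 search into a few dozen iterations.

```python
import random
import string

TARGET = "evolution"
LETTERS = string.ascii_lowercase

guess = [random.choice(LETTERS) for _ in TARGET]
tries = 0
while "".join(guess) != TARGET:
    tries += 1
    # Selection: keep letters that already match, redraw the rest.
    guess = [g if g == t else random.choice(LETTERS)
             for g, t in zip(guess, TARGET)]
print(f"hit '{TARGET}' in {tries} tries")   # typically well under 200
```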

J said...

c3h5U8

The probability of someone leaving the above comment is so low that ... it is impossible.

Regarding evolution, once you have a self-reproducing molecular machine in place, it becomes inevitable.