A theory is an analytic structure designed to explain a set of empirical observations. It should be nontautological; tautologies are for theorems. A good theory identifies a relation that lets one simplify or predict better than one could without it, where 'better' might mean a lower mean-squared error, or that the simpler model makes it easier to intuit a solution using one's own wet neural net. A good theory can often be modeled, though not all theories are helped by modeling (eg, the theory that 'power corrupts', or 'the invisible hand').
Attempts to formalize the principles of the empirical sciences in the same way mathematicians axiomatize the principles of logic use an interpretation to model reality. As economic systems do not have the precision of physical systems (there are no fundamental constants analogous to the force of gravity on Earth, no equations as strict as PV=nRT), I think economic modeling has gone too far, in that most top economic journals stress the model itself, as if improving its generality, especially with sufficient rigor (eg, theorems using set theory or value functions), were the end. But models are means to an end, and as no model in economics has the same grip on reality as those in physics, we should recognize their intermediate status.
A lot of good theory, as in Hayek or Adam Smith, is best presented as mere words. A lot of top theorists today have created models that have not generated really novel, important results (eg, Mankiw, Cochrane, Campbell, Lucas, Romer, Krugman); instead, the models seem to offer hope that they will prove foundational, creating a method for introducing a theory that explains tersely and offers the potential for refinements that yield new, important insights. That hope, alas, is not grounded in experience. Bellman equations, input-output matrices, second-order difference equations, and set theory were all once thought to provide essential methods for understanding economics, yet they have been highly disappointing in what they have produced.
I wrote a book, Finding Alpha, which had a simple theme; my newly updated paper on this theory is here. Risk, however defined, is not empirically positively related to return. This is not an exception to the rule, it is the rule. This is explained by the fact that if most investors are benchmarking—against the S&P 500, for example—then there is no risk premium. As people appear to be more driven by envy than greed, this makes sense.
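The benchmarking logic can be made concrete with a small simulation (a sketch with hypothetical numbers, not the paper's own model): if an investor is judged on returns relative to a benchmark, then holding the benchmark itself carries zero relative risk no matter how volatile it is in absolute terms, so no premium is needed to induce holding it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark (market) returns: quite volatile in absolute terms.
market = rng.normal(loc=0.05, scale=0.20, size=100_000)

# An absolute-utility investor cares about the volatility of raw returns.
abs_risk = market.std()

# An envy-driven, benchmarked investor who holds exactly the benchmark
# has a relative return of identically zero: the risk that matters to
# that investor vanishes, so no risk premium is demanded.
rel_return = market - market          # own portfolio minus benchmark
rel_risk = rel_return.std()

print(f"absolute volatility: {abs_risk:.3f}")   # roughly 0.20
print(f"relative volatility: {rel_risk:.3f}")   # exactly 0.000
```

Deviating from the benchmark, not holding the volatile asset, is what creates risk for such an agent.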
I discovered previous theoretical work consistent with this idea: models that contained my idea as a special, extreme case. For example, Gali (1994) notes that if utility is a function of both one’s own consumption, c, and aggregate consumption, C, such that
U(c,C) = (1-A)^(-1) c^(1-A) C^(γA)
where γ<1 and A>0, then if γ<0 there are consumption externalities, "keeping up with the Joneses" effects that cause people to herd into aggregate risky wealth investments. That is, emulating the average risky-asset investment lowers the risk of falling too far behind aggregate consumption, and this causes the required risk premium to be lower than it otherwise would be. However, Gali also notes that if γ>0, there are public-goods aspects to aggregate consumption, as when your neighbors spend their money on making your neighborhood more beautiful. This would increase the theoretical equity premium.
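Given the utility function above, a one-line derivation (a sketch based on the formula as written here; Gali's own notation and sign conventions may differ) shows how the externality parameter works. The marginal utility of own consumption is

∂U/∂c = c^(-A) C^(γA)

and in a symmetric equilibrium where everyone holds the market, c = C, this becomes C^(-A(1-γ)). The pricing kernel thus behaves like that of a standard agent whose curvature is A(1-γ) rather than A, so the consumption-externality parameter γ rescales effective risk aversion and shifts the equilibrium premium away from the standard case.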
DeMarzo, Kaniel, and Kremer (2004) present a model where agents' utility is a function of two types of consumption: standard goods and positional goods. Positional goods are things like mates, beachfront properties, or table seatings at a restaurant, whose supply is unaffected by aggregate wealth. They create a ‘complete’ model by having the positional goods proxied by service consumption in period 2, provided by a fixed amount of labor, so that regardless of the total wealth in the model, people compete for access to services in exactly the same way. Thus, the positional nature of the service goods is endogenous to the model. Their utility function is given by
U(c_g, c_s) = (1-A)^(-1) (c_g^(1-A) + c_s^(1-A))
where A is the standard coefficient of risk aversion, c_g the consumption of goods, and c_s the consumption of services. Total output for the economy is given by 1+θx, where x is the allocation of wealth to the risky asset and θ is a random variable. The production of services, however, is fixed by the size of the labor pool and unaffected by the output from 'risky' investment. DeMarzo et al show that under some parameterizations of θ a zero risk premium can occur.
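The mechanism that makes services positional can be illustrated numerically (a stylized sketch with hypothetical parameters, not the authors' actual equilibrium): with a fixed stock of services, each agent's service consumption depends only on relative wealth, so scaling aggregate output leaves the allocation unchanged.

```python
import numpy as np

def service_allocation(wealth, service_supply=1.0):
    """Split a fixed stock of services in proportion to relative wealth.

    A stand-in for the period-2 services market: richer agents outbid
    poorer ones, but only *relative* wealth determines who gets what.
    """
    wealth = np.asarray(wealth, dtype=float)
    return service_supply * wealth / wealth.sum()

w = np.array([1.0, 2.0, 3.0])

base = service_allocation(w)
boom = service_allocation(10 * w)     # aggregate output up tenfold

print(base)                           # shares of 1/6, 1/3, 1/2
print(np.allclose(base, boom))        # True: positional, not absolute
```

Getting ahead in services is a zero-sum game, which is exactly what makes them positional and breaks the usual link between aggregate risk-taking and consumption of this good.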
These other findings highlight that my logic is correct: a relative-status orientation can lower risk premia to zero. My main innovation is merely to make this connection less subtle, less tentative, less convoluted. In those papers the main innovation was basically to show that one could generate overinvestment or underinvestment because of consumption externalities: Gali assumed it, while DeMarzo et al set up a 'services' portion of utility with clear positional-good characteristics. They did not emphasize that risk premiums are often, if not usually, zero, only that this could happen.
The key is what utility function best explains the world we see, and a relative-status one works much better than a standard one based on decreasing marginal utility of absolute consumption. To add a parameter that captures the fact that some consumption is of positional goods, and assert this can create different risk premia, including under some parameters a zero risk premium, is not so much a theory as a flexible model.
To have results such as positional goods arise ‘endogenously’ via services provided by a fixed labor supply, but then to highlight highly arbitrary parameterizations that yield equilibria different from the standard models, does not make the idea of a relative-wealth orientation more compelling to me, but that appears to be a matter of taste. I was told by one referee that my model was too simple, and the fact that these papers (Gali and DeMarzo et al) were published in top journals highlights that economics demands a certain level of rigor regardless of whether it is needed. That is, I could have generated a general model that encompassed my main result and the standard model as special cases as a parameter moves from 0 to 1, but I find that disingenuous, pretentious, and ambiguous. The main point is that if people are better described as envious, not greedy, there is a lot of benchmarking, and this leads to zero risk premiums, which is generally what we observe. That's a simple idea, which makes it a better idea than saying something less definitive but more ornate. But that's why I'm not an academic.
8 comments:
'...if utility is a function of both one’s one consumption....'
I take that to mean 'one's own consumption.'
In my understanding of the philosophy of science, a scientific theory is a hypothesis or a set of hypotheses. The theory is expressed through a model, either conceptual or mathematical. The model is the syntactic aspect of the theory, and it is a set of tautologies. The theory is connected with data through semantic interpretation of its theorems, so that the elements of the model can be related to data through observation (in the case of a purely conceptual model) and through measurement (in the case of a quantitative or mathematical model).
A scientific theory/hypothesis cannot be expressed without a model, either conceptual or mathematical. The simplest model of a hypothesis is a hypothetical syllogism, which provides the logical form. The premises are assertions about data, the truth or falsity of which can be checked through observation. A syllogism with valid form and true premises constitutes a sound argument for the truth of the hypothesis. If the hypothesis involves a necessary condition, then a single false instance is a sufficient condition for disconfirmation.
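The disconfirmation logic this comment describes — if hypothesis H entails observation O, and O fails, then H is false — is modus tollens, whose validity a brute-force truth table verifies:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Modus tollens: from (H -> O) and (not O), infer (not H).
# The form is valid iff the conclusion holds in every row of the
# truth table where both premises hold.
valid = all(
    (not h)                       # conclusion: not H
    for h, o in product([True, False], repeat=2)
    if implies(h, o) and not o    # rows where both premises are true
)
print(valid)  # True
```

This is why a single false instance of a necessary condition suffices to disconfirm the hypothesis.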
I agree that your theory offers a simple and direct account of observations. The material question is whether modeling a main effect is sufficient for a theory. I will suggest that it is not. A theory should offer an explanation as to why we will observe deviations.
If it holds up for the next 10 years, that theory should deserve a Nobel Prize. A lot less useful work than this has won such prizes. But you should also be able to make more money with this idea than the prize committee can offer.
Anybody can see that models are NOT theories. Here are some models:
http://tinyurl.com/yl8cv7q
And here are some theories:
http://en.wikipedia.org/wiki/Evolution
http://en.wikipedia.org/wiki/Theory_of_relativity
I hope that clears this up.
Amazing as always
Post a Comment