In physics, there are constants defined to 10+ decimal places. Most economic debates are about the sign: is Coke riskier than GM stock? Does increasing the minimum wage increase aggregate worker total wage, or reduce it? Would increasing government spending increase or decrease GDP over the next 5 years?
Consider the following example, from Shane Frederick's presentation on time discounting at MIT. This is the estimate of the speed of light over the past 150 years or so:
In contrast, below is a set of estimates of an important parameter in economics, the time discount parameter. If delta = 1, you weight the future equally with today, and are indifferent between receiving a massage tomorrow or in a year. If delta = 0, you ignore the future entirely, and so glue sniffing is optimal, because though it kills your brain cells, it is a great rush over the next 5 minutes (or however long glue-sniffing highs last).
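Concretely, the role of delta can be sketched in a few lines (a hypothetical illustration, not from Frederick's presentation; the daily delta value is made up):

```python
# Exponential discounting: the present value of a payoff u received
# t periods from now is u * delta**t.
def discounted_value(u, delta, t):
    """Present value of utility u received t periods from now."""
    return u * delta ** t

# delta = 1: a payoff tomorrow and a payoff in a year are valued equally.
print(discounted_value(100, 1.0, 1))     # 100.0
print(discounted_value(100, 1.0, 365))   # 100.0

# delta = 0 (with t >= 1): the future is ignored entirely.
print(discounted_value(100, 0.0, 5))     # 0.0

# A daily delta of 0.999 shrinks a year-out payoff to roughly 69% of its value.
print(round(discounted_value(100, 0.999, 365), 1))
```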
Note that even the earliest estimates of the speed of light were reasonably close to the true value (or, the current value), off by only about 0.07%, and they converged to the current estimate about 50 years ago. In contrast, estimates of the discount factor look like random draws from the uniform distribution between 0 and 1, though with some clustering between 0.9 and 1.0.
So the equations in advanced economics and physics look the same, but that is a pretty superficial similarity.
3 comments:
Two possibilities:
(1) What the paper calls a "time discount factor" is actually a set of different things being called by the same name -- i.e., the measurements disagree because they're not measurements of the same quantity.
(2) The measurements are of a unitary phenomenon, but are being made in many different ways, each of which is subject to systematic errors so that there is no agreement, even in the range of expected values.
The recent cluster near one suggests that there is at least some agreement. It would be interesting to see whether measurement methodology is correlated within that cluster.
But given the lack of agreement, I think (1) is the more likely culprit here, and I agree with the conclusions of the paper.
The better way to model utility from underlying preferences is to permit intertemporal variation in preferences, and measure revealed preferences as the cumulative frequency of certain activities within a window of time.
Since we can now track how such preferences change in real time thanks to things like Google and Twitter, we ought to update our economic theories accordingly.
http://brokensymmetry.typepad.com/broken_symmetry/2008/07/how-to-observe.html
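The commenter's proposal, measuring revealed preferences as the cumulative frequency of activities within a time window, could be sketched roughly as follows (a hypothetical illustration; the function name, event data, and activity labels are invented):

```python
from collections import Counter

def revealed_preferences(events, window_start, window_end):
    """events: list of (timestamp, activity) pairs.
    Returns the relative frequency of each activity observed
    within [window_start, window_end]."""
    in_window = [a for t, a in events if window_start <= t <= window_end]
    counts = Counter(in_window)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

# Preferences measured in different windows can differ, capturing
# intertemporal variation rather than assuming one fixed parameter.
events = [(1, "spend"), (2, "spend"), (3, "save"), (10, "save"), (11, "save")]
print(revealed_preferences(events, 1, 3))    # spending dominates early
print(revealed_preferences(events, 9, 12))   # saving dominates later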
Physicists actually joke about how, as a group, they are much worse at math than economists. Since their models are gauged against real, measurable phenomena, the model itself doesn't need to be as clean: the measurements will reveal the inaccuracies in due course.
I haven't read the study, but don't you think there is both inter- and intra-personal variation in the discount factor? I have a different time preference than other people, and my own time preference isn't invariant either. When I was fourteen, you could often have caught me with a can of lighter gas (way better than glue), though I actually used a Zippo to light my cigarettes. These days I use a straw to drink carrot juice so I don't stain my teeth...