Tuesday, September 24, 2013

I've got a new job with Pine River, and I want my new colleagues to know I'm not going to blab about anything that comes up, so blogging is now really over. Of course, if you bump into me you can always buy me drinks and try to get me to spill the beans (about non-proprietary matters), but I should warn you, I can drink a lot of beer. Best.
Tuesday, September 17, 2013
Historical CBO Budget Projection Highlights Bias
Recently the CBO issued its annual budget projection, and it's pretty benign: debt holds roughly steady for the next decade, then climbs at a measured pace.
Yet, note that in the last recession our debt relative to GDP doubled. Given that economists still don't have a good model for predicting business cycles let alone avoiding recessions, we can expect more of them. I think the odds that we elect a modern-day Calvin Coolidge next term are much smaller than the odds the deficit will increase dramatically when the next recession hits.
Consider that in 2007, before anyone saw any hint of the 2008 crisis, the debt was actually projected to fall, but we know how that turned out (black lines are the actual historicals). That is, they never anticipate recessions, though we all know recessions haven't been abolished. I pulled these numbers out of the 2007 'wayback machine', which is a great way to hold large institutions accountable, because for some reason the CBO doesn't keep its historical forecasts on its current site (maybe the NSA can get Google to scrape them away?). Liberals who happen to be economists (eg, Brad DeLong) think the latest objective projections prove we have no budget worries. I guess some people really do think This Time It's Different.
Sunday, September 08, 2013
MSCI Quality Index
I was unaware MSCI had beaten AQR to the punch by producing a boatload of quality indices last spring. These are applied worldwide, so they are necessarily more parsimonious than AQR's...but jeez, these are really barebones:
1) Net Income/Book Equity
2) Debt/Book Equity
3) Earnings volatility over 5 years
Instructively, they Winsorize the data, which everyone should do to financial ratios (ie, truncate extreme values). But, book equity in the denominator? Earnings volatility over 5 years? Those seem like bad choices, and AQR's quality index will be superior.
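Winsorizing is trivial to implement. Here's a minimal sketch in Python, assuming a plain array of Net Income/Book Equity ratios (the data and the 1%/99% cutoffs are hypothetical):

```python
import numpy as np

def winsorize(x, lower_pct=1, upper_pct=99):
    """Clamp values below/above the given percentiles to those percentiles."""
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)

# Hypothetical ROE ratios: tiny book equity in the denominator produces
# the absurd extremes that make Winsorizing necessary.
roe = np.array([0.08, 0.12, -0.05, 0.15, 9.70, -12.40, 0.10])
print(winsorize(roe))
```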
I have a feeling MSCI is a bit confused, as they have another tab noting their 'Risk Premia Indexing', where they note:
An accumulating body of empirical research has found positive gross excess returns from exposure to factors (or risk premia) such as Value, Momentum, Low Size (small firms), and Low Volatility stocks. The studies show that these factors historically have improved return-to-risk ratios. Today, interest in risk premia (also known as smart beta or alternative beta) has been widespread across the institutional investor community.

In other words, risk premia are really return premiums, because predictable returns only come from risk (in theory). But then, they also 'improve return-to-risk ratios' because, as we all know, these factors aren't risk in any obvious way, so strangely they all have 'excess return' premia. Indeed, value and size were initially thought to be due to distress risk, which would show up only episodically. Alas, 'quality' is basically a metric of anti-distress, and this generates a return premium, which MSCI occasionally calls a 'risk premium'...so basically whatever asset outperforms over the next 20-year period will ex post be declared risky.
The risk-begets-return model of economics is clearly nonfalsifiable amongst current financial academics and their coterie. They still say interesting things on occasion, so it doesn't render them useless, but it definitely impairs their ability to see and interpret reality.
Wednesday, September 04, 2013
de Botton on Status Anxiety
I find Alain de Botton's approach to philosophy rather refreshing, because one senses in his works a genuine lack of certainty and an appreciation of discovery. He's interested in applying virtue for daily betterment, and the search for meaning, two very important goals in my life. Interestingly, he was insightfully quoted in a NYT review of Sophie Fontanel's self-indulgent book on her self-induced celibacy, which highlighted his breadth and profundity (de Botton's quip was basically that 'sex is messy, get over it').
Anyway, here's de Botton on status anxiety. He argues that status anxiety is worse than ever because we now believe we are less constrained by our birth, more responsible for our fate. Paul Krugman agrees with this view of life but, like most economists, can't take it to its ultimate implication: that this leads to a zero risk premium, which, when combined with the various attractions of sexy stocks, leads to high risk assets having lower-than-average returns (see my book The Missing Risk Premium).
Sunday, September 01, 2013
How to Maximize Lottery Revenue
As a proponent of the idea that people are oriented towards their relative success, not absolute wealth, I think this lottery idea is fiendishly clever. Here's a description from The Week of how it capitalizes on this instinct:
A salient example is the "Postcode Lottery" in the Netherlands. Weekly it awards a "Street Prize" to one postal code, the Dutch equivalent of a zip code, chosen at random. When a postal code (usually about 25 houses on a street) is drawn, everybody who played the lottery in that code wins about $12,500 or more. Those living there who neglected to buy a ticket win nothing — except the chance to watch their neighbors celebrate.
In a 2003 study, researchers in the Netherlands noted that fear of regret played a significantly larger role in the Postcode Lottery than in a regular lottery. It was not the chance of winning that drove the players to buy tickets, the researchers found, it was the idea that they might be forced to sit on the sidelines contemplating missed opportunity.
The Boring Premium
Todd Mitton and Keith Vorkink from (boring) BYU published Why Do Firms With Diversification Discounts Have Higher Expected Returns? Their answer: no skew. People will pay up for lottery tickets, but if you take those dreams away, the asset gets neglected. They find diversified firms offer less skew, and that 'diversification discounts are significantly greater when the diversified firm offers less skewness than typical focused firms in similar business segments.' They suggest 'a substantial proportion of the excess returns received on discount firms relative to premium firms can be explained by differences in exposure to skewness.'
The implication is clear: people pay a premium for volatile stocks that have stories and potential. Conditional upon playing in a risky game, such as equities, there's not a return premium for risk, there's a premium for boring.
Sunday, August 25, 2013
AQR's Quality at a Reasonable Price
Our intrepid equity researchers at AQR have come out with a new paper adding to the color on how to pick a strategy given value considerations. In Asness, Frazzini and Pedersen's latest paper, Quality Minus Junk, they first try to create a 'quality' metric, and then try to meld it with value.
Quality is defined very clearly as the composite of 4 factors (each of which is made up of 3-5 ratios):
- Profitability (eg, Net Income/Assets)
- Growth (eg, change in Profitability)
- Safety (eg, volatility, leverage)
- Payout (eg, equity issuance, dividend payout)
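To make the construction concrete, here's a minimal sketch of the standard z-score-and-average approach, assuming hypothetical data and collapsing each factor to a single ratio rather than their 3-5:

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    # Cross-sectional z-score: de-mean and scale by standard deviation
    return (s - s.mean()) / s.std()

# Hypothetical firm-level inputs, one ratio per quality pillar; safety is
# entered with a negative sign (less volatility/leverage means safer).
df = pd.DataFrame({
    'profitability': [0.12, 0.03, 0.22, -0.01],   # eg, Net Income/Assets
    'growth':        [0.02, -0.01, 0.05, 0.00],   # eg, change in profitability
    'safety':        [-0.15, -0.40, -0.10, -0.55],
    'payout':        [0.03, 0.00, 0.04, 0.01],    # eg, net payout yield
}, index=['A', 'B', 'C', 'D'])

# Quality = average of the standardized pillars; sort to form long/short legs
df['quality'] = df.apply(zscore).mean(axis=1)
print(df.sort_values('quality', ascending=False))
```

This is not AQR's exact recipe, just the shape of it: standardize each input, average within pillars, then average the pillars.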
They find that
1) Stocks with higher 'quality' have higher market/book ratios (higher price ceteris paribus)
2) A long-short portfolio, where one goes long high quality, short low quality, generates significant, positive excess and total returns
They assert that a value-quality portfolio that tries to balance quality with value has nice properties, and the Sharpe maximizing combination is about 70% quality, 30% value. This is coming from Asness, who is a pretty big value proponent, so I think this is rather telling (value losing its pre-eminence!).
Their quality metric has a kitchen-sink aspect to it, with about 20 ratios that go into those 4 different groupings. I could imagine many people would find this an attractive framework to develop and tweak their own quality metric, substituting for various ratios, or subtle changes to the functional form. Haugen and Baker's (2008) Case Closed, and Zack's Handbook of Investment Anomalies are good places to look for alternative ratios.
I would like to see how this QMJ factor compares to Analytic Investors' Volatile Minus Stable (VMS) factor...they seem similar, though obviously 1) they are negatively correlated and 2) the VMS factor is simply a vol factor, which is just one part of the 'quality' metric.
Lastly, I love the little note at the end:
Our results present an important puzzle for asset pricing: We cannot tie the returns of quality to risk.

By construction their return-generating metric seems patently 'anti-risk', as quality implies 'low risk'. The risk-begets-return theory obviously survives on intuition, because empirically it's counterfactual when not irrelevant. I think if you divided the data described by asset pricing theory into 'puzzles' and 'consistent', it's mainly puzzles.
Economath and the Drake Equation
There were several posts last week on the hypothesis that there's too much emphasis on mathematical modeling in modern economics. Most said yes (David Henderson, Bryan Caplan, Noahpinion, Robin Hanson, The New York Times), though Krugman said no.
Krugman's experience is very pertinent, as his Nobel Prize winning model on increasing returns to scale is a good example of obtuse economodeling: its thesis was known long before, being the basis of the centuries-old infant industry argument, and after Krugman it was no easier to apply. Consider Detroit, a popular application for regional increasing returns when applied to autos in the early 20th century: what were the key conditions that allowed it to enjoy increasing returns to scale in the early 20th century, but then decreasing returns to scale later in the century? He doesn't say.
Krugman responded that his theory changed the debate, because it showed--under certain parameterizations--that increasing returns to scale can be an argument for lower trade barriers! While true, this is a possibility, not a probability, and those who believe in increasing returns to scale are invariably more inclined to believe in selective tariffs; that is, they don't use Krugman's model to support free trade but rather increased protection. So it hasn't changed the debate, counter to his assertion that his New Trade Theory is "probably the main story" in import-export arguments for decreasing trade restrictions; his model merely added another obscure reference for the confabulators. Increasing returns to scale remains 1) a fringe argument and 2) used primarily to support trade restrictions, as it was in the 1900s before Krugman's New Trade Theory model.
Krugman is a very smart person, but the fact he can't see this highlights that the greatest lies we tell are the ones we tell ourselves, because he clearly has the capacity to see slight inconsistencies and flaws in others (he's a meticulous advocate against his opponents).
I think a lot of math in econ is like the cargo cult phenomenon, where people see correlations (planes and cargo) and suppose the essence of something is one of those correlations (eg, build models of planes, and cargo will show up). Thus, just as naive people think the essence of a good poem is rhyming, naive economists think that setting up a hypothesis as if one were deriving the Dirac equation or special relativity is the essence of a science. Unfortunately, economic equations rarely work out that way.
Consider the Drake equation:

$$N = R_{*} \cdot f_{p} \cdot n_{p} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$$

Where
N = the number of civilizations in our galaxy with which communication might be possible
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
np = the number of planets, per solar system, with an environment suitable for life
etc.
None of the terms can be known, and most cannot even be estimated. As a result, the Drake equation can have any value from a hundred billion to zero. An expression that can imply anything implies nothing. I mean, this formulation is worthy of writing down, but it's very different than the Dirac equation or Newton's laws, even though at some level there's a similarity.
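To see how unconstrained the output is, here's a toy calculation with optimistic and pessimistic guesses for the same unknowable terms (all values hypothetical, which is exactly the point):

```python
# Drake equation: N = R* x fp x np x fl x fi x fc x L
def drake(R, fp, n_p, fl, fi, fc, L):
    return R * fp * n_p * fl * fi * fc * L

print(drake(10, 1.0, 1.0, 1.0, 0.5, 0.5, 1e9))    # ~2.5 billion civilizations
print(drake(1, 0.1, 0.1, 1e-3, 1e-3, 1e-2, 100))  # ~1e-8, effectively zero
```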
I remember teaching a money and banking course, and a fun way to get the kids introduced to economic models is to show them the Baumol-Tobin money demand model. This can be derived from some simple assumptions, applying calculus to the cost-minimization problem individuals face, generating the equation:

$$M = \sqrt{\frac{CY}{2i}}$$

Where
M = money demand (average money holdings)
C = cost of withdrawing money
Y = total income
i = interest rate
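A sketch of the derivation, assuming withdrawals come in lumps of size $K$ so average holdings are $K/2$:

$$\min_{K}\; C\frac{Y}{K} + i\frac{K}{2} \;\Rightarrow\; -\frac{CY}{K^{2}} + \frac{i}{2} = 0 \;\Rightarrow\; K^{*} = \sqrt{\frac{2CY}{i}}, \qquad M = \frac{K^{*}}{2} = \sqrt{\frac{CY}{2i}}$$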
All very rigorous and tidy. Yet, it doesn't help predict interest rates, or the size of money aggregates. It's empirically vacuous, because it simply doesn't fit the data.
That's one of the more concrete equations. Most equations are like this one for money demand:

$$M^{d} = f(\underset{+}{Y_{p}},\ \underset{-}{i},\ \ldots)$$

Basically one merely argues which arguments should be in the function, and the signs of the derivatives on those arguments. Thus, the first argument is 'permanent income' Yp, and the first derivative here is positive. Yet, the parameters can vary wildly, and may even be endogenous themselves. At the end of the day, atheoretical vector-autoregressions do a better job predicting any of these variables.
Yet, for all the insufficiency of mathematics in creating a good science, sociologists show that an absence of rigor doesn't seem to be any better. I think this highlights there's no delusion greater than the notion that method can make up for lack of common sense. Ultimately, there is no method but to be very intelligent.
Tuesday, August 13, 2013
Is The Low Vol Anomaly Really a Skew Effect?
The idea that low volatility stocks have higher returns than high volatility stocks is difficult for economists to digest, because it's so hard to square with standard theory. It brings to mind Dostoyevsky's line, "If God is dead, then everything is permitted." Similarly, when one's favored theory is abandoned, it seems like all explanation is lost and chaos reigns. Yet when a wrong theory is adopted, well, as the ever-logical Bertrand Russell used to note, if 1+1=1, everything is both true and untrue. We need a framework to evaluate reality, and it has to be consistent.
Alas, many frameworks are largely untrue, leading to inconsistencies and explanations that are transparently tendentious. The sign of a bad Weltanschauung is that explanations for reality become more and more convoluted, like epicycles in Ptolemaic astronomy. I'll gladly enjoy the hypocrisy of those who don't share my worldview because, as the Detroit bankruptcy has reminded us (eg, its bankruptcy blamed on too much or too little gov't), people might admit tactical errors, but they'll go to their grave with their worldview (see Max Planck).
Consider the recent papers arguing that low volatility is really just a skew effect, in which case their worldview is safe. In the recent Journal of Economic Perspectives, longtime behavioral finance academic Nicholas Barberis wrote a paper on Kahneman and Tversky's prospect theory (that's Nobel prize winning Danny Kahneman, whose unimpeachability is somewhere around that of Nelson Mandela). It's helpful to note that this insight is 34 years old, because many seem to think these newfangled behavioural insights are going to revolutionize economics, as if they haven't been applied continuously over the past generation.
Barberis goes over his Barberis and Huang (2008) model, where prospect theory is used to motivate the hypothesis that the skewness in the distribution of a security's returns will be priced. A positively skewed security (one whose return distribution has a longer right, or upper, tail than left tail) will be overpriced relative to the price it would command in an economy with standard investors. As a result, investors are willing to pay a high price for lottery-ticket type stocks.
Barberis references several papers, including Bali, Cakici, and Whitelaw (2011), and Conrad, Dittmar, and Ghysels (here's the 2009 version, though a more recent version was just published in the Journal of Finance). He also finds it relevant to the underperformance of IPOs, the low average return of distressed stocks, of bankrupt stocks, of stocks traded over the counter, and of out-of-the-money options (all of these assets have positively skewed returns); the low relative valuations of conglomerates as compared to single-segment firms (single-segment firms have more skewed returns); and the lack of diversification in many household portfolios (households may choose to be undiversified in positively skewed stocks so as to give themselves at least a small chance of becoming wealthy).
It seems like an orthogonal way to address these puzzles compared to the constrained-rational approach offered by Betting Against Beta, but there's a problem: the well-known equity risk premium has a negative skew relative to what's considered less premium-worthy, long-term bonds. That is, equities in general have a lower (ie, more negative) skew than bonds, yet earn the most prominent 'risk premium' of all, so the skew story fails on the rule's central case, not some exception.
US Monthly Data 1962-2013

         | 10-year US T-Bond | SP500 Index
AnnRet   | 7.05%             | 7.28%
AnnStdev | 6.86%             | 15.05%
Skew     | 61.09%            | -42.16%
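For reference, here's a minimal sketch of how such summary statistics might be computed from a monthly total-return series (the data below are randomly generated stand-ins, not the actual series):

```python
import numpy as np
from scipy.stats import skew

def summarize(monthly_returns):
    r = np.asarray(monthly_returns)
    ann_ret = (1 + r.mean()) ** 12 - 1     # annualized mean return
    ann_std = r.std(ddof=1) * np.sqrt(12)  # annualized standard deviation
    return ann_ret, ann_std, skew(r)       # skew of the monthly series

# Hypothetical stand-in for 1962-2013 monthly SP500 total returns
rets = np.random.default_rng(0).normal(0.006, 0.043, 612)
print(summarize(rets))
```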
Note that indices have negative skew while individual stocks have positive skew. This is because correlations go up in down markets, and this predictable tendency creates a problem for idiosyncratic skew pricing models. That is, in the CAPM and other asset pricing models, risk factors have prices that are linear in the covariances, otherwise there is arbitrage, the essence of the Arbitrage Pricing Theory: whatever risks are priced, they are based on additive moments, so risk and returns are linear functions. Now we have priced risks that are not just diversifiable, but change sign depending on what else is in the portfolio. If true, there is an implausible level of profit to be had from buying portfolios and selling the constituents.
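The linearity at issue can be stated compactly: covariance with the market is additive across a portfolio's constituents, so covariance-based expected returns aggregate linearly, while a security's own skewness is not additive, so skew-based prices depend on what else you hold:

$$\mathrm{Cov}(r_A + r_B,\, r_m) = \mathrm{Cov}(r_A, r_m) + \mathrm{Cov}(r_B, r_m), \qquad \mathrm{Skew}(r_A + r_B) \neq \mathrm{Skew}(r_A) + \mathrm{Skew}(r_B)$$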
As an Ivy League confabulator, Barberis deftly ignores this inconsistency and instead notes that the equity risk premium makes perfect sense given Benartzi and Thaler's (1995) idea that if you focus only on the net changes in wealth (technically, U(x) vs. U(w+x)), you can get this to work in cumulative prospect theory, because losses hurt more than gains, so one gets paid to take risk in this case.
Alas, there's a limit to how much skew and variance can both be priced in the same universe, where people love positive skew and hate variance. If skew explains most of the volatility anomaly, that implies people can't be globally risk averse, because they would like extreme up-moves too much, and these happen proportionally more for volatile stocks. Yet if that's true there's no risk premium of any sort, because people would simply buy single assets or derivatives and have no incentive to mitigate risk via bundling and arbitrage. This has been shown formally by Levy, Post, and van Vliet (2003), but it should be intuitive: skew is positively correlated with volatility for stocks with lognormal returns, so there's a point at which one's love of skew dominates one's fear of volatility. If that point is reached, volatility is always less costly than skew is beneficial. This constrains the size of the skew-loving effect to be an order of magnitude less than the risk premium if global risk aversion exists. If global risk aversion does not exist, then the rest of the general framework presented is simply meaningless.
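The positive skew-volatility link is exact for lognormal returns: if $\log(1+r) \sim N(\mu, \sigma^{2})$, then

$$\mathrm{Skew}(1+r) = \left(e^{\sigma^{2}}+2\right)\sqrt{e^{\sigma^{2}}-1},$$

which is zero at $\sigma=0$ and strictly increasing in $\sigma$, so a sufficiently skew-loving investor must eventually stop acting variance-averse.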
So we have prospect theory explaining the overpricing of high volatility stocks due to skew, and the underpricing of equity indices due to 'narrow framing.' One could add that prospect theory is used to explain why people overpay for longshots at the horse track, in that the 'decision weights' applied to payoffs in prospect theory are observationally equivalent to overoptimistic probability assessments (see Snowberg and Wolfers (2010)), and that Danny Kahneman is an admirer of Nassim Taleb's Black Swan theory, which argues that small probability events are generally underappreciated. In other words, whatever the probability density function and expected return, it's explained by prospect theory.
Skew also shows up in the recent publication of Conrad, Dittmar, and Ghysels (2013), who are incredibly meticulous in their analysis of how skew relates to future returns, highlighting what three top researchers over several years can do to data. Yet, they then ignore the elephant in the room: if volatility is negatively priced and skew is positively priced, how do these both exist in equilibrium? It should be hard for these authors to say they don't care, because they are very exhaustive in their analysis, noting at one point:
We use several methods to estimate [the stochastic discount function] Mt(τ) that allow for higher co-moments to influence required returns. These methods differ in the details of specific factor proxies, the number of higher co-moments allowed, and the construction of the SDF.

Alas, as usual in analysis of SDFs, there is no take-away input one can use to measure risk, no soon-to-be-indispensable tool, just a promise that this has all been vouchsafed against high-falutin theory and so 'it's all good.' Consistency is a good thing, but only in certain dimensions. One of the authors, Dittmar (2002), wrote a very nice paper for the Journal of Finance noting that if you restrict a non-linear pricing kernel to obey the risk aversion needed to ensure that the market portfolio is the optimal portfolio, the explanatory power of higher moments goes away. With all the abstruse checks in this paper, one would think he might want to address that issue, but instead he ignores it.
I'm sure former JoF editor Cam Harvey read this while nodding approvingly throughout (he's referenced every other page, and a big believer that risk explains most everything in finance). While understanding SDFs and their risk premiums won't help you get a job at a hedge fund, it will help you get published and be popular among publishing academics.
I agree that skew is important, as it measures the upside potential that delusional lottery-ticket-buying investors love; because of relative wealth preferences, arbitrage is costly and their footprint remains. That's a mathematically consistent story. Skew-loving effects can't exist on a par with variance-hating effects in any consistent story about asset returns. Is this important? Consistency can be overdone, but I don't think this is foolish, because one tends to see what one believes rather than vice versa, and I think there's more power and predictability in viewing volatility as merely a desirable attribute for delusional investors, as opposed to something that pays you a premium.
Paradoxically, behavioral refinements such as prospect theory are preventing needed outside-the-box adjustments and are used to maintain a defective status quo, one that has been wrong on a profound empirical issue for 50 years (ie, the risk premium). These putative revolutionary insights allow academics to wax eloquent on how their complex paradigm handles subtleties such as any of those 50 behavioral quirks, and outside commentators are pleased to be part of a new vanguard, obliviously marching in basically the same, pointless, confabulating path.
Sunday, August 11, 2013
Now Not the Time to Value-Tilt Low Vol
Every week, a low volatility researcher has the same epiphany: tilt low volatility towards value. This addresses two pressing issues simultaneously: avoiding overbought securities and adding value alpha.
A neat articulation of this view is from Feifei Li of Research Affiliates, who first shows that lots of people are investing in low volatility (there's another such piece here by Dangl and Kashofer from the Vienna University of Technology). Clearly assets in low volatility strategies are growing exponentially, and our intuition senses a Malthusian endgame that will be nasty and brutish.
That might seem scary, but to put it in perspective, there's now $80B in value ETFs alone, so this isn't anywhere close to value and size. Next, she shows some valuation metrics. Three different types of low vol portfolios are seemingly higher priced using two different value metrics, book/market and earnings yield. That is, low vol portfolios over the past 10 years used to have higher earnings yields than the market, and higher book/market ratios; now it's the reverse.
To put these into perspective, the relative difference in the book-to-price ratio moving from 0.3 to 0.6 is about moving from the 15th percentile to the 45th percentile. Li suggests adding a valuation criterion to low volatility to counteract this value-creep. The basic idea is, say the book/market ratio has a linear relation with expected return, where a higher book/market is associated with a higher return. So if we take the universe of a set of low vol stocks, say the constituents of the ETF SPLV, which holds the 100 least volatile stocks of the past year, and then take those stocks with the highest book/market ratios within that set, we simultaneously capture more of the value effect and avoid overbought stocks. That seems like a win-win improvement.
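Mechanically, the tilt is simple. Here's a minimal sketch, assuming a hypothetical DataFrame with one row per stock and columns for trailing volatility and book/market:

```python
import pandas as pd

def value_tilted_low_vol(df: pd.DataFrame, n_lowvol=100, n_value=50):
    """Keep the n_lowvol least volatile stocks, then the n_value highest
    book/market names within that set (the proposed value tilt)."""
    low_vol = df.nsmallest(n_lowvol, 'trailing_vol')
    return low_vol.nlargest(n_value, 'book_to_market')
```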
There are two problems with this approach. First, the return to book/market is not linear. Therefore, merely moving your average book/market ratio may make you feel better, but unless you pick the right stocks, you won't change much. Here's the average return by book/market decile, for those stocks above the 20th size percentile of the NYSE (all data here are from Ken French's excellent website; I use the 20th percentile cut-off because stocks below that aren't really investable in scale anyway, so potentially misleading).
Now, these are average monthly return premium above the market average. If we are looking at geometric returns, that sharp increase for the top decile isn't there, but forget that for now (I think the geometric average is more relevant given that in practice people don't rebalance monthly, but to each his own). The key is, this relationship over the investable universe is basically all happening at the end-deciles, not in between. Thus, average book/market decile can be misleading, because not much happens between the 30th and 90th percentiles.
Curiously, market cap is not allocated evenly across all ten book/market deciles because the cutoffs for the size and book/market sorts are constructed once a year using the NYSE. For example, currently, there's 3 times as much market cap in book/market decile 1 than book/market decile 10.
Here's the market-cap-weighted average book/market decile over time (in blue). I'm just calculating a number generated by French's data here, all the work is in this Excel spreadsheet (there's nothing proprietary going on here). So here's that average number calculated each month, and the total return on French's value factor (aka, HML, or High-Minus-Low factor portfolio proxy).
Clearly the low average decile corresponds to big increases in the HML factor returns. If I take that time series, and put the data into deciles, I get a pretty clear pattern for future HML returns:
Basically, the value (ie, HML) factor only pays off when the average book/market decile is in the bottom third of its distribution. Alas, we're not there, we are around the 70th percentile right now. So, here's the average return for the value factor, for that 50% of the time when the average book/market decile is above average (ie, now):
That line is sloping the wrong way if you are banking on a value premium.
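For concreteness, here's a minimal sketch of the calculation described above, assuming you've loaded Ken French's monthly book/market-decile market caps and HML returns into DataFrames (the column layout is simplified):

```python
import pandas as pd

def avg_bm_decile(caps: pd.DataFrame) -> pd.Series:
    """Cap-weighted average book/market decile, one number per month.
    caps has one column per decile (1=growth ... 10=value)."""
    deciles = pd.Series(range(1, 11), index=caps.columns)
    return caps.mul(deciles).sum(axis=1) / caps.sum(axis=1)

def conditional_hml(caps, hml, horizon=12):
    """Average forward HML return when the signal is cheap vs not."""
    signal = avg_bm_decile(caps)
    cheap = signal <= signal.quantile(1 / 3)          # bottom third of history
    fwd = hml.rolling(horizon).sum().shift(-horizon)  # next 12 months of HML
    return fwd[cheap].mean(), fwd[~cheap].mean()
```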
In sum, loading up on the value factor to improve low volatility is dangerous because 1) the relation between book/market and returns is not linear, so simple portfolio averages can be misleading, and 2) the value premium varies predictably with the distribution of the market across book/market deciles, and that signal is currently unfavorable.
In practice, the value premium to passive indices seems about 1-2% since it was popularized around 1990. The 2.8% HML premium from 1928-2013 owes a lot to shorting low book/market stocks, a premium of dubious feasibility, so this number is not a good rule of thumb for the value of tilting towards value. Value ETFs like IWD arose fortuitously around 2000, and so their 3% annual outperformance is all from the bursting of the internet bubble; if those value ETFs went back to 1990, the return premium would be less. I would estimate there's 100 basis points in the value factor, and that's by itself. When you try to use value to add to other strategies, it's not obviously beneficial, and most low vol practitioners are doing this, so you really aren't thinking outside the box.
Monday, August 05, 2013
On the Inverse Correlation between Expected Risk and Return
Imagine a world where expected returns are solely a function of covariances as standard theory implies. Then for assets with specific covariances, the market should give them specific expected returns. People expect risk and return to be positively correlated in this theory.
Instead, Sharpe and Amromin find that people expect volatility and returns to be inversely correlated: when they are bullish they expect low volatility, and when they are bearish they expect high volatility. This is counter to standard theory, which is why it has been in 'working paper' hell for 6 years, because referees find a lot to quibble with when results don't make sense (eg, high vol-low return papers in the 1990s). If you generate a paper like this, it really helps to already have credibility (eg, Fama-French 1992), because otherwise there will be a thousand reasons not to publish it.
On vacation last week I read a great airplane book, You are Not so Smart by blogger David McRaney, which highlights psychological biases in a succinct, interesting way. He noted the work of Finucane, Alhakami, Slovic, and Johnson (2000), in a paper entitled The affect heuristic in judgments of risks and benefits. Slovic is the other coauthor of the famous book on behavioral biases, Judgement Under Uncertainty: Heuristics and Biases. They asked a bunch of people about controversial issues like natural gas, food preservatives, and nuclear power. They divided subjects into groups where some people read only about the risks, while others only read about the benefits of various issues. Needless to say, those exposed to the risk arguments estimated the risks of these technologies to be higher, and those primed by the benefits judged there to be higher benefits. However, those who saw the elucidation of risks also judged there to be lower benefits, and those who read about benefits saw lower risks.
Logically, risk and return are separate, but intuitively, we see them as part of a whole, related in a totally antithetical way. For example, Warren Buffett has famously written that the stocks with the highest return had the lowest risk, because such stocks had the largest 'cushion' in their forecast error. 'Low risk' and 'high return' are both good, so they go together in most people's intuition, as do the bad qualities, 'high risk' and 'low return.'
This is part of the 'halo effect', where people see those who are handsome as smarter, because things with good qualities are seen as intrinsically good, so good in each and every way. Think of a saint with a halo: he was probably good at everything. Indeed, if you give people positive information on one attribute, they will tend to assume the other attributes are correlated. Clearly this makes some sense, as I imagine 'fitness' in a person, in terms of their desirable traits, has a general factor the way IQ helps explain language, math and visual-spatial skills, but it has its limits. This is also why it's hard for people to accept that a lout like Hitler really was nice to dogs and a decent painter, because it seems like that implies you liked his other attributes.
A big theory as to why low volatility stocks outperform high volatility stocks is Asness, Frazzini and Pedersen's Betting Against Beta theory. I'm more in the 'no risk premium plus delusional lottery ticket demand' camp. In my view, people buy high beta stocks incidentally, because these tend to have characteristics amenable to comforting delusions: big stories, potential for big gains. In the Betting Against Beta view, people buy high beta stocks because of the higher return implied by this covariance, and they're constrained in their allocation to equities by rules of thumb and regulations. I think investors are focused on the return and underestimating the risk, but in any case, buying in spite of it.
The Betting against Beta theory does follow more directly from the Capital Asset Pricing Model than my take, but Sharpe and Amromin, and now I learn, Finucane, Alhakami, Slovic, and Johnson are more on my side.
Instead, Sharpe and Amromin find that people expect volatility and returns to be inversely correlated: when they are bullish they expect low volatility, and when they are bearish they expect high volatility. This is counter to standard theory, which is why it has been in 'working paper' hell for 6 years, because referees find a lot to quibble with when results don't make sense (eg, high vol-low return papers in the 1990s). If you generate a paper like this, it really helps to already have credibility (eg, Fama-French 1992), because otherwise there will be a thousand reasons not to publish it.
On vacation last week I read a great airplane book, You are Not so Smart by blogger David McRaney, which highlights psychological biases in a succinct, interesting way. He noted the work of Finucane, Alhakami, Slovic, and Johnson (2000), in a paper entitled The affect heuristic in judgments of risks and benefits. Slovic is the other coauthor of the famous book on behavioral biases, Judgement Under Uncertainty: Heuristics and Biases. They asked a bunch of people about controversial issues like natural gas, food preservatives, and nuclear power. They divided subjects into groups where some people read only about the risks, while others only read about the benefits of various issues. Needless to say, those exposed to the risk arguments estimated the risks of these technologies to be higher, and those primed by the benefits judged there to be higher benefits. However, those who saw the elucidation of risks also judged there to be lower benefits, and those who read about benefits saw lower risks.
Logically, risk and return are separate, but intuitively, we see them as part of a whole, related in a totally antithetical way. For example, Warren Buffet has famously written that those stocks with the highest return had the lowest risk because such stocks had the largest 'cushion' in their forecast error. 'Low risk' and 'high return' are both good, so they go together in most people's intuition, as do the bad qualities, 'high risk' and 'low return.'
This is part of the 'halo effect', where people see those who are handsome as smarter, because things with good qualities are seen as intrinsically good, so good in each and every way. Think of a saint with a halo: he was probably good at everything. Indeed, if you give people positive information on one attribute, they will tend to assume the others attributes are correlated. Clearly this makes some sense, as I imagine 'fitness' in a person, in terms of their desirable traits, has a general factor the way IQ helps explain language, math and visual-spacial skills, but it has its limits. This is also why it's hard for people to accept that a lout like Hitler really was nice to dogs and a decent painter, because it seems like that implies you liked his other attributes.
A big theory as to why low volatility stocks outperform high volatility stocks is the Asness, Frazzini and Petersen's Betting Against Beta theory. I'm more in the 'no risk premium plus delusional lottery ticket demand' camp. In my view, people buy high beta stocks incidentally because these tend to have characteristics amenable to comforting delusions: big stories, potential for big gains. In the Betting Against Beta view, people buy high beta stocks because of the higher return implied by this covariance, and their constrained in their allocation to equities by rules of thumb and regulations. I think investors are focused on the return and underestimating the risk, but in any case, buying in spite of it.
The Betting Against Beta theory does follow more directly from the Capital Asset Pricing Model than my take, but Sharpe and Amromin, and now I learn Finucane, Alhakami, Slovic, and Johnson, are more on my side.
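For concreteness, here's a stylized sketch of the Betting Against Beta portfolio logic: lever the low-beta leg and de-lever the high-beta leg so the two legs' market exposures cancel. This is my simplification for illustration, not AQR's exact construction (they use rank weights and shrunk betas):

```python
import numpy as np

def bab_weights(betas: np.ndarray) -> np.ndarray:
    """Stylized Betting-Against-Beta: long the low-beta half levered up,
    short the high-beta half de-levered, so each leg has an ex-ante
    beta of one and the portfolio is (on paper) market neutral."""
    order = betas.argsort()
    lo, hi = order[: len(betas) // 2], order[len(betas) // 2:]
    w = np.zeros(len(betas))
    w[lo] = (1.0 / betas[lo].mean()) / len(lo)    # levered long leg
    w[hi] = -(1.0 / betas[hi].mean()) / len(hi)   # de-levered short leg
    return w

betas = np.array([0.5, 0.7, 0.9, 1.1, 1.3, 1.6])
w = bab_weights(betas)
print(w.round(3), "portfolio beta ~", round(float(w @ betas), 6))
```

If the empirical SML is flatter than the CAPM predicts, this portfolio earns the spread; in my story it earns it because the overpriced lottery tickets are concentrated in the short leg.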
Sunday, July 28, 2013
My Big Toe
A large problem in physics concerns the nature of quantum reality, where, as Richard Feynman famously said, "if you think you understand it, you don't understand it." A currently popular solution is the many-worlds hypothesis, preferred by Eliezer Yudkowsky among others, which holds that for every quantum event everything actually happens, merely in different branching universes.
Another solution is offered by physicist Tom Campbell, author of My Big TOE (Theory of Everything), who argues that we are in one of Nick Bostrom's simulations, autonomous ems or avatars in a multiplayer game. Campbell argues the collapse of the wave function is only necessary when an observer is watching, in the same way a field in World of Warcraft is not rendered until a player wanders over to it. He goes on to argue the essence of our world is consciousness, which makes reality arise the same way walking around The Matrix creates the pixels showing a specific landscape. This implies there's no sound of a falling tree if there's no consciousness to hear it, among other things.
Yet things like sounds have effects via their acoustic reverberations on many unconscious things like plants and rocks, and these effects need to be incorporated into the environment. Things that don't affect a consciousness still have real effects that we see later, and if the world is simulated precisely because it's too complicated to calculate its emergent properties ex ante, then one needs to run the simulation for these effects all the time anyway, not just when a conscious agent looks. Consciousness seems like an arbitrary way to dictate what actually happens in a simulation, as if the simulators are monitoring conscious agents and their peripherals, which seems implausible.
As an economist, I like the idea that everyone has to economize, in that no one is too rich to ignore time constraints or the size of one's stomach; this is why economics is at some level universal: maximizing an objective function given constraints is something everything has to deal with. Similarly, an ethereal designer would also have resource constraints, because if they had infinite resources they wouldn't be playing games, they'd be trying to figure out how to kill themselves, as it would be painfully boring to exist after a googol years. And no finite computer system has the memory for the infinite number of simulated universes created at every quantum resolution, as implied by the many-worlds hypothesis.
My dog might think I'm god, giving him food from sources he can't understand. Avatars in a sim might consider their programmers gods. In both cases, these gods are neither omnipotent nor omniscient, which is exactly why they appreciate their creations. Even if god were as in the Bible, I can't imagine he's truly omnipotent, because then he would have no reason to create the Earth or people: he would know what would happen, he could figure everything out, making the exercise uninteresting, uninformative, pointless.
So, assuming we are the instantiation of a simulation designed by hyper-intelligent beings in the 10th dimension, they almost surely have resource constraints; they can't allow the simulation's memory usage to explode over time. We don't know the purpose of this game, but there are rules that we see as the laws of physics, including the rule that the simulation not require an infinite amount of memory.
This nicely explains why they chose wave-particle duality. So many things interact over time that keeping track of all those point objects would require a great deal of information; indeed, the information needed without wave-particle duality would overtax our simulators. Luckily, the interaction of wave functions for these particles greatly compresses the amount of information needed to precisely capture their interaction. That is, the wave function does not merely approximate these particles, it completely captures their interaction.
The set of all possible wave functions at any given time forms a vector space, which means different waves can be added together, much as you can combine Gaussian distributions into a single Gaussian and discard all the ones that came before. When there are many particles there is only one wave function, not a separate wave function for each particle.
This is probably one of many reasons our god-like programmers chose this for an aspect of their simulation, our reality. For example, as we move forward in time, having an electron exist as a constant probability mass, as opposed to a moving point, is both consistent with all its potential interactions and uses a lot less memory.
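The Gaussian analogy can be made concrete: to carry forward the sum of independent Gaussian shocks you only need two numbers, a mean and a variance, rather than the full history of draws. A minimal numerical check (illustrative only, my own numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

mu1, sd1 = 1.0, 2.0      # first Gaussian shock
mu2, sd2 = -0.5, 1.5     # second Gaussian shock

# Analytic combination: the sum of independent Gaussians is Gaussian,
# so two parameters replace the entire history of draws.
mu, sd = mu1 + mu2, np.hypot(sd1, sd2)

# Monte Carlo check.
draws = rng.normal(mu1, sd1, 10**6) + rng.normal(mu2, sd2, 10**6)
print(f"analytic : mean={mu:.3f} sd={sd:.3f}")
print(f"simulated: mean={draws.mean():.3f} sd={draws.std():.3f}")
```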
The bottom line is, if we are in a simulation, there needs to be compressive sampling, because quantum effects between particles would otherwise require an infinite amount of memory; the meaning of the wave function is data compression. Particles don't care whether conscious entities are watching: the compression is built into nature as a way to save on electricity, not as a response to clever scientists observing. The fortuitous data compression implicit in wave functions is merely another reason to suspect we are in a simulation, and if we are, it's interesting to think about why this is amusing or informative to a really smart being.
I would like to tell our designers that I'm on to them, and will refrain from further elaboration upon increases in Strength, Wisdom, and Charisma.
Sunday, July 21, 2013
Missing Risk Premium: a Synopsis
I recently gave a presentation on my book, The Missing Risk Premium, and thought it was concise, so I'm sharing it here.
Historical return data contradicting 'expected return positively linearly related to risk' theory:
- Within equities:
- Firm leverage
- Firm profitability
- CAPM beta
- Total Volatility
- Residual Volatility
- Financial Distress/Default metrics and equities
- Penny stocks vs. regular stocks
- IPOs vs. regular stocks
- Country returns in developed countries
- Country returns in emerging markets
- Analyst disagreement across stocks
- High vs. low trading volume
- R rated movies vs. G rated movies
- Volatility/future equity Index returns over time
- Overnight vs. Intraday stock returns
- Bond credit: Distress, Junk, and BBB-A rated Bonds
- Bond duration: post 2 years
- Out-of-the-money options vs. at-the-money options
- S and C corps vs. equity indexes
- Senior vs. Subordinated
- Reinsurance: rebalanced vs. peak peril
- Converts: low and high moneyness
- Merger Arbitrage: stock-financed vs. cash-financed
- Lotto vs. ‘quick pick’ lotteries
- 50-1 horses vs. 3-1 horses
- Mutual funds
- Hedge Funds
- Commodity Trading Advisors (CTAs)
- Currencies
- Futures
- Real Estate
Data consistent with Risk/Return theory:
- Short end of yield curve
- BBB-Treasury credit spread
- Top-line equity return over Libor
How high risk generates low premium:
Winner's curse: excess demand for volatile stocks generates below-average returns (a toy simulation after the list below illustrates the mechanism)
Why the extra demand?
- Overconfidence
- Information costs are lower for risky firms
- High returns have high risk, ergo risk implies higher return fallacy
- Some are risk loving
- Some people are positive skew loving
- Alpha discovery
- Easier sell to clients (amenable to stories)
- Payoffs to fund managers
- Bounded rationality (thinking the SML slopes upward)
- Agency problem (exploit option with fund source)
- Those buying stocks think stocks will rise, in which case higher beta is better
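Here's the toy simulation promised above (my own illustrative parameters, not calibrated to anything): when short-selling is hard, prices are set by the optimistic tail of beliefs, and since disagreement scales with volatility, the most volatile assets end up the most overpriced and earn the lowest subsequent returns:

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_investors = 100, 1000

true_value = np.full(n_assets, 100.0)
vol = np.linspace(5, 50, n_assets)     # disagreement scales with volatility

# Each investor's private value estimate; with shorting constrained,
# the price is set by the optimistic tail, not the average belief.
estimates = true_value + vol * rng.standard_normal((n_investors, n_assets))
price = np.percentile(estimates, 90, axis=0)

exp_ret = true_value / price - 1
print(f"low-vol quintile expected return:  {100 * exp_ret[:20].mean():+.1f}%")
print(f"high-vol quintile expected return: {100 * exp_ret[-20:].mean():+.1f}%")
```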
Why is Flat/downward SML not arbitraged?
- Tradition (the 60-40 stock/bond allocation is a binding constraint)
- Irrationality (others don’t notice SML flat/negative)
- Relative risk (my favorite, other consistent data below)
- Easterlin Paradox suggests happiness is relative
- No one sells low risk, lower-than-average return stocks, because in a relative risk world, one only takes risk if the return is above average
- home bias because you want to outcompete your peers, not strangers
- Relative orientation evolutionarily robust compared to a Constant Relative Risk Aversion utility (which would be a strange coincidence)
- Glucocorticoid levels such as cortisol related to status, not wealth
- Imitation generally dominates figuring things out oneself (eg, fire, calculus), leading to an other-person directed brain
- fMRI identifies neural mechanisms for empathy, social information
- Status is a human universal, greed is not
- Reverse dominance hierarchies in humans common (ie, status more important than wealth)
- Politics about redistribution more than efficiency
Counter: If Risk has no Premium, why take risk?
- 40% of all men reproduce, whereas 80% of women do.
- Men have out-of-the-money option, need to take risk
- Why not take infinite risk? Moderation in all things.
- Life is a complex, nonlinear, dynamic game where every parameter has a local maximum. Radiation, vitamin A, oxygen, tolerance, risk taking, can all be too much or too little.
Are Pre-Modern Societies Socialist?
Many assume that pre-modern society was communistic, like hunter-gatherers, and that these roots give us a socialist intuition. Larry Arnhart argues this simply isn't true: hunter-gatherer societies share big-game meat, but almost everything else is shared communally only within the nuclear family; beyond that, exchange is more quid pro quo.
This comes up a lot, as Emmanuel Todd writes interesting books about European history, as in his 1990 L’invention de l’Europe. He makes the bold claim that political ideology is the result of three things: family structure, literacy, and godlessness. In the modern age with universal literacy and godlessness, political ideologies are mainly projections of a people’s unconscious premodern family values.
As my politics are opposite those of my two siblings, my family structure clearly explains little, but perhaps that's just because I'm truly exceptional.
Thursday, July 18, 2013
Milton Friedman on Behavioral Economics Circa 1978
Ever since Freakonomics and Kahneman's Nobel prize, people have been writing articles about the radical new idea that people are not the lightning-quick calculators of complicated algorithms economists supposedly assumed, but instead, real people! True enough, but it's useful to understand what rational really means, and why it's used so much by economists. Here's Milton Friedman (22:15ish) noting why it's basically about predicting what people do, on average, nothing more:
Also interesting, a fetching young Laura Tyson asks a question around 34:45. I think she's aged well too (Keynesians may be wrong, but they can be cute).
Sunday, July 14, 2013
Beware Discrete Auctions
There's an interesting paper on High Frequency Trading (HFT) by Budish, Cramton, and Shim from the U of Chicago. They set up an interesting model, but then propose at the end that batching eliminates the HFT arms race, both because it reduces the value of tiny speed advantages and because it transforms competition on speed into competition on price. I think that solution is interesting, but would prefer the following:
- add a 20ish-millisecond randomizer to any incoming order (ie, a lag of anywhere from 0 to 20 milliseconds). Thus, if you know you are getting randomized, the marginal value of 1 millisecond is much smaller, because your place in the queue becomes noisy (see the sketch after this list).
- make a rule that those who receive fees (liquidity providers) must leave such orders live for at least 1 second. That would get rid of a lot of flash quotes that try to be too clever and simply clog the bandwidth.
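To see why the randomizer blunts the arms race, here's a back-of-the-envelope simulation (my numbers, not from the paper): with a uniform 0-20 ms exchange delay, a trader who is 1 ms faster goes from winning the queue every time to winning it only slightly more than half the time:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6

jitter_fast = rng.uniform(0, 20, n)   # ms of random exchange delay
jitter_slow = rng.uniform(0, 20, n)
edge = 1.0                            # the fast trader arrives 1 ms earlier

wins = (jitter_fast < edge + jitter_slow).mean()
print(f"P(fast trader is first in queue) = {wins:.3f}  (1.000 without jitter)")
```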
In any case, we have several different exchanges, so if there's a simple way to increase welfare, an exchange that adds such a rule would generate more traffic and, ultimately, imitation. Indeed, that's how paying for liquidity provision (passive quotes) originated, as well as a bunch of other tactics. One might think these are annoying features created in a smoke-filled room, but they were simply the result of giving market participants what they want. A retail trader can always opt out by trading at the Volume-Weighted Average Price (VWAP), which averages out all these intraday shenanigans, so I really don't have any sympathy for those who are bothered by them: if you don't like the intraday game, don't play it!
But I was intrigued by the idea of batch auctions, because I've heard people argue discrete auctions are better than continuous ones. I think they are not: as an extreme example, one can compare the performance of closing vs. opening prices. The close comes at the end of continuous trading; the open follows a period of no trading. What happens?
Well, I looked at over 1000 stocks from 2000 to today, excluding really small, low-priced stocks. The Open-Open returns have slightly higher volatility (2% higher), but more importantly, there's a lot more mean reversion. The graph below shows the future returns (O-O or C-C), sorted each day into deciles by the immediately prior return (O-O or C-C, respectively), then averaged over all those days.
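For anyone who wants to replicate the exercise, here's a sketch of the decile sort, assuming hypothetical wide DataFrames of daily opens and closes (one column per ticker); this is my reconstruction of the procedure, not the original code:

```python
import pandas as pd

def decile_reversion(prices: pd.DataFrame) -> pd.Series:
    """Average next-day return by decile of the prior day's return."""
    rets = prices.pct_change()
    prior = rets.shift(1)
    # Bucket stocks into deciles each day by the prior return...
    deciles = (prior.rank(axis=1, pct=True) * 10).clip(upper=9.999) // 1
    # ...then average the current return within each decile across all days.
    return pd.Series({d: rets[deciles == d].stack().mean() for d in range(10)})

# open_px, close_px = ...                # load your own price panels here
# print(decile_reversion(open_px))       # Open-to-Open
# print(decile_reversion(close_px))      # Close-to-Close
```

Strong mean reversion shows up as a downward slope across the deciles: big prior gains are followed by low returns, and vice versa.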
Basically, markets open at extremes that are quickly erased, allowing the market makers to pocket nice premiums for these temporary imbalances (you can't make money off this if you aren't a specialist). Continuous trading takes such trades away from monopolists, and allows competition to work.
These authors propose much more frequent auctions, to be sure, but the logic remains.
Trayvon Martin and Keynesian Multipliers
Pundits, websites, and news programs had very predictable opinions on the guilt of Zimmerman based on their view about Keynesian multipliers. Those that favor more redistribution, more governmental spending and regulation overwhelmingly sided with Trayvon Martin, the deceased African-American. Clearly that's not a coincidence, but reflects something deeper, mainly a peculiar groupishness.
Now, if prejudices basically drive reason rather than vice versa, what's the most basic prejudice here? It's not obvious our beliefs on fiscal policy and criminal justice would be almost perfectly correlated. Jonathan Haidt wrote a great book on moral confabulations, but I don't think his 6 foundations of political thought help here; for example, both sides think they are 'fair', just for different reasons. Instead, I think it's people choosing in-groups vs. out-groups, the basic building block of the multilevel selection theory outlined by David Sloan Wilson. Thus Harvard elites don't mind quotas because, as Harvard grads, they will get the good jobs anyway. Those in elite positions don't mind quotas, get the support of quota recipients, and can portray themselves as progressive; those in the middle look like selfish bigots, and they lose.
Ultimately, via the logic of Hotelling's median voter theorem, there are two teams, and while they are somewhat inconsistent in their beliefs (free choice in abortions, but not employment or insurance), these beliefs form the most basic coarsening of a set of two self-interested groups. Everyone likes their team, and wants power at the expense of those on the other side. It's Lenin's Who, Whom?
Marx's historical dialectic of class struggle was profoundly wrong: it has never been simply poor vs. rich, but rather complex coalitions that interweave, because the rich need the numbers of the poor and the poor need the capital and skill of the rich. Think about academic elites and the really poor: they are both heavily Democratic. It clearly isn't just rich vs. poor.
Of course, this also means most debates about the multiplier are pointless because if I can predict whether you think the multiplier is large based on your beliefs about Zimmerman's guilt, the real issue has not much to do with econometrics. If you are doing objective work on multipliers, remember no one expects details to be dispositive, the narrative will drive what facts are seen as irrelevant or essential.
Alternatives to Rational Expectations
When Rational Expectations was developed in the 1960s, most people thought it was a classic academic result. Certainly mutual fund managers chuckled at the thought a monkey could outperform a skilled professional. Further, everyone had a stupid relative, neighbor, or co-worker that proved people were not rational. Meanwhile, the Capital Asset Pricing Model (ie, beta) was accepted as true even before data suggested it was.
Yet, we now have almost 100 years of data showing mutual fund managers do not outperform naive benchmarks (Cowles (1933), Malkiel's A Random Walk Down Wall Street (1973)), making the signature prediction of rational expectations surprising and true, the sign of a great theory. Samuelson's (1965) application of the law of iterated expectations to explain Bachelier's (1900) observation that securities prices look like a random walk made for both elegant theory and voluminous evidence. The efficient markets theory reached its current plateau in 1970 with Fama's articulation of what efficient markets mean, but it has never been generally accepted by practitioners or non-economists, and there have always been many economists looking for exceptions to the rule.
The latest refutation of this plank of economics sounds very familiar to a middle-aged economist reading this stuff:
Psychologists ... say that markets are not immune from human irrationality, whether that irrationality is due to optimism, fear, greed, or other forces.... "There's this tug-of-war between economics and psychology, and in this round, psychology wins," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology...What did they show? That people make inconsistent bets when presented with subsets vs. supersets: logically, the odds that Hillary Clinton (a Democrat) wins the Presidency cannot exceed the probability that a Democrat wins the Presidency, yet in surveys this ordering doesn't always hold.
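The logic being violated is just the law of total probability: a specific event can never be more likely than the superset containing it,

$$P(\text{Hillary wins}) = P(\text{Dem wins}) \cdot P(\text{nominee is Hillary} \mid \text{Dem wins}) \le P(\text{Dem wins}).$$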
It's long been known that people over-estimate specifics based on things like the availability heuristic, as when people more readily judge an introverted woman to be a librarian than an accountant because it fits an archetype; this has been discussed since the 1970s (see Kahneman, Slovic, and Tversky's Judgment Under Uncertainty: Heuristics and Biases). This is patently illogical, and it spawned the behavioralist revolution, which has now catalogued about 100 such biases.
Yet it's one thing to say people are inconsistent, another to say liquid markets are. While a random survey of retail investors might put the equity premium at something silly like 10%, or the odds of a US default at 5%, that doesn't mean derivatives are priced that way, because knowledgeable people with capital tend to arbitrage these beliefs out of the market. For example, Snowberg and Wolfers showed, looking at millions of races, that horse-race odds are illogical, and this has been known since 1949 (note: the favorite has the highest expected return). Yet, to the extent such prices reflect this inconsistency, it isn't large enough to make large amounts of money. Does this translate to anything about the general market, or tradable stocks like those in the Russell 3000? Almost always, no.
These findings strike me a bit like a finding listed in the eulogy of a Harvard psychologist who sounds like just an awesome mensch, where they noted that
while teaching at the University of Virginia, he devised an experiment to study secrecy and obsessions. Enlisting college students to play card games at a table, he instructed some to play footsie under the table and tell everyone they were doing so. Others had to keep their footplay secret. One result? “The subjects who made secret contact with their partner were significantly more attracted to their partners,” Dr. Wegner told the Globe in 1994, a finding that, among other things, shed light on the allure of affairs outside relationships.Now, that is very interesting, but it doesn't really contradict any fundamental conception of humanity, and so it goes with the behavioral finance results: neat, but not profound.
The problem with the alternatives to Rational Expectations is that they generally point out that in low-stakes environments people make logical mistakes in predictable ways, but they hardly ever explain the yield curve, the equity premium, or anything else important; rather, they explain the poor returns to 100-1 horses or some other curiosity. I like betting on horse longshots at the track and implicitly pay a premium for that, but it probably costs less than what I spend on beer at the track, so it's really not complicated.
Now, one might think that evolutionary biology provides a fruitful alternative, in that it tries to look at predictable expectations from the standpoint of its evolutionary success. I think that's great, and indeed, think that a relative utility function makes more sense because, among other reasons, it is more evolutionarily robust.
David Sloan Wilson, who is well-known for showing how religion is a 'rational' adaptive solution within a multi-level selection process, has organized a special issue of the Journal of Economic Behavior & Organization entitled 'Evolution as a General Theoretical Framework for Economics and Public Policy'. A lot of those articles are interesting, but none really that radical.
For example, there's the article arguing economists should use multi-level selection, as opposed to the atomistic selection implicit within abstract markets. That is, in evolutionary biology, there's a big debate about where selection occurs. Before Richard Dawkins became fixated on the specter of Christian Fundamentalists, he wrote The Selfish Gene to rebut the idea that selection occurs at the level of a species, championing the ideas of George C. Williams and John Maynard Smith. These biologists noted casual speakers would talk about 'what is good for a species', but clearly gazelles don't act as a group, but as individuals, and within individuals, as cells, and then as genes. Dawkins argued that it's selfishness at the lowest level, the gene, ergo the title. Things have gotten rather complicated since, and it's clear selection takes place at more than one level.
For example, take the Hymenoptera order, which includes ants. There's a joke about socialism: good theory, wrong species (ie, ants, not humans). For these species, their genes actually assist in their groupishness, and so they willingly kill themselves for the group all the time, because sisters are more related to each other than a queen is to her offspring. Thus, the selection for haplodiploidy could take place at one level (the swarm), not the gene level, and this then affects the payoffs for individuals and thus genes. Selection seems to occur at the lowest level, the genes, but also at cells, the organism, and among humans, in culture. Economist Herbert Gintis has argued that societies that promote pro-social norms, as in group selection, have higher survival rates than societies that do not.
I think that's all super, and I suspect some societies might have different intrinsic levels of individualism depending on their evolutionary environment. For example, here's a nice discussion of how the Western concept of marriage outside of extended families led to a level of individualism rather unique to Western Civilization. In contrast, in the Middle East a lot of people marry cousins, and this breeds greater groupishness because the extended family is so obviously genetically oriented, making outsiders very clear (ie, altruism towards one's clan, indifference at best towards those outside it).
Yet it's not clear how much these insights are really new as applied to markets and theories of the firm. Many theories apply to how oligopolies and monopolies behave, and these are quite different from what one sees under perfect competition, so the idea suggested in Wilson's special issue, that selection occurs at the industry, firm, or worker level, is not that revolutionary. Game theory often looks at mechanism design, the stability of coalitions, and the conditions for various equilibria. I get the sense they think the only behavior economics looks at is perfect competition. It's not, and hasn't been for 100 years.
It's generally accepted that one needs the system to obey two constraints:
- Individuals must receive a positive return to their action
- Individuals must be doing their best given their information and beliefs
These are sensible restrictions, so the issue is whether behavior is rational or not, from the agent's perspective. Rational expectations merely adds the requirement that any behavior should be consistent with zero abnormal profits for the simple and sensible reason that it is very difficult to generate abnormal profits in most markets. I still think that's a good bias, because it's usually true.
Sunday, July 07, 2013
Stevenson and Wolfers' Flawed Happiness Research
Russ Roberts had a podcast a couple weeks ago where he interviewed Betsey Stevenson and Justin Wolfers, primarily about their research on the Easterlin Paradox, and it highlighted what's wrong with so many academic debates.
To review, in 1974 USC professor Richard Easterlin found that within a given country people with higher incomes were more likely to report being happy. However, between developed countries, the average reported level of happiness did not vary much with national income per person. Similarly, although income per person rose steadily in the United States between 1946 and 1970, average reported happiness showed no long-term trend and declined between 1960 and 1970. Theoretically, utility is generally assumed to be increasing at a decreasing rate (eg, log(x)). So, if you have twice as much GDP/capita, you should be happier, but in practice it doesn't seem to work this way.
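To spell out the standard model's prediction: with log utility, each doubling of income buys the same increment of utility at every income level,

$$u(c) = \log c \;\Rightarrow\; u(2c) - u(c) = \log 2 \quad \text{for any } c,$$

so reported well-being should keep drifting up as GDP/capita compounds, and it's that predicted drift which fails to show up in the long-run data.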
I agree with Easterlin, and the relative-status utility function is the key to my book, The Missing Risk Premium. Utility as Stevenson-Wolfers see it is a necessary and sufficient condition for an omnipresent risk premium: if one exists, then so does the other. Yet the risk premium seems to show up in only three places, and is usually missing (thus my book title), if not going the wrong way. Furthermore, evolution favors a relative utility function over the standard absolute utility function, and the evidence for this is found in ethology, anthropology, and neurology. Economists from Adam Smith through Karl Marx, Thorstein Veblen, and even Keynes focused on status, one's relative position in society, as a motivating force in individual lives (this was before mathematical utility functions in the 1950s made the profession ignore relative position). So this isn't just a crazy idea championed by a wacky Easterlin guy, or just wacky me.
Stevenson and Wolfers are married coauthors, and they have published at least three papers on the topic, all refuting the Easterlin finding. Wolfers states 'most economists have our view, that there is no Easterlin paradox and there probably never was.' I'm sure he is correct that most economists share his views, but only because they always have: if economists used a relative utility function, many (most) seminal models would become ambiguous, and the whole field would lose much of its foundation. Interviewer Russ Roberts is the Merv Griffin of economics interviewers, agreeable to a fault, and so never presses them on what specifically causes the Easterlin crowd to see things so differently. As Stevenson is now part of the prestigious yet irrelevant Council of Economic Advisers, and Wolfers has more affiliations than your average CFA (University of Michigan, Brookings, CAMA, CEPR, CESifo, IZA and NBER), these two represent best practices in economics. It would be useful to see what the 'best of the best' do when applying their laser-logic.
First, there's the paper that made this May's American Economic Review, Subjective Well-Being and Income: Is There Any Evidence of Satiation? Here they document two things. First, that cross-sectionally, higher GDP/capita generates higher happiness. Strangely, they find that the income-happiness effect is at least twice as strong among richer countries, which no one thinks is true (the effect should decrease as wealth increases), and further, using one set of data, the effect of income on happiness is negative. The authors note, however, that this is merely because of one country, the Philippines. Strange that 2 of the 5 observations here were significant in a direction no one argues is true. If one single country explains the negative effect, could a similar explanation be responsible for the positive effect among the rich countries? The data look disputable (one chart shows Denmark and Norway above Italy and Spain, which would be unusual). That said, by itself it does support their assertion.
Their second set of findings concerns cross-sectional data within a country. Easterlin did not dispute this, however: given positional goods like mates and lakefront property, relative wealth should matter. Thus S-W spend a lot of time refuting findings that look somewhat relevant to the Easterlin Paradox, and are definitely supportive of their view, but if you are a smart economist you should understand this is irrelevant to anything but a caricature of the Easterlin idea. I don't think they are fools or consciously disingenuous, just really good at playing the game: they have convinced themselves that their academic confabulations are objective science as opposed to tendentious rhetoric. This doesn't move the debate forward, but it does help their status in their tribe, which is what most economic research is really about and why you don't have to follow most of it.
So, what about the original Easterlin note, that among developed countries, where people are more worried about obesity than malnutrition, we aren't getting happier as GDP/capita rises? Well, Sacks, Stevenson, and Wolfers (2013) address this point directly, and show a chart with happiness on the y-axis (vertical) against log GDP normalized for country and 'waves'. Now, 'waves' is the name for the particular set of years in which a specific survey used identical phrasing and protocols, usually lasting a handful of years; each such set was assigned a fixed effect. Here is the resulting data, with their 'effect' drawn in as a line just in case it isn't obvious to you.
When an economist tells you a symmetric ovoid contains a highly significant trend via the power of statistics, don't believe them: real effects pass the ocular test of statistical significance (ie, they look like a pattern). Here's another view of the data we are interested in, the change in log(GDP)/capita over time within a country versus the change in happiness, using a variety of surveys:
Again, for each survey, happiness is on the y-axis and income on the x-axis. S-S-W add little lines trying to show a pattern they are sure is there.
Now, two can play this game, as from 2010 Easterlin and co-authors have data with similar blobs, but they draw downward-sloping lines over them.
I think it's best to say, no relation, and to stop drawing lines on blobs.
In any case, the biggest problem with the Sacks, Stevenson and Wolfers analysis is that they estimate a short-term relationship between life satisfaction and GDP, rather than the long-term relationship. Surely over an economic cycle, say between 2007 and 2009, or 1999 and 2002, income is correlated with general anxiety in the predictable way. Only over decades does the null effect of income on happiness arise, and this variation is mostly removed via wave fixed effects, which are basically time dummies for 5-year groupings.
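A toy simulation of this point (my own construction, not S-S-W's data): if happiness responds only to cyclical income swings and not at all to long-run growth, a regression with country and wave fixed effects still recovers a big positive slope, because the fixed effects absorb exactly the long-run variation that matters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_country, n_wave = 30, 10

trend = 0.2 * np.arange(n_wave)                       # common long-run growth
cycle = rng.normal(0, 0.05, (n_country, n_wave))      # business-cycle swings
log_gdp = trend + rng.normal(0, 0.3, (n_country, 1)) + cycle

# Long-run growth buys no happiness here; only cyclical swings move it.
happy = 2.0 * cycle + rng.normal(0, 0.1, (n_country, n_wave))

def slope(x, y):
    return np.polyfit(x.ravel(), y.ravel(), 1)[0]

def within(z):
    # Two-way demeaning = country and wave fixed effects.
    return z - z.mean(1, keepdims=True) - z.mean(0, keepdims=True) + z.mean()

print(f"pooled slope (long-run variation kept): {slope(log_gdp, happy):.2f}")
print(f"country+wave FE slope (cyclical only):  {slope(within(log_gdp), within(happy)):.2f}")
```

The pooled slope comes out near zero (the long-run truth), while the fixed-effects slope comes out strongly positive (the cyclical artifact).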
While I think people who aren't fighting for basic necessities are focused primarily on status and the things it can buy, I don't think this implies we should be indifferent to growth. That would be the naturalistic fallacy, that 'is' implies 'ought.' We should aspire higher than envy, which paradoxically seems to elevate greed, but really just forces us to be grateful for things like the internet, strawberries in winter, and five-blade razors that we take for granted once everyone has them. I note many writers I otherwise admire, usually libertarian leaning, are quite averse to the Easterlin conclusion, thinking it will lead us to adopt luddite policies because growth would not matter in such a world (see Ron Bailey here, or Tim Worstall there).
The key is that while I admit my relatively impoverished grandfather was probably as happy as I am, I'm also very glad I live now: growth is good in spite of my envious homunculus. Further, as productivity growth is the natural consequence of free minds and markets, flattening growth means not merely focusing on 'more important things' but rather squelching freedom, and liberty is more important than equality because it's feasible while allowing a great deal of the latter. In contrast, true equality is only possible via force, because people are not equal in effort or ability. I mean, how would one prevent Larry Ellison or LeBron James from being richer than everyone else? The only way would be to destroy new companies or merit-based systems; this is why the worst rise to the top in hierarchies based on non-P&L signals, with examples ranging from smarmy politicians to clueless executives in large regulated corporations and, of course, genocidal socialists.
Monday, July 01, 2013
Gettysburg and American Exceptionalism
150 years ago, Gettysburg was fought. Here is War Nerd on the battle:
You know how many civilians were killed in the whole battle of Gettysburg? One. I dare anybody from any other country anywhere, any time, to find me a battle with over 50,000 military casualties—and one civvies died. One! It’s incredible. People don’t realize how amazing that is. Those were supermen, there’s no other explanation. You read their letters and they write in complete sentences, they even have great handwriting, even the paragraphs work.
Sunday, June 23, 2013
A Premium for Negative Skew?
Xiong, Idzorek, and Ibbotson have a new paper coming out in the JPM showing that mutual funds with the highest tail risk (ie, highest probability of extreme downside returns) have higher returns. That is, there's a positive risk premium to negative skew.
This is rather curious.
The only thing I've seen in this field related to this found the opposite. That is, in a 2006 JoF paper, Kosowski, Timmermann, Wermers, and White bootstrapped the alphas of the entire universe of U.S. domestic equity mutual funds, and found that the top decile had much more positive skewness than the median, while the bottom decile had more negative skewness.
I don't have the data, but the two findings can't both be right.
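For intuition on what Kosowski et al. did, here's a stripped-down sketch of a residual bootstrap under a zero-alpha null for a single simulated fund; the actual paper runs this jointly across the whole cross-section of funds, which this toy version ignores:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 120  # ten years of monthly data

# Simulated fund with true alpha = 0 (hypothetical data, not theirs).
mkt = rng.normal(0.005, 0.04, T)
fund = 1.0 * mkt + rng.normal(0, 0.02, T)

beta, alpha = np.polyfit(mkt, fund, 1)        # OLS slope and intercept
resid = fund - (alpha + beta * mkt)

# Bootstrap under the null: resample residuals, rebuild returns with the
# alpha forced to zero, and re-estimate alpha on each pseudo-sample.
boot = np.empty(2000)
for i in range(2000):
    r = beta * mkt + rng.choice(resid, T, replace=True)
    boot[i] = np.polyfit(mkt, r, 1)[1]

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {alpha:.4f}; zero-alpha 95% band = [{lo:.4f}, {hi:.4f}]")
```

The skewness of the bootstrapped alpha distribution is what lets one judge whether extreme observed alphas look like luck or skill.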
Models aren't Optional
Models get a bad rap, and the key is remembering the golden mean: moderation in all things. Macroeconomics, string theory, and climate studies all produce highly complicated models that are presented by our best academics as thoroughly vetted and informative. Yet for predictions in the real world, they stink. This leads people to say models are all garbage and we should all be engineers.
Clearly a bad model is worse than no model, but if you are operating in some domain, you have, implicitly or explicitly, a model of that domain. In that case, it's simply nice to write it down as clearly as possible to better understand what you are doing.
I came across the Good Regulator theorem (1970) by Roger Conant and Ross Ashby, which states that "Every good regulator of a system must be a model of that system".
the theorem shows that, in a very wide class (specified in the proof of the theorem), success in regulation implies that a sufficiently similar model must have been built, whether it was done explicitly, or simply developed as the regulator was improved. Thus the would-be model-maker now has a rigorous theorem to justify his work.I don't really follow the proof, but I think it's definitely true that to regulate something well, you need a good model of that something.
Sunday, June 16, 2013
UK Austerity in the 19th Century
I was watching Bloggingheads.tv, where Mark Blyth noted that, contra Reinhart and Rogoff, Great Britain had a very high debt/GDP ratio at the beginning of the 19th century and then proceeded to have one of the best centuries of absolute growth in the history of civilization. Thus, debt is, if anything, salutary at really high levels, so the government should run higher deficits, etc. I've heard this argument quite a bit. See this chart from Wikipedia:
It peaks after the Napoleonic wars at 250%, well above the 90% number everyone has been talking about in the US. Note the clear decline in debt from Waterloo to World War 1. This is because the government started to perpetually run a surplus (see chart below, taken from here).
After the big wars, they would run surpluses regardless of the business cycle. So, was the prosperity from 1815-1914 caused by the debt in 1815, or the subsequent surpluses? Given the debt was used to make war, I don't think we can say it funded the public goods like roads that then generated 1000% returns.
I like Kevin Williamson's argument that the US is bound to default, and that's a good thing. Increasing the state doesn't create prosperity, rather, it first takes people out of the productive sector, then increases resentment because people given housing vouchers or make-work jobs know they are low status, undeserving relative to a billion other souls in the world. Better to let people find their way by getting out of the way.