Tuesday, September 21, 2010

Low Volatility and Beta 1.0 Portfolios

A low volatility portfolio targets the lowest volatility, or lowest beta, stocks. I have found that using a 3-4 factor model to minimize variance generates the lowest volatility portfolios, but the advantage of this approach is only about 2-4% in annualized volatility, and given that most investors do not understand factor analysis, merely taking the lowest volatility or lowest beta stocks generates a decent approximation while retaining a great deal more intelligibility. The alternative I proposed yesterday was the beta 1.0 portfolio. The difference between the low volatility and beta 1.0 approaches is shown below:


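For readers curious about the factor-model minimum variance construction mentioned above, here is a minimal sketch of the mechanics. The loadings, factor covariance, and idiosyncratic variances are simulated placeholders, not the author's data or factor definitions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_factors = 500, 3

B = rng.normal(1.0, 0.3, size=(n_stocks, n_factors))   # hypothetical factor loadings
F = np.diag([0.04, 0.01, 0.01])                        # hypothetical factor covariance
idio = rng.uniform(0.02, 0.20, size=n_stocks)          # idiosyncratic variances
Sigma = B @ F @ B.T + np.diag(idio)                    # implied stock covariance matrix

# Minimum-variance weights: w proportional to Sigma^-1 * 1, scaled to sum to one
ones = np.ones(n_stocks)
w_mv = np.linalg.solve(Sigma, ones)
w_mv /= w_mv.sum()

# The simpler proxy: equal-weight the 100 lowest total-volatility names
vols = np.sqrt(np.diag(Sigma))
w_lv = np.zeros(n_stocks)
w_lv[np.argsort(vols)[:100]] = 1.0 / 100

for name, w in [("factor min-variance", w_mv), ("lowest-vol 100", w_lv)]:
    print(f"{name}: {100 * np.sqrt(w @ Sigma @ w):.1f}% vol")
```

A real implementation would add long-only and concentration constraints; the sketch is only meant to show why the naive lowest-volatility subset lands in the same neighborhood as the optimized portfolio.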
Beta 1.0 merely takes those stocks with betas nearest to 1.0. That is, every 6 months I estimated betas for every listed US stock with a sufficiently high market cap (about 2500 stocks) and monitored the returns, putting merged or delisted stocks back into the index. Data are from CRSP, Compustat, and Bloomberg, and when constructed over the past they include dead companies, so this is a survivorship-free dataset. Prior to 1997 I used monthly returns in estimating betas; after that, daily returns. The beta 1.0 portfolio has had a portfolio beta very near 1.0 in real time (about 1.05). The low beta portfolio merely took the 100 lowest beta stocks (dark blue obs) every six months, doing the same thing.
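A simplified sketch of that selection rule, with simulated returns standing in for the CRSP/Compustat/Bloomberg panel (the function and variable names are illustrative, not the author's code):

```python
import numpy as np
import pandas as pd

def estimate_betas(stock_rets: pd.DataFrame, mkt_rets: pd.Series) -> pd.Series:
    """OLS beta of each column against the market over the estimation window."""
    mkt = mkt_rets - mkt_rets.mean()
    cov = stock_rets.sub(stock_rets.mean()).mul(mkt, axis=0).mean()
    return cov / (mkt ** 2).mean()

def select_portfolios(betas: pd.Series, n: int = 100):
    """Pick the n betas closest to 1.0 (beta 1.0 portfolio) and the n lowest (low beta)."""
    beta10 = (betas - 1.0).abs().nsmallest(n).index
    low_beta = betas.nsmallest(n).index
    return beta10, low_beta

# Example with simulated daily returns (dates x tickers), rebalanced each period
rng = np.random.default_rng(1)
dates = pd.bdate_range("2009-01-01", periods=252)
mkt = pd.Series(rng.normal(0.0003, 0.01, len(dates)), index=dates)
true_betas = rng.uniform(0.2, 2.0, 2500)
stocks = pd.DataFrame(np.outer(mkt.values, true_betas) + rng.normal(0, 0.02, (len(dates), 2500)),
                      index=dates, columns=[f"S{i}" for i in range(2500)])

betas = estimate_betas(stocks, mkt)
beta10_names, low_beta_names = select_portfolios(betas)
```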

The differences in their mean returns is as follows:

US Returns Since 1962

                         Beta 1.0    Low Beta    S&P500
Avg. Ann. Geo. Return    11.4%       10.4%       9.3%
Ann. StDev               17.4%       13.1%       15.1%
Beta                     1.04        0.58
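For reference, the statistics in the table can be computed from a monthly return series roughly as follows (an illustrative helper, not the code used to produce the figures above):

```python
import numpy as np

def annualized_stats(monthly_returns, benchmark_monthly_returns=None):
    """Annualized geometric return, volatility, and (optionally) beta from monthly returns."""
    r = np.asarray(monthly_returns, dtype=float)
    years = len(r) / 12.0
    geo = np.prod(1.0 + r) ** (1.0 / years) - 1.0     # annualized geometric return
    vol = np.std(r, ddof=1) * np.sqrt(12.0)           # annualized standard deviation
    out = {"geo_return": geo, "ann_stdev": vol}
    if benchmark_monthly_returns is not None:
        b = np.asarray(benchmark_monthly_returns, dtype=float)
        out["beta"] = np.cov(r, b, ddof=1)[0, 1] / np.var(b, ddof=1)
    return out
```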


If people cared primarily about beta, a proposition still dominating Business Schools, they should invest in low volatility or low beta portfolios. You can get a much lower 'risk' at about the same return, perhaps even a little higher. Merely cutting out the high volatility stocks generates a better return too, which is why low volatility portfolios, such as Robeco's, should be attractive to institutional investors trying to maximize a Sharpe ratio.
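As a back-of-the-envelope illustration of the Sharpe ratio point, using the table's figures and an assumed 5% risk-free rate (the post does not specify one, so treat these as rough numbers only):

```python
# Rough Sharpe ratios from the table above; the 5% risk-free rate is an assumption
# for illustration, and the geometric returns are used directly in the numerator.
rf = 0.05
portfolios = {"Beta 1.0": (0.114, 0.174), "Low Beta": (0.104, 0.131), "S&P500": (0.093, 0.151)}
for name, (ret, vol) in portfolios.items():
    print(f"{name}: Sharpe ~ {(ret - rf) / vol:.2f}")
```

On these numbers the low beta portfolio has the best ratio and the index the worst, consistent with the argument above.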

Note the Beta 1.0 stocks have a slightly higher return than the low beta stocks (and a slightly higher volatility). This is not really surprising, in that the perverse volatility effect is generally that high volatility or high beta stocks underperform massively. From low to mid beta, you actually get a modest return improvement.


Above we see the histogram of relative returns, and the relative return distribution is in fact fatter for the low beta portfolio compared to the beta 1.0 portfolio. 'Low risk' is risky if you are benchmarking against the index. This was especially pronounced in the tech bubble of 2000 (below), when high beta stocks greatly outperformed, and anyone plying a low beta strategy probably lost their job (unless they were Warren Buffett, who survived the tech underperformance).
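To make the benchmark-relative notion concrete, here is one way to summarize returns measured against the index. The inputs are hypothetical monthly series, and the particular statistics (tracking error, excess kurtosis, worst trailing-year shortfall) are my choice of illustration rather than the exact measures plotted above:

```python
import numpy as np

def relative_return_profile(port_monthly, bench_monthly):
    """Summarize returns measured against the benchmark rather than in absolute terms."""
    rel = np.asarray(port_monthly, dtype=float) - np.asarray(bench_monthly, dtype=float)
    tracking_error = np.std(rel, ddof=1) * np.sqrt(12.0)                        # annualized
    excess_kurtosis = np.mean((rel - rel.mean()) ** 4) / rel.var() ** 2 - 3.0   # tail fatness
    worst_12m = np.min([rel[i:i + 12].sum() for i in range(len(rel) - 11)])     # worst relative year
    return {"tracking_error": tracking_error,
            "excess_kurtosis": excess_kurtosis,
            "worst_12m_relative": worst_12m}
```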


The bottom line is that if you are concerned about relative performance then the Beta 1.0 portfolio is a smart alternative to the low volatility approach, which has as its greatest advantage avoiding those lousy lottery-ticket stocks that over the long run have proven disastrous across every major equity market.

Now you might say, 'What kind of idiot measures risk relative to the S&P? Don't I only care about my consumption, my wealth?' I would say most people care more about relative returns than absolute returns. CAPM pioneer Bill Sharpe consults for pension funds evaluating asset managers and states his first objective is that 'I want a product to be defined relative to a benchmark' (see Tanous). In fund manager Kenneth Fisher's book The Only Three Questions That Count, the index entry next to Risk reads 'see Benchmarking.' When asked about the nature of risk in small stocks, Eugene Fama noted that in the 1980s 'small stocks were in a depression', and Merton Miller noted that the underperformance of the Dimensional Fund Advisors small-cap portfolio against the S&P500 for 6 years in a row was evidence of its risk. But smaller stocks actually had comparable total returns, and higher returns relative to the risk-free rate, in the 1980s compared to the 1970s. It was only relative to their benchmark (the S&P500, large cap stocks) that they had 'poor' returns, highlighting that even Fama and Miller's practical intuition on risk is purely relative, and these are champions of the standard model. Needless to say, other people, especially fund managers, are keenly aware of their year and life-to-date performance relative to the S&P500.

It seems reasonable to presume that for investment professionals and academics, risk is a variation in return relative to a benchmark. This fact has several important implications for investing optimally, whether you act that way or not.

14 comments:

John said...

I wonder how close a portfolio you could get to the beta 1.0 portfolio by accounting for tail risk. For instance, a mean-cvar optimization would put a greater emphasis on the stocks that have lower conditional value at risk like these beta 1.0 stocks seemingly do.
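(For readers unfamiliar with the term, a minimal empirical CVaR calculation looks something like the sketch below; this is just the risk measure itself, not the mean-CVaR optimizer the comment proposes.)

```python
import numpy as np

def empirical_cvar(returns, alpha=0.05):
    """Conditional value at risk: average loss in the worst alpha share of periods."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(r))))
    return -r[:k].mean()
```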

kyle said...

Here's what I don't understand about the interaction between this analysis and your stuff on the risk premium -- if, on average, one can't really earn more than the risk-free rate by investing in equities, then why should one even bother investing in equities at all, as opposed to just buying Treasurys (or, for those of us with a low enough level of wealth, bank deposits with even better yields)? Does cutting out the higher-vol stocks really lead to a yield higher than the risk-free rate, on average? I feel like there's something I'm missing.

At any rate, thanks for sharing your views here and in your book -- very thought-provoking for a wannabe/larval PM like myself.

Anonymous said...

Where did you get the data for the ~2500 stocks you are testing back to 1962? I'd like to look at it if you could point us in the right direction.

Also, is the data survivorship-bias free?

kyle said...

Another question -- over what period do you estimate the betas?

Eric Falkenstein said...

data are survivorship free...my datasets had dead companies, including delisting returns...if they died, their return was noted...betas using monthly data used historical 36-60 months prior to 1997, daily data for prior year after that
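(A rough sketch of that windowing rule, assuming for simplicity a single daily return panel from which monthly returns are compounded; this is not the author's actual code:)

```python
import pandas as pd

def beta_estimation_window(daily_returns: pd.DataFrame, asof: pd.Timestamp):
    """Return the slice of returns used to estimate betas at a rebalance date."""
    if asof < pd.Timestamp("1997-01-01"):
        start = asof - pd.DateOffset(months=60)     # trailing 36-60 months, monthly data
        daily = daily_returns.loc[start:asof]
        # compound daily observations into calendar-month returns
        return (1 + daily).groupby([daily.index.year, daily.index.month]).prod() - 1
    start = asof - pd.DateOffset(years=1)           # trailing year, daily data
    return daily_returns.loc[start:asof]
```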

Anonymous said...

Can you add a column comparing to equal weighted s&p500?

Anonymous said...

Reading comments here and on the previous article, it just goes to show, Eric, that you can lead a horse to water but can't make him drink. I'm guessing that most readers will still try and bet on individual equities based on noise from media commentators rather than logical, systematic common sense. I would love to see Vanguard pick up on your concepts. Well done as usual.

travis said...

Is the standard deviation for the SP500 correct? 33% seems pretty high.

Eric Falkenstein said...

travis: thanks! big typo.

Anonymous said...

*that you can lead a horse to water but can't make him drink.*

As the author of this blog is fond of saying, if you make strong claims, you've gotta back them up with strong arguments. Comparing equal weight to market weight is not fair in light of what has happened these last ten years. Given that the author has the data needed to correct this, it makes it slightly annoying at best and suspicious at worst.

Eric Falkenstein said...

I do not have the monthly constituents of the S&P500 to create my own equal weighted index. If you can point me to the reference, I'd be happy to use it. I did create minimum variance subsets of the indices every 6 months for the past 12 or so years, but after I created these subsets, I didn't keep those lists.

But I find the interest in the equal weighted S&P 500 to be similar to using, say, the Russell 3000 as opposed to the S&P500. There are many indices, and some do better than others. I think the S&P500 is the most common, so that's the benchmark presented. I'm not selling anything; I'm telling you exactly what I did so you can do it yourself. Use this idea if you want, and if you don't, that's OK too.

Anonymous said...

I'm Anonymous from 6:40PM. Eric: thanks, the first paragraph clarifies things a lot (you gotta admit that if you don't mention this point, your comparison looks strange to the untrained eye).

Anonymous said...

How did you define the cutoff for "stocks with betas nearest to 1.0"?

Eric Falkenstein said...

100 closest to 1.0