Thursday, September 30, 2010

What is the Carry Trade?

I recently mentioned the Carry Trade, and it wasn't obvious what I was talking about, so here's a short description. The Carry Trade in currency markets is when you borrow in a currency with a low interest rate, and then invest in a currency with a higher interest rate. If the exchange rate does not change, this generates a positive return. Uncovered Interest Rate Parity is a theory that connects current to future spot exchange rates. It states that you have two ways of investing, which should be equal. First, you can invest in your home country at the riskless rate. So if the US interest rate is 5%, you can make a 5% return in one year, in USD. Alternatively, you can buy, say, yen, invest at the yen interest rate (each currency has a different risk-free rate), and then convert back to USD when your riskless security matures. For these to be equal, you need something like:

r_usd = r_yen + Expected Appreciation in Yen

where r_usd is the US interest rate, etc. So, if you make 5% in USD, an American investor should expect that same return via yen: the yen interest rate plus the expected appreciation (or depreciation) of the yen against the dollar. If the interest rate in yen is 1%, this means one expects the yen to appreciate by 4%. When the foreign interest rate is higher than the US interest rate, risk-neutral and rational US investors should expect the foreign currency to depreciate against the dollar by the difference between the two interest rates, leaving investors indifferent between borrowing at home and lending abroad, or the converse. This is known as the uncovered interest rate parity condition, and it is violated in the data except in the case of very high inflation currencies. In practice, higher foreign interest rates predict that foreign currencies appreciate against the dollar, making investing in higher interest rate countries win-win: you get appreciation on the currency, plus higher riskless interest rates while in that currency.
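A minimal sketch of this arithmetic, using the post's illustrative 5% and 1% rates (not market data):

```python
# Uncovered interest rate parity, per the equation above.
# Rates are illustrative, not market data.
r_usd = 0.05  # 1-year US riskless rate
r_yen = 0.01  # 1-year yen riskless rate

# Under UIP, expected yen appreciation equals the rate differential.
expected_yen_appreciation = r_usd - r_yen  # 4%

# The two investment paths, per $1, converted back to USD:
usd_path = 1 + r_usd
yen_path = (1 + r_yen) * (1 + expected_yen_appreciation)
print(f"USD path: {usd_path:.4f}, yen path: {yen_path:.4f}")
```

The two paths agree up to the small cross-term (1.0500 vs 1.0504); the parity is exact in continuously compounded rates.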

Now, the expected returns via the two investment paths can differ according to risk, so academics have been trying to explain this pattern via 'risk': looking at the yen, the dollar, or various European currencies in the 1970s, and trying to tie each to some measure of a home currency's risk factor, such as consumption growth or the stock market.

Like high returns to low volatility stocks, it is difficult, but not theoretically impossible, to make sense of the higher currency returns to high interest rate currencies. Robert Hodrick wrote a technical overview of the theory and evidence of currency markets in 1987. He summed up his findings in this paragraph:

We have found a rich set of empirical results... We do not yet have a model of expected returns that fits the data. International finance is no worse off in this respect than more traditional areas of finance.

Hodrick looked at CAPM models, latent variable models, conditional variance models, models that use expenditures on durables or on nondurables and services, and Kalman filters. None outperformed the spot rate as a predictor of future currency prices. Hodrick leaves off with the idea that 'simple models may not work well'.

For the next 20 years many hedge funds specialized in the 'carry trade', which was as simple as it was successful: lend in the high interest rate currency, enjoying the high riskless rate and any appreciation in the spot rate; borrow in the low interest rate currency, making money as that debt depreciates over time. In 2008 these strategies suffered significantly, but the net effect is that there is still no clear relation between risk and return in currencies. Below is the total return to going long the Australian dollar (a high interest rate currency) and short the yen (a low interest rate currency), from 1990 to 2010. Note that on average it makes money (about 1.5%).
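The mechanics can be sketched in a few lines, with hypothetical rates (the 6% AUD and 1% JPY figures are made up for illustration):

```python
# One-year carry trade P&L per $1 notional: long AUD, short JPY.
# Rates and the spot move are hypothetical, for illustration only.
r_aud = 0.06              # high interest rate currency (lend)
r_jpy = 0.01              # low interest rate currency (borrow)
realized_aud_move = 0.00  # realized AUD appreciation vs JPY

# Carry return = rate differential plus the currency move on the long leg.
carry_return = (r_aud - r_jpy) + realized_aud_move
print(f"carry return: {carry_return:.1%}")

# UIP predicts the spot move should offset the differential on average...
uip_predicted_move = -(r_aud - r_jpy)
# ...but empirically high-rate currencies have not depreciated this way.
```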

Brunnermeier, Nagel and Pedersen (2008) noted that

Overall, we argue that our findings call for new theoretical macroeconomic models in which risk premia are affected by funding and liquidity constraints, not just shocks to productivity, output, or the utility function.

What they mean is that the carry trade continued to work 30 years after being identified by Farber and Fama, and it remains a puzzle because no reasonable risk factor can explain it.

Tuesday, September 28, 2010

What's with the US Dollar/Equity Market Correlation?

Lately, the US dollar and the US stock market have been in lock-step. Below are the S&P500 and the US currency ETF "DBV". You don't need statistics to see they are highly correlated (why a good graph is better than a good statistic!).

It has not always been this way. Below is the rolling correlation over the past 252 daily returns, using the dollar's value against the major currencies (see here) prior to 1995, and the trade-weighted dollar subsequently. This currency index rises as the dollar falls, the opposite of the DBV ETF, but the important point is that the current dollar/equity correlation is at an all-time absolute high. The dollar is now driving the stock market--or vice versa--at an unprecedented level. The implications, to me, are not obvious. [addendum: commenter John noted that DBV is a carry trade ETF, not a currency value ETF. So, I still don't know what's going on, but at least I'm not as confused as before]
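The rolling correlation itself is a one-liner in pandas; here is a sketch on synthetic daily returns standing in for the two series (the shared factor just guarantees a positive correlation to look at):

```python
import numpy as np
import pandas as pd

# Sketch of the 252-day rolling correlation described above. Synthetic
# daily returns stand in for the S&P500 and the dollar index, with a
# shared factor so the two series are positively correlated.
rng = np.random.default_rng(0)
n = 1000
common = rng.normal(0.0, 0.01, n)
sp500 = pd.Series(common + rng.normal(0.0, 0.01, n), name="sp500")
dollar = pd.Series(common + rng.normal(0.0, 0.01, n), name="dollar")

rolling_corr = sp500.rolling(252).corr(dollar)  # one value per trading day
print(rolling_corr.iloc[-1])  # most recent 252-day correlation
```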

Conventional Wisdom in 2003

The head of failed mortgage-bubble bank Countrywide, Angelo Mozilo, at Harvard:
While the number of minority homeowners has advanced recently, climbing from 9.5 million in 1994 to 13.3 million in 2001 – an increase of 40 percent – the fact remains that it is still not at a level equal to that of white homeownership. And as President Bush pointed out, the homeownership rate for African Americans is 47 percent and for Hispanic Americans it is 48 percent, a stark contrast to the homeownership rate of 75 percent for white American households.

That means there is currently a homeownership gap of over 25 points when comparing white households with African Americans and Hispanics. My friends, that gap is obviously far too wide.
One of the more obvious resolutions to the Money Gap is the elimination of down payment requirements for low-income and minority borrowers. Current down payment requirements of 10 percent or less add absolutely no value to the quality of the loan. It is the willingness and the ability of a borrower to make monthly payments that are the determinants of loan quality.
We must do this through improved automated underwriting models that take into account more variables, and measure true indicators of risk and willingness to pay. We need an ongoing educational process, not only at the primary market level, but also in the secondary markets and with mortgage insurers to help lead this effort to calibrate the scoring system. And finally, it must be recognized that borrowers with credit scores below what is currently defined as “creditworthy” levels can still be acceptable credit risks. Thus, the credit score bar dividing creditworthy from high-risk borrowers, must be substantially lowered by the GSEs, the secondary market in general, and with bank regulators. The GSEs have made good progress over the last few years in expanding their credit criteria, but I encourage them to become much more aggressive in this regard.

Back in the bubble, the one thing everyone (regulators, academics, politicians, journalists) thought banks were doing right was mortgage lending. This conventional beltway wisdom now has more power.

Sunday, September 26, 2010

Small Banks Wary of 'Help'

Obama's $30B program last week was directed at small banks. The idea was to give banks more 'capital', so they would be more willing to lend to small businesses. The bankers are reluctant this time:
And then there's concerns that the government money will have strings attached.

The fears stem from what happened under TARP, the Troubled Asset Relief Program, formed at the height of the financial meltdown to pump money into banks. Banks that accepted TARP money had to later cut dividends to shareholders and limit compensation to top executives. They were also penalized for early repayment.

In this new legislation, the government is taking steps to avoid the tarnish that accompanied TARP. The key part of this effort: Banks can return the money without penalty if rules governing the small business loans change.

But Chase, the bank CEO in Memphis, isn't convinced. "The rules can be changed any time," said Chase.

If you accept government money, many will argue this implies that the government has a right to micromanage your business.

Thursday, September 23, 2010

Minimum Variance and Beta Portfolio Data

I created indices based on the idea that, since risk is not positively related to return, it is pretty straightforward to dominate the standard indices. Strangely, it isn't easy to find data on how 'high' or 'low' beta stocks are doing, other than incidentally by looking at tech stocks or some other proxy. The implication is that one should buy low volatility portfolios if one is a Sharpe ratio maximizer, or Beta 1.0 portfolios if one is an Information Ratio maximizer. Go here to see the historical performance of these strategies. I update the information every month. No passwords, just free data to play with (downloadable!), data not easily available elsewhere.

Two Shoes are Enough

Anyone who works with a Bloomberg terminal, and knows how much they are paying per month, knows Mayor Bloomberg is a very wealthy man. Yet, interestingly, he owns only two pairs of 'work' shoes:
Mayor Bloomberg is the rare billionaire who can preach penny-pinching without putting his foot in his mouth.
He's been wearing the same shoes for 10 years.
"The mayor owns only two pairs of work shoes," his spokesman, Stu Loeser, told The Post. "One day he'll wear one, the next the other -- and when they get worn down, he has them resoled."

I agree, in that when you find a good pair of shoes you want to wear them every other day (they need a breather). Indeed, many of my favorite objects are like that: chairs, couches, jeans, t-shirts. It makes you understand why, beyond a modest income (~$70k/year), happiness is largely uncorrelated with income.

Tuesday, September 21, 2010

Low Volatility and Beta 1.0 Portfolios

A low volatility portfolio targets the lowest volatility, or lowest beta, stocks. I have found that using a 3-4 factor model to minimize variance generates the lowest volatility portfolios, but the advantage of this approach is only 2-4% in annualized volatility, and given most investors do not understand factor analysis, merely taking the lowest volatility or lowest beta stocks generates a decent approximation while retaining a great deal more intelligibility. The alternative I proposed yesterday was the beta 1.0 portfolio. The difference between the low volatility and beta 1.0 approaches is shown below:

Beta 1.0 merely takes those stocks with betas nearest to 1.0. That is, every six months I estimated betas for every listed US stock with a sufficiently high market cap (about 2500 stocks) and monitored the return, putting merged or delisted stocks back into the index. Data are from CRSP, Compustat, and Bloomberg, and when constructed over the past they include dead companies, so this is a survivorship-free dataset. Prior to 1997 I used monthly returns to estimate betas; after that, daily returns. The beta 1.0 portfolio has had a portfolio beta very near 1.0 in real time (about 1.05). The low beta portfolio merely took the 100 lowest-beta stocks (dark blue obs) every six months, doing the same thing.
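The estimation and sorting can be sketched as follows; synthetic returns stand in for the CRSP/Compustat data, and the factor structure is made up for illustration:

```python
import numpy as np

# Sketch of the beta estimation and sorting described above:
# beta_i = cov(r_i, r_mkt) / var(r_mkt), re-estimated periodically.
rng = np.random.default_rng(1)
n_days, n_stocks = 252, 500
market = rng.normal(0.0004, 0.01, n_days)
true_betas = rng.uniform(0.2, 2.5, n_stocks)
stocks = np.outer(market, true_betas) + rng.normal(0, 0.015, (n_days, n_stocks))

betas = np.array([np.cov(stocks[:, i], market)[0, 1] / market.var(ddof=1)
                  for i in range(n_stocks)])

# Low beta portfolio: the 100 lowest betas.
low_beta = np.argsort(betas)[:100]
# Beta 1.0 portfolio: the 100 betas nearest to 1.0.
beta_one = np.argsort(np.abs(betas - 1.0))[:100]
print(betas[low_beta].mean(), betas[beta_one].mean())
```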

The difference in their mean returns is as follows:

US Returns Since 1962
                        Beta 1.0   Low Beta   S&P500
Avg. Ann. Geo. Return     11.4%      10.4%     9.3%
Ann. StDev                17.4%      13.1%    15.1%

If people cared primarily about beta, a proposition still dominating Business Schools, they should invest in low volatility or low beta portfolios. You can get a much lower 'risk' at about the same return, perhaps even a little higher. Merely cutting out the high volatility stocks generates a better return too, which is why low volatility portfolios, such as Robeco's, should be attractive to institutional investors trying to maximize a Sharpe ratio.

Note the Beta 1.0 stocks have a slightly higher return than the low beta stocks (and a slightly higher volatility). This is not really surprising, in that the perverse volatility effect is generally that high volatility or high beta stocks underperform massively. From low to mid beta, you actually get a modest return improvement.

Above we see the histogram of relative returns, and the relative return distribution is in fact fatter for the low beta portfolio than for the beta 1.0 portfolio. 'Low risk' is risky if you are benchmarking against the index. This was especially pronounced in the tech bubble of 2000 (below), when high beta stocks greatly outperformed, and anyone incidentally plying a low beta strategy probably lost their job (unless they were Warren Buffett, who survived his tech underperformance).

The bottom line is that if you are concerned about relative performance then the Beta 1.0 portfolio is a smart alternative to the low volatility approach, which has as its greatest advantage avoiding those lousy lottery-ticket stocks that over the long run have proven disastrous across every major equity market.

Now you might say, 'what kind of idiot measures risk relative to the S&P? Don't I only care about my consumption, my wealth?' I would say most people care more about relative returns than absolute returns. CAPM pioneer Bill Sharpe consults for pension funds evaluating asset managers, and states his first objective is that 'I want a product to be defined relative to a benchmark' (see Tanous). In fund manager Kenneth Fisher's book The Only Three Questions That Count, the index entry next to Risk says 'see Benchmarking.' When asked about the nature of risk in small stocks, Eugene Fama noted that in the 1980s 'small stocks were in a depression', and Merton Miller noted the underperformance of the Dimensional Fund Advisors small-cap portfolio against the S&P500 for six years in a row was evidence of its risk. But smaller stocks actually had comparable total returns, and higher returns relative to the risk-free rate, in the 1980s compared to the 1970s. It was only relative to their benchmark (the S&P500, large cap stocks) that they had 'poor' returns, highlighting that even Fama and Miller's practical intuition on risk is purely relative, and these are champions of the standard model. Needless to say, other people, especially fund managers, are keenly aware of their year- and life-to-date performance relative to the S&P500.

It seems reasonable to presume that for investment professionals and academics, risk is a variation in return relative to a benchmark. This fact has several important implications for investing optimally, whether you act that way or not.

Sunday, September 19, 2010

Beta 1.0: A New Low Cost Indexing Strategy

I have argued that people pay for hope--lottery tickets, Black Swans--and this shows up as lower returns for highly volatile assets of all forms. In practice it's worse than the raw data show, because highly volatile assets tend to have higher transaction costs: they are often less liquid, with higher bid-ask spreads, and move more when you try to position into them.

Thus, it seems obvious a Sharpe maximizing investor should target low volatility portfolios, and indeed many such indices and funds are being created. The Dutch asset manager Robeco employs economist Pim van Vliet, who has written on the perverse low-volatility results, and they have two low volatility funds, the Global Conservative Equity and European Conservative Equity funds. I think such strategies dominate their benchmarks because they avoid the high-flyers that have very poor returns, and they lower risk: win-win in return-volatility space. Institutional equity managers should flock to these, because they should be sophisticated enough to see that the Sharpe ratio is the best metric for a broadly diversified equity portfolio.

Yet, I understand that many investors are more concerned with 'underperformance', meaning any deviation from a benchmark such as the S&P500. Around the year 2000 I was trying to pitch a low volatility strategy, and running a small fund with my own money as a sideline to my day job as a quant and risk manager. I remember many saying to me that it all seemed fine, but it would have underperformed over the prior year, and no one wants to put money into a strategy that underperformed recently. I thought it was irrational, but given the way money flows into funds--via relative performance--it was rational given their constraints. Alas, it did very well over the next two years. The key is, when you play the averages, using decades of data, you can't really market time as well; not everything has momentum.

I used to be concerned that my result would be arbitraged away, but now I realize that, like many people with a good new idea, my problem is not people stealing it; rather, it's shoving it down their throats. The idea that 'the CAPM' does not work is pretty well established. Even the fact that higher volatility stocks underperform is now pretty universally acknowledged. The implication is therefore obvious (though I had to spend a lot of money to be able to say this): low volatility equity portfolios are a dominant equity investing strategy.

Back in 1993, when I was trying to sell the Northwestern faculty on my finding that lower volatility stocks had higher returns than high volatility stocks, they figured I had just made an error, and hoped I would disappear. My finding couldn't be true because rational investors should not allow it: 'the market' should have a dominant Sharpe ratio, and I did not identify it via the generalized method of moments or Banach spaces. Just control for price, or size, and sort, and there it was.

One key stumbling block for my potential advisers was that this finding would imply many funds were being irrational. Indeed, as Sharpe maximizers, they are. Buying hope is very common, which is why you keep getting stupid spam: lots of idiots answer these ads on the chance that some Duke in Nigeria really needs only a $2000 processing fee to unlock $10MM USD. Bloomberg Magazine's issues on top analysts, and end-of-year reports on top funds, consistently highlight 'top' achievers, those with the biggest gains over the prior year. Returns are never 'risk adjusted' in any way. Someone who merely outperformed the S&P500 by 2% a year, but never made the top 10% in any year, would probably lose his job the first year he underperformed, because he would never have one of those years that generates those dubious awards given out by the industry to itself (e.g., Risk Magazine's Risk Manager of the Year--which historically has included people from Enron and WorldCom).

I argue that because people are better described as envious than greedy, they benchmark, and this eliminates the risk premium. Add to this a 'hope premium' in highly volatile assets, and high volatility stocks are basically suboptimal within a long-only portfolio.

Yet if you want to maximize an Information Ratio, as opposed to a Sharpe ratio, a strategy of targeting stocks 'in the middle' gives you a little lift in return while minimizing benchmark risk. The Beta 1.0 strategy takes the 100 stocks within the S&P500 with betas nearest 1.0; thus, by construction, it has a beta near that of the S&P500 index. The average returns over the past 50 years are as follows:

US Returns Since 1962

                         Beta 1.0   S&P500
Avg. Ann. Arith Return     12.9%    10.5%
Avg. Ann. Geo. Return      11.4%     9.4%
Ann. StDev                 17.4%    15.1%
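As a sanity check, the arithmetic and geometric means in the table above are internally consistent with the standard approximation geometric mean ≈ arithmetic mean minus half the variance:

```python
# Check the table's internal consistency with the standard approximation
# geometric mean ~ arithmetic mean - variance / 2.
def approx_geo(arith, stdev):
    return arith - stdev ** 2 / 2

beta10 = approx_geo(0.129, 0.174)  # vs. 11.4% reported
sp500 = approx_geo(0.105, 0.151)   # vs. 9.4% reported
print(f"{beta10:.2%}, {sp500:.2%}")
```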

The average return over the past 2 years has been as follows:

US Returns Since 2009

                         Beta 1.0   S&P500
Avg. Ann. Arith Return    18.21%    13.1%
Avg. Ann. Geo. Return      15.7%    10.8%
Ann. StDev                 27.6%    24.4%

This is a new strategy that I think is quite attractive. As most portfolio managers both cling to their benchmarks yet underperform by a couple percent, this strategy would track the benchmark in a much lower-cost, straightforward way, and historically has generated a 2% premium.
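The Information Ratio framing makes the appeal concrete. The ~2% active return is from the long-run figures above; the 5% tracking error is an assumed number for illustration, not from the post:

```python
# Information ratio sketch: active return over the benchmark divided by
# tracking error. The ~2% premium is from the long-run table; the 5%
# tracking error is a hypothetical figure, not from the post.
active_return = 0.114 - 0.094   # Beta 1.0 geometric return minus S&P500
tracking_error = 0.05           # assumed annualized stdev of active returns

information_ratio = active_return / tracking_error
print(f"information ratio: {information_ratio:.2f}")
```

A benchmark-hugging portfolio with a small positive active return can have an attractive IR even though its Sharpe ratio is near the index's.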

The Sad Life of a 12-Year-Old Existentialist

Socrates, clearly one to see the glass half full, noted that, "If you get a good wife, you will become very happy; if you get a bad one, you will become a philosopher—and that is good for every man." In other words, if you are less happy you will focus more on what constitutes a good life. I think that's somewhat true, in that ebullient moods are great for socializing but bad for thinking about difficult problems. Nothing focuses the mind like adversity of some sort.

I read this about the Bubble Boy David Vetter, who lived his entire life, right from birth, without human contact, because doctors knew his immune system had a rare defect (an earlier brother died shortly after birth). He seems to have been a thoughtful little fellow, and as he aged he could see it was not going to end well. As he reached puberty, he became more depressed and uncontrollable. He once asked his nurse:

Whatever I do depends on what somebody else decides I do. Why school? Why did you make me learn to read? What good will it do? I won't ever be able to do anything anyway. So why? You tell me why

These are very deep questions from a 12-year-old, who was given everything he needed to live, but had no ability to connect or make an impact on anyone or anything. A life needs real achievement to matter. A week before he died, his mother touched his skin for the first time.

Friday, September 17, 2010

Never Enough Data

One problem you discover very quickly in macro is that there isn't enough data to distinguish between many competing theories. 100 years is a long time, but really not that many business cycles. The problem also pertains to bond default rates. Many people look at bonds, and because the bond market has 1,000 issuers over 10 years, it seems to offer 10,000 observations, a pretty large sample. But if that time period contained only one recession, that's really not a large sample.
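A small simulation makes the point. When defaults load on a common business-cycle factor, the default-rate estimate from 1,000 issuers over 10 years is far noisier than the naive "10,000 independent draws" view implies (all parameters below are made up for illustration):

```python
import numpy as np

# Why 1,000 issuers x 10 years is not 10,000 independent observations:
# defaults share a common business-cycle factor, so the effective sample
# is closer to the number of cycles than the number of bonds.
rng = np.random.default_rng(3)
n_issuers, n_years, n_trials = 1000, 10, 2000
base_pd, recession_pd, p_recession = 0.01, 0.05, 0.1
uncond_pd = (1 - p_recession) * base_pd + p_recession * recession_pd

est_clustered, est_iid = [], []
for _ in range(n_trials):
    recession = rng.random(n_years) < p_recession         # one draw per year
    pd_by_year = np.where(recession, recession_pd, base_pd)
    defaults = rng.binomial(n_issuers, pd_by_year).sum()  # correlated within year
    est_clustered.append(defaults / (n_issuers * n_years))
    # the naive view: 10,000 independent Bernoulli draws
    est_iid.append(rng.binomial(n_issuers * n_years, uncond_pd)
                   / (n_issuers * n_years))

print(np.std(est_clustered), np.std(est_iid))  # clustered estimate is much noisier
```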

Macroeconomic cycles have peculiar sectoral shocks every recession (part of my Batesian Mimicry explanation of business cycles). The past few have been, in reverse order: housing, tech, commercial real estate, and energy. Notice a pattern? The pattern is they are all different! So, if your 1,000 issuers weren't in the bad sector, you can easily gain a false sense of security.

The above is from a paper by Giesecke, Longstaff, Schaefer and Strebulaev, Corporate Bond Default Risk: A 150-Year Perspective. Unlike Reinhart and Rogoff's This Time Is Different, these guys looked not only at the number of defaults but made a stab at default 'rates'. That is, R&R's results are intriguing, but since they report only the number of defaults, it's not obvious whether that number is large or small, because you don't know how many countries were issuing. Facts are very important in economics and finance, because so many important issues, like the equity premium and default rates, carry such large uncertainty. Most people think facts are easy and theory is hard, but actually I think it is the reverse: theory, once you understand it, is trivial, yet important facts are very elusive, often at the bottom of most major disagreements.

It appears that default rates were much higher in the 19th century, and this could be very relevant to the 21st, because so many countries, and government entities within countries, have been on an unsustainable Greek-like binge. A subsequent wave of sovereign defaults would seem 'unprecedented' only if you counted your lifetime as the sample of inference. As Flaubert noted about human folly, 'Our ignorance of history makes us libel our own times. People have always been like this.'

Wednesday, September 15, 2010

America and Haiti

Haiti is very instructive to the US. First, when listening to economists remember they have no consensus on the seemingly straightforward question 'why is Haiti so poor?' Thus, when some economist tells you the optimality of some fiscal policy is "Economics 101", remember that any interesting important macro fact is basically a puzzle to "Economics 101". While economists can model their opinions, they do not agree on the big economic issues of the day any more than garbage men or biologists do.

As the aerial photo of the Dominican-Haiti border shows, property rights are clearly an issue. Property rights are weak in Haiti, so no one has an incentive to cultivate or protect land, and so it turns into a literal sewer. The January earthquake has highlighted this problem, as to date very little debris has been moved according to the AP:
a major obstacle to demolishing buildings has been the lack of property records, which either were destroyed in the quake or never existed at all.

Without an owner's consent, it is difficult to remove debris, he said.

In Haiti, as in the USA, there is great concern that if people own land, this will exacerbate inequality. The solution, that no one, or 'the state', owns it, leads to a wasteland.

My local paper highlights the same problem in my community. Most home loans are non-recourse: the bank has no 'recourse' to go after a borrower if he doesn't pay his mortgage; it can merely take the property. But seizing collateral takes 12 to 24 months, given all the rules designed to protect homeowners. The result is that many properties in foreclosure have no one managing them who wants to maximize their value. Indeed, a homeowner sure to default can cannibalize the house, ripping out $2 worth of fixtures and selling them for $1. Such venal behavior would be immediately condemned by politicians and journalists if done by banks, but because individuals are doing it there's no outrage. Here's a case where giving bankers more power would greatly improve neighborhoods. The following is from the Minneapolis Star-Tribune:
On Labor Day, Deneen Clarke was scraping the woodwork in her bathroom when she heard banging from outside. From her window, Clarke could see workers hauling doors, leaded glass windows and even a built-in buffet out of the century-old home next door.

Clarke called the cops. Another neighbor cussed out the workers. But they learned the police weren't interested as soon as they determined the person who authorized the work -- the homeowner.

It turned out that Clarke's neighbor decided to strip out valuable pieces of her foreclosed house the day before the bank took possession of it.

"The person is lowering my property values by taking stuff out," Clarke said. "The house is worthless next door now... The house can't be occupied the way it is."

One key characteristic of civilization is property rights, where people have clearly delimited areas of responsibility.

Tuesday, September 14, 2010

Is Capital a Cushion for Bad Times?

Recent capital regulation has moved various capital ratios up from about 5% to 8%, depending on how you calculate capital. I think raising the capital ratios is probably a good idea: given that the 'too big to fail' implicit guarantee on bank debt implies huge option value for equity owners and no default risk for debt owners, this is better than addressing 'too big to fail' alone.

Yet it is important to remember that equity is not a cushion for unexpected losses in some Merton and Perold (1993) type model of financial institutions. In the Merton and Perold model, both credit and market risk are completely understood by insiders and outsiders of the firm. Their measure of capital implies an increase in return on capital simply by bringing all firms under one big legal entity, as in that case profits would be strictly additive while capital would benefit from diversification. The empirical contrast, in the form of many separate firms in equilibrium, implies they are missing something big.

One way to see that capital is not merely a cushion for hard times is to consider the following example. Assume a goose lays an average of 100 golden eggs a year, normally distributed with mean 100 and standard deviation 10. The annualized discount rate is 10%, making the present value of this goose $1,000 (100/0.10).

What is the amount of capital needed such that the annualized probability of default is 1%?

A 1% adverse production outcome over a year is 76.7 eggs (100 minus 2.33 standard deviations), so lending such that a 10% interest rate generates that kind of interest payment implies that 767 in debt could be serviced by the eggs each year, with only a 1% chance of failure.
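The goose arithmetic above can be checked with Python's standard-library normal distribution:

```python
from statistics import NormalDist

# The goose example: annual eggs ~ Normal(100, 10), discount rate 10%.
mean_eggs, sd_eggs, r = 100.0, 10.0, 0.10

pv_goose = mean_eggs / r  # perpetuity value: 100 / 0.10 = 1000
# 1st-percentile annual output: about 2.33 standard deviations below the mean
bad_year = NormalDist(mean_eggs, sd_eggs).inv_cdf(0.01)
# Debt whose 10% interest can be paid in all but the worst 1% of years
max_debt = bad_year / r

print(round(pv_goose), round(bad_year, 1), round(max_debt))
```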

But if only 76 eggs were produced one year, meaning the goose owner could not pay his interest out of cashflow, that same equity owner could easily borrow 1 egg against his equity collateral and stave off bankruptcy. The cashflow loss would have to be improbably large to overcome this obvious solution.

It is the variability in the present value of the cash flows that is relevant, because there is basically no one-year scenario that would reduce the equity owner's present value so much that he could not borrow to pay off his debt of $767. So now assume the prospective mean of the egg distribution follows a random walk, and, to simplify, that the standard deviation of cash flows around that mean is zero. Assume the 1-period liability holders demand a 1% annualized default rate. With 100 as the mean number of eggs, and 10 as the standard deviation of changes in the mean, the liability holder is willing to finance this goose with 767 in debt (the mean falls by 23.3 or more with only 1% probability). But say the mean number of eggs falls to 76, so the new present value is 760. The equity owner now has a book value of -7. Is that bankruptcy?

Certainly not. Even in this case the equity owner can, if he can produce 7 eggs, still maintain his equity interest, and thus control over future revenues. If the mean next period goes up by 50 eggs, he gains 500 in equity value; if it falls any further, he is worth zero. The equity owner has an incentive to borrow small amounts until this position becomes untenable, and he does this mainly by arguing that the prospective value is not really 760, but something higher--and bank executives are fundamentally salesmen. So the key issue is not the change in eggs per se, but whether the change is so large that the option value is so far out of the money that equity owners are unable to pull this trick on debtholders.

The most important determinant of failure is not the size of the cushion or the shock, but rather the prospect implied by the nature of the recent loss: is it a one-time effect or permanent? Is the 'business model' doomed, or just temporarily troubled?

Defaulted debt averages a 50% recovery rate, so on average equity owners are able to game this until the value of the asset is 50% of the nominal liability. This implies such financial wrangling is common in sinking enterprises, which only 'fail' when outsiders stop buying their excuses (almost all failed companies, according to insiders, needed only 'a little more time').

But this all gets back to an article I wrote in 1997 for Derivatives Quarterly, arguing that Value at Risk is not directly related to economic risk capital. Sure, it is related, but very indirectly. How much would Citigroup have to lose to default? That depends very much on the type of loss. If the loss seems a one-timer, it won't affect expected returns going forward. If there seems to be a secular trend in the business such that the current loss is the tip of the iceberg, a small loss is sufficient, and debtholders are just salivating for a legal pretext to grab the assets, chop the company up, and sell it in parts.

Historically, there is absolutely no evidence that leverage ratios are related to future default rates when you look at banks cross-sectionally (excluding the obvious cases where you measure leverage less than 6 months from failure, which is too late to affect anything). I think this is for two reasons. First, historically the industry considered regulatory minimums too high, so all banks had leverage ratios clustered close to the regulatory minimum, making it hard to distinguish bad from good. Second, shocks to banks occur episodically, in radically different forms, so bankers tend to under-provision for them (this follows from the mimicry genesis of business cycles). Bankers all think they have way too much capital, because there's a massive survivorship bias within banking, where everyone in charge pre-2008 had never experienced more than a couple percent drawdown. Sure, energy lending got whacked in the 1980s, commercial real estate in 1990, and there were others, but that was 'those guys', idiots.

Saturday, September 11, 2010

Is Levitt's Abortion Study as Good as it Gets?

When someone questioned the value of economics, Tim Harford tried to counter with an example of an abstruse economic result that was important, and came up with the following:
What, for instance, of the famous contention by the economist Steven Levitt and his co-author, John Donohue, that legalised abortion in the US reduced the crime rate about 18 years later? This is a hypothesis about history, but one that no historian is well-qualified to judge. Instead, the hypothesis has been tested statistically with some ingenuity; the statistical models themselves have been contested, pulled apart, found wanting in some respects, double-checked using alternative data and tested against the experience in other countries. The debate continues. Is this process “science”? I am not sure. But it certainly isn’t idle banter.

Is the Freakonomics signature study the best example of clever and important economics? This study was also mentioned when Russ Roberts asked economists to name an empirical study that uses sophisticated statistical techniques that were so well done, it ended a controversy and created a consensus.

Levitt's abortion paper is a result celebrated by many for reasons they would never admit: the thought that abortion is eugenic because it allows parents to avoid having a child they cannot nurture (Planned Parenthood founder Margaret Sanger was a big eugenicist). But there is good reason to think that cheap abortion could be dysgenic: abortion for anticipated lack of parental resources would be common among thoughtful moms, less common among moms who view life as a series of random events that happen to them. It is not obvious how this plays out.

The Levitt abortion paper seemed to document that abortion was eugenic, showing that relative crime rates declined in those states that legalized abortion before the nationwide legalization in 1973. The second Freakonomics book was far less celebrated, I think in large part because it had no such useful rationales (prostitutes are lazy and like their work? Gross and mean!).

Any complex assertion, such as that abortion lowers crime, involves many different forces affecting cause and effect independently. In this case, there was a secular crime rise from 1960 to 1975 that occurred internationally, and a crack wave that created a crime peak around 1990.

The bottom line is that any fact should be discernible through several independent lenses. In Levitt's case, he used the fashionable tactic of an 'instrument', that different states had different abortion laws (illegal in some states, not in others, prior to 1973). That gets you kudos among economists who think models are more important than their application. Angrist and Krueger's estimation of education effects via instruments, highlighted in Mostly Harmless Econometrics, is considered 'best practice' econometrics. But does the abortion-crime effect stand up to more mundane corrections, such as looking at arrests on a per-capita basis as opposed to mere numbers? No.
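To make the 'instrument' tactic concrete, here is a minimal sketch of instrumental-variables estimation on synthetic data. The data, coefficients, and variable names are all invented for illustration; this is the generic textbook technique, not the Levitt-Donohue specification.

```python
# Sketch of the IV idea: x is endogenous (correlated with an unobserved
# confounder u), so OLS is biased. An instrument z shifts x but is
# independent of u, so the IV (Wald) ratio cov(z,y)/cov(z,x) recovers
# the causal effect. All numbers here are made up for illustration.
import random

random.seed(42)
n = 10_000
beta_true = 2.0  # the causal effect we want to recover

z = [random.gauss(0, 1) for _ in range(n)]  # instrument (e.g., a law change)
u = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [beta_true * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # biased: x is correlated with u
beta_iv = cov(z, y) / cov(z, x)   # consistent: z is uncorrelated with u

print(f"OLS estimate: {beta_ols:.2f}")  # biased away from beta_true
print(f"IV  estimate: {beta_iv:.2f}")   # close to beta_true
```

The cleverness is all in the claim that z is truly unrelated to the confounders, which is exactly the sort of assumption that one test on one dataset cannot verify.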

Consider that the peak years for serious violent crime by 12- to 17-year-olds, as reported in the federal government's authoritative annual survey of crime victims, were 1993 and 1994, well after abortion was legalized across all states in 1973. How does that square with the hypothesis? It does not, because by then the peak crime demographic should have been affected by eugenic abortions, lowering crime. The econometrics crowd's approach is to look at the data via one very clever test and ignore independent views such as the aforementioned issues. It's pretentious intellectual bullying, and profoundly short-sighted because, in the long run, no one remembers clever papers that backed incorrect or irrelevant hypotheses, like the hypothesis that abortion is eugenic.

Another inconvenient data point for the Levitt-Donohue thesis is the black-white homicide ratio. After abortion was legalized, the black fertility rate declined more, and the black abortion rate rose more, which should have implied a lower black-white homicide ratio via increased eugenic behavior among blacks. Instead, the differential worsened.

The best empirical work looks at a variety of data, as opposed to being very clever with one set of data. Robustness comes not from standard errors so much as from noting that if 'x' is true, it should show up in various statistics, not in one test on one set of data (though, with a really clever and abstruse metric!). There are important issues like how you measure crimes, what other secular trends were occurring (crack), or how punishment for crime changed over the period. This latter work isn't as interesting to economists because it's parochial, and economists want to find eternal, generalizable truths like Newton's laws, because that's what scientists do. This makes them truly academic: uninterested in facts and issues related to messy, real data, because such facts do not play into their comparative advantage, which is slick econometrics.

So, if Harford thinks that Levitt and Donohue's abortion paper is the best example of what econometrics can do for us, he might as well admit that econometrics is just another way to make an argument. It has no privileged insight into profound truths, and the eternal debates will never be 'settled' by econometric methods. Reality will reveal itself over time, and econometrics does not speed up our discernment of this reality by much.

Friday, September 10, 2010

A Fatal Conceit

I was watching BloggingHeads, and this week's liberal vs. libertarian banter included Annie Lowrey, who basically argues we should spend more on everything (presumably via the logic of the government multiplier, which implies a free lunch from deficit spending). Alas, economists are responsible for making this kind of wishful thinking intellectually defensible (though, I believe, wrong).

There's this gem from Matt Yglesias, who claimed an attempt to limit government spending as a percent of GDP was 'idiotic', back in March 2010:
The appropriate level of overall government spending should be determined by adding up the level of specific spending on worthwhile things. Programs that are ineffective, or whose impact is small relative to the impact of the taxes needed to pay for them, should be cut or eliminated. Programs that are good should be maintained or expanded. Settling on an arbitrary number first and then making unspecified cuts to reach the target is ridiculous.

The key issue is the presumption that ineffective programs would be cut or eliminated. This happens only very rarely, and only over strong opposition. AFDC and busing were only tossed because of Republican action; there was no internal re-assessment by the original Democratic proponents. Basically, no political movement admits when it oversteps, which is why such cuts are so rare.

Amtrak? NASA? The Department of Education? The mohair subsidy? Who, after the housing crisis, would extend mortgages to borrowers with only a 3.5% down payment? The US government, of course, which is quickly becoming the primary guarantor for homebuyers with mortgages under $300k! The government, like most liberals, thinks the solution to any failed policy is to double down.

Once government gets into something, the size of the loss is irrelevant, because there are always some speculative spill-over effects that can act as a pretext to maintain some patronage system.

Thursday, September 09, 2010

Theories that Explain Everything

Stephen Hawking has a new book out, in which he lends his great credibility to "M-Theory", another name for 11-dimensional string theory. But as reviewers note:
One big issue is that M-theory makes more than one prediction about the nature of the universe. In fact, the number of predictions it makes is somewhere around 10 to the 500th power. That's a 1 followed by 500 zeroes.

"On the surface, that sounds like a bad thing," Krauss said.

Yeah, I'd say that sounds like a bad thing. Many think they have cast off the shackles of superstition and now come to their thoughts through rational, objective science (aka, positivism). Yet, many of those same people believe ideas based on their potential to either explain everything or generate the kind of future they like.

Anyway, it is interesting that many people think understanding the origin or end of the universe is important for humans to know. I have my doubts.

Wednesday, September 08, 2010

Was Friedman Right on Markowitz?

Harry Markowitz's 1950 dissertation on portfolio mathematics was the basis for his Nobel Prize in 1990. At the time, Milton Friedman thought it contained too much statistics as opposed to economics, because it focused on algorithms to determine the portfolio weights for the efficient frontier. Nothing stings like the lukewarm approval of men we respect, and so, in a scene surely envied by many a thesis advisee, Markowitz noted in his Nobel lecture:

When I defended my dissertation as a student in the Economics Department of the University of Chicago, Professor Milton Friedman argued that portfolio theory was not Economics, and that they could not award me a Ph.D. degree in Economics for a dissertation which was not in Economics. I assume that he was only half serious, since they did award me the degree without long debate. As to the merits of his arguments, at this point I am quite willing to concede: at the time I defended my dissertation, portfolio theory was not part of Economics. But now it is.

It is understandable that Friedman found the nuts and bolts of generating the efficient frontier of little relevance to economics, as with hindsight the efficient frontier in return-standard deviation space has nothing to do with risk according to the latest general equilibrium models. Further, no one has ever used this method of generating portfolios, because in practice returns do not increase as a function of volatility or correlation. Markowitz's big idea was merely the benefit of diversification, which is a good idea, but simple enough and hardly novel (the idea was mentioned in the Bible and by Shakespeare--but that didn't stop Telluride Asset Management from suing to keep me from applying the top-secret idea of mean-variance optimization).
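For readers who haven't seen the mechanics: here is a minimal sketch of the two-asset mean-variance math Markowitz formalized. The expected returns, volatilities, and correlation below are made-up inputs chosen purely for illustration.

```python
# Two-asset portfolio math: expected return is a weighted average, but
# variance is not, which is the entire diversification benefit.
# All parameter values here are invented for illustration.
import math

mu1, mu2 = 0.05, 0.10  # expected returns of assets 1 and 2
s1, s2 = 0.10, 0.20    # volatilities (standard deviations)
rho = 0.2              # correlation between the two assets

def portfolio(w):
    """Return (expected return, volatility) for weight w in asset 1."""
    mu = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 \
        + 2 * w * (1 - w) * rho * s1 * s2
    return mu, math.sqrt(var)

# Closed-form weight of the global minimum-variance portfolio
w_min = (s2 ** 2 - rho * s1 * s2) / (s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)
mu_min, sd_min = portfolio(w_min)

print(f"min-variance weight in asset 1: {w_min:.3f}")
print(f"portfolio vol: {sd_min:.3%}")  # below either asset's own vol
```

The punchline is that the minimum-variance portfolio's volatility is lower than either asset's alone, which is the whole (ancient) insight; everything beyond that is algorithmic bookkeeping over more assets.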

As the vast majority of Markowitz's early work focused on algorithms that turned into a dead end, I think Friedman’s intuition has been vindicated: the efficient frontier, and therefore how to get there, is irrelevant.

There's always the hope that, like Wiles's proof of Fermat's Last Theorem, a complex three-day proof exposes an insight of incredible depth and profundity, an idea that requires some specialized technical knowledge, as this would vindicate the method of those with that technical knowledge. Great classical music is an example of something that validates rigorous study, because if you study music rigorously you appreciate it more. Alas, it rarely works that way.

Bottom line: diversification is still good, but volatility is not priced risk (and Friedman, as usual, was right).

Monday, September 06, 2010

Risk and Return in General

My theory is that life is like the purple chart, where risk premiums exist on Baa bonds and in riding up 3 years on the yield curve, but that's it. After that, risk is something we take with negative expected returns, because risky assets play into our overconfidence, signaling, and other outside-the-box needs. (The regular chart is the conventional academic wisdom, from Cam Harvey's website.)
I've expanded my SSRN paper on risk and return, adding two new sections and updating the empirical survey. I now have sections outlining the creation of the standard model and the history of empirical testing, so it's now a 150-page beast (but skimming is encouraged!). Indeed, I put the paper into HTML and posted it here, and you can skim around sections pretty easily there. In the HTML version the equations look kind of funny, but the pdf is pretty clear. I also have an updated set of references on asset pricing here.

Greg Mankiw argues that students should take economics in college. As to finance, I think my paper is superior to the standard Corporate Finance course you will learn in college or graduate school, because those courses neglect to mention that the fundamental theory, that risk and expected return are positively correlated, is an empirical failure. Professors have been very successful at presenting the CAPM and its spawn as a triumph of the social sciences, in a way similar to how macroeconomists used to present Keynesian macro models before the Phillips curve started to do multiple backflips. The profs are filled with wishful thinking based on ever more obscure econometric tests that prove their big idea works, a science no less than thermodynamics. It doesn't work, not even as an approximation. You have a finite life; don't waste it on theories popular among professors but not practitioners. Plus, like Khan Academy, it's free.

Friday, September 03, 2010

Stupidest Website Ever

At, you can swap files with random people on the web. Just click on a file you have, upload it, and hit swap. You then get a file from someone else. I uploaded a text file with stock tickers, and got back a jpeg of a moped. Its profound stupidity made me laugh.

Wednesday, September 01, 2010

FDIC Problem Banks

The FDIC keeps raising the number of 'problem' banks, kind of like Moody's lowering the rating on senior CMO tranches to BB when they are already trading at less than 80% of par. They added a bar to this chart, now up to 829. What, exactly, is a problem bank? On the FDIC website, we learn:
Problem Banks - The FDIC creates reports on problem or troubled banks in the aggregate. We do not make the details of this list publicly available.

There's always a reason for decreased transparency, and it's always lame. You don't have to name names, but you can say with some specificity the criteria you are using, as opposed to the vague report given here. Vagueness is never wrong, but it's also never right. You might as well say 'we used our collective common sense', but as common sense is inversely proportional to the size of the collective 'we', that ensures it's classic hindsight risk attribution.

Problem banks were insignificant at the end of 2007, when banks were all sitting on loans without any documentation and with built-in collateral-appreciation assumptions. If our regulators did not see this, it's extremely naive to think they will foresee the next crisis. Charge-offs peaked in December 2009, and non-current loans peaked in March 2009, yet the number of problem banks continues to rise. Hindsight like this is giving economists and stock analysts a good name.