A AAA rating implies about a 1 in 10,000 chance (0.01%) of defaulting annually. This is very difficult to verify from experience alone, but it was plausible because the default rate on regular AAA corporate debt, per Moody's data from 1920 to 2007, was zero. Thus they figured it was less than the AA rate, which actually had some defaults, around 0.06%, and so it seemed a reasonable extrapolation. But a slew of AAA rated paper trading at 50 cents on the dollar basically means that the AAA rating was wrong, rejected, statistically, at the 0.001% level. So if AAA ratings are wrong, how does one adjust them? Is it now 1 in 100 (like a BB+ rating), as opposed to 1 in 10,000?
Such changes in probabilities are not peculiar to finance. The Space Shuttle, prior to the Challenger disaster in 1986, had an internal probability of catastrophic failure of 1 in 100,000; after the crash, this was raised to 1 in 50. If this is the magnitude of the change in default perceptions, it will take a long time to reverse, because via Bayesian inference the rating agencies will need several years of cross-sectional observations to bring these estimated probabilities back near their old levels. There is no way to speed this up; such is validation.
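A rough illustration of that Bayesian point, using a conjugate Beta-Binomial update with entirely made-up counts (not any agency's actual methodology): one crisis cross-section swamps the prior, and clean years claw the estimate back only slowly.

```python
# Beta(alpha, beta) prior over the annual default probability, with a
# mean near the AAA-implied 0.01%. All counts below are illustrative.
alpha, beta = 1.0, 10_000.0

# One crisis year: 20 defaults among 200 rated issues.
alpha += 20
beta += 180
crisis_mean = alpha / (alpha + beta)   # jumps to roughly 0.2%

# Count how many clean years (200 issues, zero defaults each) it takes
# for the posterior mean to fall back below 0.1%.
years = 0
while alpha / (alpha + beta) > 0.001:
    beta += 200
    years += 1
```

Under these toy numbers it takes over fifty clean years just to get back to 0.1%, still ten times the old AAA level: the asymmetry between how fast bad news arrives and how slowly credibility is rebuilt.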
Rating agencies used to rubber-stamp some types of securities, deferring to the market and path dependence, and so now everyone is skeptical of all of their investment-grade ratings. The result is that everyone assumes the worst, and only the Federal Government has a AAA rating, because it can always print money to pay its debts. Thus the market is now missing a credible stamp of investment grade (which usually implies you can assume it's true), and everything is trading as if it were junk (ie, below BBB-). Even worse, via the ordinal ranking this means that B and BB rated bonds trade like distressed securities. The market is constipated because the secondary market for debt has been shut down, as previously useful ratings are now considered suspect, and our financial institutions do not have the wherewithal to warehouse all the debt in the economy. The New York Port Authority had no bids for AA rated bonds, clearly showing no one believes they are near that quality.
But consider that this bad mortgage debt was not only rated AAA, it traded as if it were 'risk free' back in 2006. If debt is trading as if it is AAA, it would be very difficult to defend rating it as, say, BB. The market is so big that many people are independently corroborating your judgment. Surely many of them looked at these deals in detail; indeed, probably hundreds of thoughtful people did, including regulators at the Office of Federal Housing Enterprise Oversight (OFHEO) and the Fed. The rating agencies were analyzing mortgages the same way everyone else, with very different incentives, did. As the market weakened underwriting standards from the 1990s onward, it seemed to make no difference, and academics and regulators were saying these new mortgage innovations were not material. Legislators with a lot of regulatory power implied these were morally righteous changes.
If a rating agency had decided in 2006 that the cumulative effect of these changes increased the probability of default from 0.01% (AAA) to 0.50% (BB+), this would have been ridiculed; and if a convincing case had been put forward, it would have shut down the housing bubble earlier. But then wouldn't the rating agency have been blamed for this entire mess? After all, the total amount of bad housing debt is a fraction of the total loss in wealth from this financial debacle, as some unknown accelerator mechanism has destroyed at least 10 times the value of the bad assets at its center. Estimates of housing wealth destruction were 'only' around $200B back in March 2008, after the mortgage market had collapsed. Year to date global stock markets have destroyed over $10 trillion subsequently. Now, if Moody's had had the prescience in 2006 to see, say, 50% of this, and caused, say, a $1 trillion mini-correction while avoiding the $10 trillion loss, would they have been congratulated? Remember, you don't get to view alternative time-paths in the multiverse to prove your actions mitigated a disaster; rather, you are left looking like you screwed everything up.
Further, that's all with hindsight. The graph above shows how investment-grade default rates vary over time (mean about 0.15%). I wouldn't know how to characterize that distribution, with a mode at zero and some positive numbers that appear non-stationary. Such distributions get funkier at lower default rates. Since low-probability defaults are clustered in time, you probably will not observe the event that will prove you right within your working life. The net benefit of changing prospective default probabilities from 0.01% to 0.50% is thus probably not something you could expect to 'prove' during a working career at Moody's. Thus, standing athwart history yelling 'Stop!' sounds neat, but in reality that's only realistic when you have no effect, because generally people do not get credit for preventing low-probability events from happening.
Given the cyclical nature of default rates, I just don't see how you can design a mechanism to reliably estimate a 1-in-10,000 annual default rate for the diverse, often novel, set of securities the agencies are asked to evaluate. We just don't have enough data on comparable issues. Perhaps every new security needs a top rating of BB+ until it generates 50 years of data.
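A back-of-the-envelope version of that data problem uses the statistical "rule of three": with zero defaults observed in n issue-years, the 95% upper confidence bound on the annual rate is roughly 3/n. The security counts below are my own illustrative assumptions.

```python
# To certify a 1-in-10,000 (0.01% = 1e-4) annual default rate with zero
# observed defaults, the rule of three says you need about 3 / 1e-4 =
# 30,000 clean issue-years of history.
target_rate = 1e-4
issue_years_needed = 3 / target_rate          # 30,000 issue-years

# For example, 600 comparable securities tracked over 50 years:
securities = issue_years_needed / 50          # 600
```

And that calculation assumes independence across issues, which the clustering of defaults in time makes doubtful; correlated defaults push the required history higher still.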
But AAA has never traded as if it were only 0.01% away from Treasuries; in fact, isn't the premium more like 0.50%?
i second the 1st anon. also why would they need several years? you have an observation, bang, you adjust your priors.
Yeah, but there is a distinction between AAA and Treasuries, because there is a huge difference between zero and 0.01%. Further, regulatory risk capital is quite different for Treasuries. Historically AAA traded on average about 70 basis points above Treasuries. The point is, for Asset Backed Securities of all types, even those that have still performed (eg, auto loans), AAA has lost all credibility, and this 'sets the curve' for the rest.
Just a side comment... I heard a talk by John Lipsky, First Deputy Managing Director of the IMF, and according to research by the IMF, the overvaluation/bubble of housing in the U.S. was relatively smaller than the overvaluation in housing in a number of other countries, such as Spain.
When the housing bubble went bust in the U.S., the leveraged nature of the U.S. home owner (because of mortgage debt) brought problems directly into the financial sector. In Europe, there is less leverage from mortgage debt, but the busting of their housing bubble has still had significant wealth effects impacting the consumer and economic outlook.
I've been guilty of this, a tendency to take a very U.S.-centric view of this crisis, but given the global nature of the impact and the massive selloff of U.S. and global securities, this appears to be part of a broader situation and set of problems than just craziness in U.S. housing.
Bill Gross's Investment Outlook for December hits the nail on the head:
"My transgenerational stock market outlook is this: stocks are cheap when valued within the context of a financed-based economy once dominated by leverage, cheap financing, and even lower corporate tax rates. That world, however, is in our past not our future. More regulation, lower leverage, higher taxes, and a lack of entrepreneurial testosterone are what we must get used to – that and a government checkbook that allows for healing, but crowds the private sector into an awkward and less productive corner."
All the bullish cheerleading in the world won't change the facts stated above.
"Estimates of housing wealth destruction were 'only' around $200B back in March 2008, after the mortgage market had collapsed. Year to date global stock markets have destroyed over $10 trillion subsequently."
I think the paradox is only an apparent one. When (by most estimates in late 2005) banks ran out of corporate bonds to securitize, they resorted to doing it with CDOs, and when they ran out of those too, with synthetic bonds and CDOs. In the process, they assumed (synthetically) huge leverage, while not hedging super-senior tranches, which were deemed quadruple-A: so risk-free that you'd be a bozo to hedge or sell that good stuff, rather than keep it and collect a small but super-safe income stream. Which was true, until it was not, as the cliche goes. As it happens, those super-senior tranches were in excess of 85% of the pool of the CDOs. Their risk models failed them quite spectacularly.
To recapitulate: I heartily agree with Mr Falkenstein's view that predicting the default rate of a sample that was selected precisely because it almost never defaults is only rational if you are getting paid handsomely to do it, without being actually exposed to the defaults yourself - which is precisely the situation S&P and Moody's found themselves in. You'll find no self-respecting astronomer willing to predict where and when the next Tunguska event will occur - until, that is, you offer $1mm, no strings attached - then you will find plenty.
Thus, with the huge leverage in play (for the structure of SIVs, though not necessarily all of his conclusions, check Alan Kohler), and the perverse incentives at work, I do not find the losses all that staggering.