A pernicious reaction to failures is to add factors and nuance rather than simplify and integrate: an additional risk measure here, an additional risk oversight group there. But risk is a scalar, not a vector. The problem with risk as a vector, that is, a set of numbers like {3.14, 2.73, 1.41}, is that its relevance is left as an exercise for the reader. One number can be high, another low. Should one look at the 'worst' number? Average all the numbers? Throw out the high and low numbers? In general, such a bevy of numbers leads to some being above average and others below. Indeed, as the risk vector grows over time, some numbers are invariably above average, because reviewers love to give everyone a few marks that need improving and a few that offer some encouragement.
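To make the ambiguity concrete, here is a minimal illustration of my own (not from any particular risk system, and the numbers are the made-up ones above) of how the same risk vector supports several contradictory summaries depending on the aggregation rule, which is precisely the decision left to the reader:

import statistics

# Illustration only: one hypothetical risk vector, three defensible summaries.
risk_vector = [3.14, 2.73, 1.41]

summaries = {
    "worst":   max(risk_vector),                 # look at the 'worst' number
    "average": statistics.mean(risk_vector),     # average all the numbers
    "trimmed": sorted(risk_vector)[1],           # throw out the high and low
}

for rule, value in summaries.items():
    print(f"{rule:>7}: {value:.2f}")
# worst: 3.14, average: 2.43, trimmed: 2.73 -- pick whichever suits your argument.

A scalar admits no such choice: it is right or wrong, and someone owns it.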
The famous 1986 space shuttle disaster involved, with hindsight, the serious error of flying during low temperatures that were known to cause problems with the O-rings. Now, this issue was raised as part of the review process, along with about a hundred other such 'mission critical' risks. Of course, it was overridden. A good risk number should go off rarely, so that when it does go off, you respond. Otherwise, it's like the 'check engine' light on a crappy car: everyone thinks the light is broken, not the engine (and they are usually right).
Take the UBS case. They have Group Internal Audit, Business Group Management, GEB Risk Group Subcommittee, Group Head of Market Risk, Group Chief Credit Officer, UBS IB Risk & Governance Committee, IB Business Unit Control and IB Risk Control, UBS Group Risk Reporting, and the Audit Committee. I could be missing some. This clusterpuck of groups is a recipe for disaster, because tricky issues are generally left for someone else. Lots of groups, each with their own opinions, is isomorphic to having a risk vector, not a risk scalar. For example, the VAR for subprime used only a 10-day horizon, and was based on only the past 5 years of daily data. Whose idea was that? Who is accountable? I bet somewhere you will find someone who made the appropriate criticism, but in the end, they were ignored. Basically, if no one person can see the risk of a product in its entirety and report it to a CEO who can understand what is being said, the company should not be doing these things.
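To see why those two choices matter so much, here is a bare-bones historical-simulation VaR sketch. This is my own toy illustration, not UBS's actual methodology; the function, its parameters, and the simulated return history are all hypothetical. The point is simply that the lookback window and the holding-period scaling are quiet modeling choices that drive the final number:

import numpy as np

def historical_var(daily_returns, lookback_days, horizon_days, level=0.99):
    # Loss quantile from the most recent window of daily returns,
    # scaled to the holding period by the usual square-root-of-time rule.
    window = daily_returns[-lookback_days:]
    daily_var = -np.percentile(window, 100 * (1 - level))
    return daily_var * np.sqrt(horizon_days)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=5000) * 0.01   # fat-tailed toy history

# A 5-year window (~1250 days) with a 10-day horizon, versus a longer
# window and horizon: same data, very different risk numbers.
print(historical_var(returns, lookback_days=1250, horizon_days=10))
print(historical_var(returns, lookback_days=5000, horizon_days=250))

Someone chose those parameters, and that someone should be identifiable and accountable.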
Take the Moody's response to the subprime mess. They suggest that, for structured credits, they could supplement their credit grades with something that gives additional information, say, AAA.v2. As Cantor and Kimball note in their February 2008 Special Comment, "Should Moody's Consider Differentiating Structured Finance and Corporate Ratings?":
The additional credit characteristics could be conveyed through symbols that would not be physically appended to the rating but instead provided in separate data fields, analogous to other existing rating signals such as rating outlooks and watchlists. This approach would avoid entanglement with the existing rating system – a potential issue for both rating agencies and users of ratings data – and would encourage the addition of more information content over time because such information would not have to be appended to the rating itself. For example, an issue could have a "Aaa rating, with a ratings transition risk indicator of v1, with a data quality indicator of q3, and a model complexity indicator of m2."

A single group, with a single number, such as a properly constructed VAR or Moody's rating, is very informative because it is unambiguous. Such a number should have lots of documentation that explains how different risk factors were addressed, but at the end of the day, risk, like return, is a single number. I know there are caricatures of Value-at-Risk, but those errors are rectifiable within the framework: fat tails, long holding periods, absence of specific historical data. The nice thing about it is that when it's wrong, it's wrong, and people and methodologies can be held accountable. But a vector can always be right, if only you had looked at the various signals that were bad (in a large vector, some always exist). Risk measurement is about creating an unambiguous evaluation, so that it can be compared directly to another business line, without all sorts of qualifications. Eventually, you have to explain this to 60-something senior management, and while they should have some level of knowledge, they shouldn't have to integrate 10 risk groups' findings on risk reports that each contain 5 numbers and lots of qualitative nuance. That's how UBS lost $37B.
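The point that fat tails and long holding periods are rectifiable within the framework can be made in a few lines. The sketch below is my own illustration with made-up parameters: it swaps a Student-t quantile and a longer horizon into a simple parametric VaR (square-root-of-time scaling is a crude approximation, used here only to show the mechanics) and still ends with one unambiguous number:

import numpy as np
from scipy.stats import norm, t

daily_vol = 0.01   # assumed daily return volatility (hypothetical)
level     = 0.99   # confidence level
dof       = 4      # fat-tailed Student-t degrees of freedom

# Thin-tailed, 10-day VaR versus fat-tailed, one-year VaR:
# different assumptions, same one-number framework either way.
var_normal_10d = -norm.ppf(1 - level) * daily_vol * np.sqrt(10)
var_t_1yr      = -t.ppf(1 - level, dof) * daily_vol * np.sqrt(250)

print(f"normal, 10-day VaR:    {var_normal_10d:.1%}")
print(f"Student-t, 1-year VaR: {var_t_1yr:.1%}")

Either number can be wrong, but at least it can be wrong, which is what makes accountability possible.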