Mark Kleiman and Heather MacDonald discuss crime policy, and though I found the whole debate very informative, I disagreed with the following point by Kleiman.
Kleiman's argument is as follows: more police lower crime irrespective of the criminal justice system behind them. He notes that unlike other major cities, NYC has lowered its crime rate while also lowering its prison population. Thus, the payoff seems to come from police on the streets, not judges or prison cells.
But I think he neglects the major demographic change in New York City over the past 10 years: basically, it has become too expensive for criminals to live there. Manhattan is one of the highest-income places in the United States with a population greater than 1 million. I don't see how an area whose downtown boasts an incredible density of $200k/year households is somehow relevant for Los Angeles, which is large but much less dense or wealthy. In this case, NYC is the exception, not the rule. Look at cities like Philadelphia, Cleveland, or Dallas; to highlight New York, the largest collection of wealthy people in North America, is like using Lance Armstrong as the case study for cancer patients.
I'm not saying poor people are morally worse than rich people, merely that they commit more violent and property crime, which is what we mainly measure. Indeed, true cowardice is at least as common among the wealthy as the poor, and their ethics are a bit like post-modern science: whatever is good for them, rationalized very articulately. Thus, those who use their education and intelligence to defend the indefensible, the immoral, the cruel, are more pathetic than your average whore or crack dealer, because their intellect gives them power to do more harm to more people (think of Noam Chomsky defending Pol Pot in the 1970s). But, given the issues of day-to-day mischief, noise, and violence, I'd rather live next to a hedge fund lawyer than a crack dealer, even though they probably have equal chances of reaching heaven.
Saturday, May 31, 2008
Why I Hate Microsoft #459
I paste a set of data from Excel into Access, and always get the following sequence of nanny controls that are totally annoying and useless:
The Value you entered isn't valid for this field (ie, there are #N/As)
-hit OK
Do you want to suppress further error messages telling you why records can't be pasted?
-hit Yes
You are about to paste 4689 record(s)
-hit Yes
Records that Microsoft Office Access was unable to paste have been inserted into a new table called 'Paste Errors'
-hit OK
Because the middle two are yes/no, I can't just hit Enter when I hear a beep. Who in their right mind would say OK, Yes, and then, when told they are about to paste 4000 records, say 'no'? Why not simply ask us 100 times if we are sure, because, you never know, and it's all about protecting the user from himself. I spend more time trying to disable Microsoft's protection than fighting unauthorized users or viruses.
Friday, May 30, 2008
Simplify English
We have 42 different sounds in English, and we spell them 400 different ways. Isn't that a rather silly thing to do?
...
Many other languages have undertaken spelling reforms in the 20th century, including French, Greek, Spanish, Swedish, Irish, Japanese and Hebrew. In 1996, four German-speaking countries agreed on a comprehensive spelling reform of the language.
From today's WSJ. There's a movement to simplify spelling. I couldn't agree more. This is truly egalitarian and progressive, because the current system is biased towards those with higher IQs and education, who know how to spell 'diarrhea' correctly. English has a lot of inefficiencies, but if you read Olde English you see it has gotten better. At least we aren't British English, with 'colour' and pronouncing schedule as 'shedule' (fools!), or French. At Northwestern, I had a fellow student from francophone Africa who spelled his name something like Houencheconne, because that's how the French authorities spelled it when he pronounced it for them. No teacher came close to pronouncing it correctly when they read his name. I see he has changed it to Leonard Wantchekon. I know if I were Wally Szczerbiak, I would change the spelling, because I would get sick of spelling it over the phone. Falkenstein is pretty phonetic, though I guess I could change it to 'Falconstine' to make it easier to pronounce.
I switched to a Dvorak keyboard about 5 years ago. Dvorak has the most frequently used keys on the home row, so it's easier to type. I read somewhere that it doesn't matter much, and I guess I'm a pretty good typist, so my max speed is really constrained by my ability to spell in my head (I often misspell homonyms) rather than by my fingers, but I do think I think less about fingering with Dvorak than before. It took me about 3 months to really get used to it, but now it's much easier. Plus, if you have a Dvorak keyboard, snoopers won't download porn on your computer when you are out for lunch, because they start typing and see all this gibberish. So I think such changes are feasible, and good.
Thursday, May 29, 2008
The Importance of Being Right
Day to day, being right is not so important. Goodness knows the average bloviator in academics, talk shows, or the blogosphere is judged more on delivery, metaphors, who they know, and the popularity of what they are arguing than on whether they are right. But posterity has a huge bias towards those who were right, regardless of their methods.
I read Einstein's Luck, which goes over several prominent scientists and how they often skewed their empirical work. We see that Pasteur, for example, had good reason to be skeptical of his theory against abiogenesis given the data he had available: he didn't publish his own experimental results contradicting his theory because he believed they were 'errors'. Gregor Mendel fudged his genetics data.
And take the famous confirmation of Einstein following WW1, which was, in some sense, a grand conspiracy. Einstein's General Theory of Relativity was only a few years old, yet academics were eager to put the nightmare of perhaps the most pointless large war in history behind them, and show the common bond of the old adversaries. Proving a German's theory correct was perfect for the cause. The null hypothesis, set up by standard Newtonian physics, was that there should be a 0.85 arc-second deflection in light from stars behind the sun, while Einstein predicted a 1.7 arc-second deflection.
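For reference, here is the standard back-of-envelope behind those two numbers (my own gloss, not part of the original debate): general relativity predicts twice the Newtonian deflection for a light ray grazing the sun's limb,

```latex
% Light deflection at the solar limb: GR is twice the Newtonian value
\delta_{\mathrm{GR}} = \frac{4 G M_{\odot}}{c^{2} R_{\odot}}
 \approx \frac{4(6.67\times10^{-11})(1.99\times10^{30})}{(3.0\times10^{8})^{2}(6.96\times10^{8})}
 \approx 8.5\times10^{-6}\ \text{rad} \approx 1.75'' ,
\qquad
\delta_{\mathrm{Newton}} = \tfrac{1}{2}\,\delta_{\mathrm{GR}} \approx 0.87'' .
```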
In 1919, an eclipse offered a chance to measure the degree to which the sun bends the rays of light from far-off stars, a prediction of General Relativity. The famous English physicist Arthur Eddington was a World War I pacifist, and so had a predisposition to mend the rift between German and English academics. He made a trek to the island of Principe, off the coast of West Africa, one of the best locations for observing the eclipse.
Eddington did not get a clear view of the stars in 1919 because it was cloudy during most of the eclipse. But he used a series of complex calculations to extract the deflection estimate from the data, and came up with an estimate of 1.6 arc-seconds from his African voyage. Measurements from two spots in Brazil came in at 1.98 and 0.93 arc-seconds. Eddington threw out the lower measurement because he was concerned that heat had affected the mirror used in the photograph, making its standard error too large. Thus, his efforts proved Einstein's theory correct.
Subsequently, scientists have concluded that Eddington's equipment was not sufficiently accurate to discriminate between the predicted effects of the rival gravitational theories. In other words, Eddington's reported standard errors were too low, and the point estimate was too high, given the data he had. Yet for decades this experiment was cited as the proof of the General Theory, even though in the 1960s, when researchers tried to redo the experiment with a similar eclipse and methodology, they found they could not replicate it.
In the late 1960s, using radio frequencies rather than pictures from an eclipse, Eddington's results were ultimately confirmed. That is, he was right, but he still tendentiously presented his data based on his prejudices, and a true scientist without any biases would have been more skeptical of the theory that light is deflected by mass until then.
In contrast, the early works supporting the CAPM by Fama and MacBeth (1973) and Black, Jensen and Scholes (1972) were used as proof of the success of the CAPM for decades: beta was positively related to stock returns. But with better data, more data, and better empirical methods, this relationship now appears to go the other way. A true theory should become more obvious with more data. I think this suggests that, unlike with Einstein, Pasteur, and Mendel, the theory here is wrong, and all the Stochastic Discount Factor and Arbitrage Pricing Theory extensions are merely prolonging the life of what will be seen as a theory built on a flawed assumption. History is forgiving of tendentious tweaks to the data when you are right, but it quickly forgets you when you are wrong (see the exposition on why it's wrong in Why Risk is Not Related to Return, and practical implications including beta arbitrage and minimum variance portfolios).
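For the curious, here is a minimal two-pass (Fama-MacBeth style) sketch of that kind of test, run on simulated data where the CAPM holds by construction; the sample sizes and parameters are made up for illustration and are not taken from the original papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_stocks = 240, 100

# Simulated market excess returns and stock returns with known betas (zero alpha)
mkt = rng.normal(0.005, 0.04, n_months)
true_beta = rng.uniform(0.5, 1.5, n_stocks)
stock = np.outer(mkt, true_beta) + rng.normal(0, 0.08, (n_months, n_stocks))

# Pass 1: time-series regression of each stock on the market to estimate beta
X = np.column_stack([np.ones(n_months), mkt])
betas = np.linalg.lstsq(X, stock, rcond=None)[0][1]   # slope coefficient per stock

# Pass 2: each month, cross-sectional regression of returns on estimated betas
Z = np.column_stack([np.ones(n_stocks), betas])
gammas = np.array([np.linalg.lstsq(Z, stock[t], rcond=None)[0][1]
                   for t in range(n_months)])

# The average cross-sectional slope estimates the price of beta risk
prem, se = gammas.mean(), gammas.std(ddof=1) / np.sqrt(n_months)
print(f"estimated beta premium: {prem:.4f} per month (t = {prem / se:.2f})")
```

In this simulation the slope comes out near the market premium, as the CAPM says it should; the post's point is that on real data the same slope has come out flat or negative.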
It is important to have correct prejudices.
Wednesday, May 28, 2008
surgery highlights strategy over tactics
so yesterday i had surgery on my shoulder to repair a torn labrum and rotator cuff. it is hard to do much, because i can't move my dominant arm. but what was interesting was that everyone i met when i got there, nurses 1 thru 3, anesthesiologist, etc, all asked which shoulder (right)? three people even initialed the shoulder, so we would have an evidence trail if they did the wrong one, i guess.
i told the nurse if i left with a hysterectomy, they would be in big trouble. she said if i left with a hysterectomy i would be famous.
Tuesday, May 27, 2008
Why FARK is a Daily Read
In a post noting that a 'Motivational speaker arrested for attempted murder, reminds us to chase our dreams', about an ex-felon who fell off the wagon in a major way, a commenter notes:
Did you realize the Chinese symbol for "alcohol- and methamphetamine-fueled shooting spree" is the same as the symbol for "opportunity"?
Monday, May 26, 2008
Memorial Day, Courage, and Risk Taking
Overcoming Bias asks, why celebrate war heroes? As Robin Hanson notes:
Yes warriors, dead and otherwise, deserve some honor, but to me this seems all out of proportion... How about warriors who died on other sides, or in other wars? How about civilians who died or sacrificed in wars? How about those who prevented wars?

I think we primarily celebrate military heroes out of a tradition of doing so, because in the past such people truly were responsible for our survival. For example, if the Mongol horde won the battle, they killed all the men and raped all the women. So guys willing to put their life on the line to prevent this were pretty important, and owed a debt of annual gratitude.
But in modern conflict, it's not so obvious. The past 200 years of fighting seem pretty silly, as our old foes, the Germans and Japanese, are so much like us today that it's hard to think that, even had they won, they would not have turned out much the same through some other means. The US Civil War killed a lot of people, but Brazil and Canada were able to move on from slavery without such a mess. And the Revolutionary War with Britain seems a lot like an adolescent chafing under the control of a parental unit (Tony Blair gave a speech in the US Congress and made a joke about the War of 1812, funny because the cause of that war has zero resonance today, seemingly pointless).
Venerating soldiers reminds us to celebrate courage, but physical courage is somewhat easy to venerate because it is so intuitive. However, I think we need to remember most of us deal with intellectual, not physical, courage, in the same way most of us today are involved in intellectual, not physical, work.
There’s an important distinction between physical and intellectual courage. Physical courage is the ability to act despite the risk of pain, injury or death. Most ancient texts on courage are examples from male warrior culture. Old Testament heroes like Joshua and David and the warriors of Greek and Roman mythology demonstrated the heart and mind that led them to persist in the face of danger or hardship. Intellectual courage, in contrast, is mainly based on facing humiliation, the thought that one's beliefs or actions might cause a loss of reputation or status, because rash risk-taking does lower one's status. For an intellectual, the risks and challenges of advocating courageous ideas are the likelihood of feeling or looking foolish, of not being accepted by colleagues and people one respects, and reputations have a lot of inertia.
If you have objectively low alpha in what you are attempting, your willingness to attempt it will not be seen positively by your status group. They will see, based on their cues, that you had no chance, and you will be mocked for the obtuseness implied in such an objectively absurd form of risk-taking. To seriously try to dance with flair or wear a really eye-catching new outfit invites scorn and ridicule when you fail so badly that the joke is not the failure itself, but your mind-numbingly clueless belief that you are John Travolta or Jennifer Lopez. And this holds for perverse persistence as well. An 18-year-old aspiring rock star is cool, a 43-year-old aspiring rock star is pathetic. Courage is therefore not viewed in isolation, because if it is rash or excessive it is considered merely foolhardy, not admirable. Physical courage perhaps is even more context dependent, as fighting for a dumb idea is courageous but shameful.
It should be remembered that intellectual courage is only admired ex post for those who were right, and were doing something that was unorthodox at the time. My kids all learn about civil rights heroes as examples of courage, but it should be remembered that while Ruby Bridges and other people who fought for civil rights in the 50s and early 60s were courageous, supporting civil rights today is about one of the easiest things to do. It is easy to forget that Galileo's famous observation that all objects accelerate at the same rate was not so obvious. If you push something faster, it accelerates--a heavy weight pushes harder on your hand, ergo, it should push downward faster. Add to that the observation that leaves fall to the ground more slowly than rocks, and I can see why people would assume weight is positively correlated with acceleration. Around the same time, Tycho Brahe, the man whose measurements allowed Kepler to formulate his laws of planetary motion, did not accept the heliocentric model of the solar system, in spite of his very good data and good-natured persuasion from Kepler--isn't it obvious we are at rest?
Intellectual courage in real time means the average respectable, knowledgeable person will lower his estimation of you, as with those who believe in cold fusion or intelligent design. And it will only be respected if you turn out to have been presciently correct. Of course, this is obviously true for investing, where those who called the internet boom in 1996, or the bust in 2000, are considered courageous. Those who were bearish in 1996, or bullish in 2000, however, are considered merely foolish.
Courage is very important to self-discovery, finding one's niche, and making real breakthroughs, and it is only admirable in combination with other virtues, like prudence (ie, being right on scientific facts or predictions) and justice (being right on morality). If you broke with conventional wisdom on something wrong, like those who fought for communism, or those who pushed to eliminate DDT in Africa, you were self-righteous, courageous, and also a fool whose actions created a lot of harm. We venerate the physical courage of soldiers because, unlike intellectual courage, it is easier to measure, and we generally presume it was for a righteous cause.
Babies are Not Smart
My baby Izzy will be sui generis someday. But right now, she's pretty much like every other 12-month-old human. As The Onion noted in a seminal 1997 study, babies are cognitively challenged, totally incapable of doing the most rudimentary things for themselves. In contrast, a 12-month-old octopus, despite zero parental instruction, can find its own food, avoid predators, and mate (without a 'birds and bees' talk).
We were at a neighbor's having a barbecue, and Izzy was bouncing rhythmically to the music being played. Clearly she likes music, as most babies do. She responds to touch, faces, and the baby-talk of adults (high in pitch, shortened words). She does not appreciate abstract reasoning in the least, currently accessing only the medullary core of her brain. But somehow, incapable of the most elementary logical proofs, she will become a better thinker than any computer at things that really matter to people. Now, Cyc is a project to build a set of propositions (all men are mortal) and rules (men get older as time goes forward) into a proto-human, something that could pass the Turing test, something that Izzy will do well before Cyc (see Lenat's talk here on Google tech talks). This project has been much more difficult than imagined.
It therefore seems either our sensual appreciations are necessary for creating the logic of an adult, or our sensual perceptions merely help us figure out the basics of survival, the 4 F's (fight, flight, feed, and mate), that allow us access to the next level of human thought. That is, like acquiring language, humans tap into this after a certain level of functional competence has developed, and our innate sensual appreciations are only necessary for logic, irony, and other high-level human thought processes in the sense that they keep us alive long enough to reach this reservoir (like the way the alien monolith was revealed to Earthlings in 2001: A Space Odyssey only after we had developed sufficient technology to reach the moon).
Why do Libraries Have Crappy Websites?
In this Bloggingheads link, Aaron Swartz notes that libraries have crappy websites. Swartz helped create RSS and Reddit, so I think he's an authority on literature databases. It is interesting that libraries, which should have the most pointed experts in this subject, are manifestly inferior to Google, Amazon, and Netflix in creating a database that users can search to find things they want. They seem happy with the objective of merely providing information, not providing it in an efficient way (ie, it's there, if you are patient enough). I guess 'Library Science' is a good example of the aphorism that any field with 'science' in its title is not a science. I know in my library system there's a lot of stuff, but if I haven't been to the site in the past 6 months, I forget how to get access to the right pages, and I'm stuck. The search tool is terribly unforgiving if you don't spell the search phrase correctly, or it gives you way too many items ranked arbitrarily.
It's sort of like if you found that economists were the worst investors, psychologists the least happy people, or accountants the ones making the most tax mistakes. It tells you there's something rotten in the academic paradigm, and that for generating good ideas, science needs competition and feedback from outside the field [this is one reason why I'm skeptical of String Theory, where there are no new predictions, and it is only even understood by fellow string theorists, who clearly don't want to throw away years of specialized training by saying the field is a waste]. Cloistered, self-referential fields of study that seem to go on happily, demonstrating nothing of interest to those outside their field, are all too common, and this applies to many sub-threads in even the most successful academic fields.
Sunday, May 25, 2008
Career Advice From Jenna Jameson
I saw UFC 84, which was quite good. A lot of people, like John McCain, call it human cockfighting, but it's actually less dangerous than boxing, because a hurt boxer takes more punches to the head, whereas a hurt fighter is pounced upon and usually submits to a choke quickly. The combination of jujitsu--which emphasizes joint locks--wrestling, and boxing makes it the most awesome sport. It takes more than just talent, because joint locks are really tricky. Brock Lesnar found this out when he was pummeling Frank Mir, who nimbly grabbed his ankle in a jujitsu hold and finished Lesnar.
BJ Penn, the headliner of this event, is truly amazing, simply a natural genius at this sport. I had a suspicion his opponent, Sean Sherk, was in for trouble when he asserted that because Penn is from a wealthy family, he is soft. Comfort without struggle--and without the sense of insecurity that motivates struggle--leads to a disproportionate amount of self-destructive decadence, and presents a unique challenge to wealthy people raising children. But that's a stereotype, and just as we should override group-average information when assessing an individual we know a lot about, BJ Penn has issues, but I don't think his affluence is really operative right now. To think the poor kid will beat the rich kid because 'he wants it more' is simply not true.
The card also had the last fight for Tito Ortiz in the UFC, because Tito has been continually carping about the UFC underpaying its fighters, publicly feuding with UFC President Dana White in a way that makes all bosses crazy. Tito Ortiz was the champion from 2000 to 2003, but that was a long time ago. He's now, probably, the 10th-ranked fighter and moving down. Unfortunately, his girlfriend, Jenna Jameson, is giving him career advice. Tito basically argues that as he helped create the UFC brand, and it now generates hundreds of millions in revenue a year, he wants a piece. At the press conference after the fight, Jameson was heard yelling that Ortiz should be at the top table because he's a champion. It's all kind of sad, because Ortiz lost, he is no longer champion, and having your girlfriend yelling like that just makes for an embarrassing situation for everyone.
Jameson is one of the more successful adult film stars today, as her website ClubJenna.com grosses $30 million per year, and she clearly figured out that as talent she could get a lot more as a part-owner than as merely an employee. But she's a headliner, and used that Q-rating to brand her company. Ortiz was a headliner. He can't break out on his own, because no one is going to pay $40 to see the 10th-ranked fighter moving down. As the head of the competing IFL stated, if he thinks the UFC doesn't pay him enough, he should understand that no other league will be able to pay him anything close. Further, say the UFC is worth $1B; should Ortiz have a piece of that? Well, no. Sure, he helped build the brand, but he had a contract, and was an employee. Perhaps he should have negotiated some options, but you can't argue for them ex post. So, while I'm sure Jameson is qualified to give advice on all sorts of things, her flawed analogy between his status as 'talent' and hers is just wrong. Bad analogies are at the bottom of every bad business decision.
But mixed martial arts is changing. Upon graduating from college, Johny Hendricks and Jake Rosholt recently decided to go into professional mixed martial arts rather than international wrestling, and they were two- and three-time national champions, respectively. That's a big data point, when the best college wrestlers are dreaming of the UFC rather than the Olympics. Thus, at first you had people like Matt Hughes, Don Frye, and Randy Couture, who were merely very good college wrestlers, but the next generation will be the cream of collegiate wrestling. This is a sport on the upswing.
Saturday, May 24, 2008
Who Knew? '3' is a Really Cool Number.
Among all the numbers, 3 is right up there with 1 and 2. Did you ever consider that 3 is the first digit of Pi? The sixteenth digit of e? Well, I read The Number Three in American Culture by Professor Alan Dundes of the University of California, Berkeley, and was much informed. I am not sure if it's a joke, but the author notes the following singular cultural properties of 3:
- In folk speech one can give three cheers for someone, but not two or four. (And each cheer may consist of "Hip, Hip, Hooray.")
- The starter for a race will say "One, two, three, go." He will not count to two or four. (Cf. the three commands "On your mark, get set, go.")
- The alphabet is referred to as the ABC's; one does not speak of learning his AB's or his ABCD's.
- In jokes, there are commonly three principals: a minister, a priest, and a rabbi; or a blonde, a brunette, and a redhead.
- In horse racing the three possibilities are win, place, and show. In many American games there is more than the binary possibility of winning or losing. The third alternative, that is, drawing or tying, allows the choices win, lose, or draw.
I agree that 3 is important. I would put it way up there. Not above 2, but before 4.
Zoo with Horses
So we went to the zoo today, and I guess because it's cold in Minnesota (like Moscow), the zoos here are not so good. No lions, alligators, elephants, or hippos. Instead, lots of ungulates, which simply are not the world's coolest animals. Whatever, zoos are for kids. But then we get to the 'Wild Horses' exhibit, and I just think it's lame. What's next, cats and dogs? I ask the zoo volunteer, what is the difference between these horses and ones in barns across the US? The answer: 'these are wild'. OK. The 'wild horses' look just like regular horses, only a little scruffier. I get the sense it's one of those Hans Christian Andersen stories.
Friday, May 23, 2008
Family Crests
There's a castle Falkenstein, and my Falkenstein ancestors came over around 1860 or so. I figure they were probably not the proprietors of the Falkenstein lands, merely serfs or something. Why else move to America? Further, my name represents merely the patrilineal ancestry, one branch in a big bush of genetic contributors, which, though all from the Germanic/Swiss areas of Europe, implies that the 'Falkenstein' branch gets too much credit or blame for my genetic inheritance, as there were Bucholzs, Horstmanns, Boughs, Bylers, and lots of other names that are just as much of me genetically. Perhaps, if I'm just trying to feel good about my ancestry, I should pick the most successful branch and emphasize that (eg, my mother's father's mother).
Indeed, Castle Falkenstein was built around 1280. Assume that the most awesome, smart, sexy Falkenstein from that period was my direct ancestor. That's roughly 29 generations ago, and 2^29, the number of parents of my parents, etc., at that point, is about 536 million people (obviously, this implies lots of incestuous repeats). Even if my father's father's father (and so on) was the super cool Falkenstein, it's pretty immaterial to my genetic makeup. I'm sure there were lots of losers without castles in that list of 536 million. Thus, in the big scheme of things, I should be concerned about those 536 million ancestors' allele frequencies, which would be less a function of surname than of the geography of my ancestors, which I suppose is approximated by the surnames of the, say, 16 great-great-grandparents I know.
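A quick sketch of that arithmetic (the 25-year generation length is a rough guess):

```python
# Back-of-envelope: ancestor "slots" at a given generation, assuming ~25 years per generation
years = 2008 - 1280
gens = round(years / 25)       # roughly 29 generations
slots = 2 ** gens              # ancestor slots at that generation
share = 1 / slots              # expected genetic share of any single slot
print(gens, f"{slots:,}", f"{share:.1e}")
# -> 29 536,870,912 1.9e-09
# Far more slots than people alive at the time, so the list is full of repeats,
# and any one castle-owning ancestor is a rounding error in the genome.
```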
So, a surname is probably not nearly as important, genetically, as we presume. But, just for fun, I looked online for my family crest. Interestingly, it seems every family-crest website has a different Falkenstein crest. One site lists a German and a Jewish version of the Falkenstein crest, the difference being merely the colors. I bet both the Jewish and German sides feel the other stole their crest [I'm not Jewish]. I have a suspicion these family crest websites are just making this stuff up.
See different crests here, here, and here.
Thursday, May 22, 2008
The Lesson of Don Patinkin on Global Warming
Don Patinkin's exposition of Keynesian macroeconomics in the 1960s, with page after page of equations, created what appeared to be a very compelling theory, because it was consistent and logical at every step. All the partial derivatives 'made sense' [a partial derivative is what happens when you change one variable and assume all others remain constant]. It was not until the empirical failures of the 1970s that people began to ignore, not refute, Patinkin, and this is the fate of bad theory: based on faulty assumptions, all the nice logic is not wrong, but irrelevant. Yet Wikipedia describes Don Patinkin's dissertation, Money, Interest and Prices (1956), as a 'tour-de-force', also noting that 'his 1956 treatise remains an example of Neo-Keynesian theory at its best'. Funny, Eugene Fama called the application of standard utility assumptions and statistics to create the Capital Asset Pricing Model (CAPM) a 'theoretical tour de force', but also acknowledged it has virtually no empirical support. I guess 'tour de force' means 'impressive but useless'. The exceptional veneration of once-popular work that turned out to be a dead end is common in economics. Too many influential, almost-dead professors to annoy; better to say nothing at all and hope the geriatric bastards don't get it.
I was a student of Larry Meyer, who went on to become a Federal Reserve Governor, and he wrote a textbook for undergrads with a Keynesian model-building approach. His firm, Macroeconomic Advisers, still generates forecasts and sells them. But this was not emulated, and grad students don't flock to this line of research, because if you do simple vector autoregressions (VARs), they forecast as well as these seemingly much more logical Keynesian models. Both do poorly, but the Keynesian approach is worse. Patinkin's magnum opus, like the CAPM, is impressive, but a dead end. Surely there is enough written in both genres to find some things in them that are useful, but anything that large is bound to contain a few.
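For what the 'simple VAR' benchmark amounts to, here is a minimal VAR(1) fit by least squares on made-up data; the two series are stand-ins for macro variables, not actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Made-up stationary series standing in for, say, GDP growth and inflation
data = np.zeros((T, 2))
A_true = np.array([[0.5, 0.1], [0.0, 0.7]])
for t in range(1, T):
    data[t] = A_true @ data[t - 1] + rng.normal(0, 0.5, 2)

# Fit VAR(1): regress y_t on a constant and y_{t-1}
Y = data[1:]
X = np.column_stack([np.ones(T - 1), data[:-1]])
B = np.linalg.lstsq(X, Y, rcond=None)[0]        # shape (3, 2): intercepts plus lag coefficients

# One-step-ahead forecast from the last observation
y_hat = np.concatenate([[1.0], data[-1]]) @ B
print("estimated lag matrix:\n", B[1:].T.round(2))
print("one-step forecast:", y_hat.round(3))
```

That is the whole model: no behavioral equations, no identities, just lags, and it is a surprisingly hard benchmark to beat out of sample.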
The bottom line is that the economy is a complex, interdependent system. Modelers knew about the Lucas critique, feedbacks, and nonlinearities, such as the liquidity effect, but all to no avail. The models were, with hindsight, noble efforts, but ultimately failures. Thus, I am very skeptical of global warming models, because I sense they have similar issues. The only real way to judge these models is for them to make objective, shorter-term forecasts that hold them accountable. I don't see any database of forecasts for temperatures next year, so I sense they are unfalsifiable.
In my experience studying macroeconomics, you see that before out-of-sample data arrive (as when models are initially built on historical data), lots of people with great credentials, logic, and sophisticated mathematics are highly certain these models will work. I have not seen any complex economic model work. I don't see why the global climate should be any different, and those models are even less testable. I suspect global warming is used as a pretext to implement policies many people already want, just as the polar bear's appearance on the endangered species list will be used to prevent development, especially oil drilling.
Wednesday, May 21, 2008
Moody's Glitch Transposes A2 to Aaa
Well, one response by Moody's could be, how many of you guys know what A2 means anyway? Is it better, or worse, than A1? Aaa? When John Moody left S&P to start his own firm, he had to create a new scale, so S&P kept the more intuitive AAA, AA+ scale while Moody came up with his wicked nomenclature.
Anyway, the stock fell 15% today because Moody's admitted that a AAA security should have been rated 'several notches' lower but for a problem in their computer code. 'Several' means 'more than 3, but not many'. Let's say 5. Five notches below Aaa is A2. Nice, as Borat would say.
The problem was with Constant Proportion Debt Obligations, instruments so complex Fitch wouldn't even rate them. An analyst at CreditSights in April 2006 noted that "though we cannot pinpoint exactly where the flaw in the ratings methodology is, there are a number of things to give us unease". [I should note that these things are infinitely less complicated than global climate models].
As mentioned in my post on why bank examiners should have access to PnL by line of business, a simple sniff test should have exposed this instantly. Even in 2005, these things had AAA ratings but 200 basis point spreads to Treasuries, while regular AAA debt had only a 20 basis point spread. A great opportunity, you might ask? Well, no. Moody's rates things based on an expected loss methodology. It puts collateral through scenarios, and notes the various trigger points that cause structures to collapse, in calculating the expected loss of the various senior notes backed by these pools of collateral. That is, the expected loss on a AAA rated note on a pool of B rated credit is highly nonlinear, accounting for the fact that AAA pieces often have zero losses until several barriers are breached.
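Here's a toy version of that nonlinearity; the collateral, correlation, and subordination numbers are invented for illustration and have nothing to do with Moody's actual CPDO model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_loans = 50_000, 100
pd_loan, lgd = 0.10, 0.6       # rough B-ish collateral: 10% cumulative default prob, 60% loss severity
attach = 0.25                  # senior note only absorbs losses above 25% of the pool
rho = 0.2                      # asset correlation

# One-factor Gaussian copula: loan i defaults when its latent variable falls
# below Phi^{-1}(0.10) ~= -1.2816
z = rng.normal(size=(n_sims, 1))                    # systematic factor
e = rng.normal(size=(n_sims, n_loans))              # idiosyncratic factors
defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * e < -1.2816

pool_loss = defaults.mean(axis=1) * lgd             # loss as a fraction of the pool
senior_loss = np.clip(pool_loss - attach, 0, None) / (1 - attach)

print(f"pool expected loss:   {pool_loss.mean():.2%}")
print(f"senior expected loss: {senior_loss.mean():.4%}")
# The senior note loses nothing in the vast majority of scenarios and only starts
# losing once the subordination is burned through, so its expected loss is orders
# of magnitude below the pool's, and extremely sensitive to the correlation and
# default assumptions plugged into the model.
```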
But Moody's is consistent, and so the expected loss on these AAA rated senior pieces should be the same as the expected loss on all AAA rated debt, if they ran their models correctly. Now, one can imagine the market, due to illiquidity, might give these things a 20 or 40 basis point premium, but 200? That's the market saying 'your models are wrong', which, in effect, is what the absence of S&P and Fitch ratings also said.
Agency ratings are best thought of on a natural log scale: the natural logs of the average default rates are roughly 1 to 1.5 units apart between adjacent 'grades' from AAA to B (.01%, .04%, .13%, .36%, 1.1%, 4.9%). So being off by several notches is material, because it goes from a 'notch' to a 'grade', which is discernible intuitively.
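A quick check of that spacing, using the default rates quoted above:

```python
import numpy as np

# Average default rates by grade, AAA down to B, as quoted above
rates = np.array([0.0001, 0.0004, 0.0013, 0.0036, 0.011, 0.049])
print(np.diff(np.log(rates)).round(2))
# -> [1.39 1.18 1.02 1.12 1.49]
# Each full grade is roughly 1 to 1.5 log units of default risk, so an error
# spanning a whole grade is a qualitatively different level of risk.
```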
The key question is: how can you trust something you don't understand? One easy way would be not to. But this would leave one pretty catatonic. We rely on things every day we don't understand, like how an airplane flies, or how my prescription drugs work, or even how my brain tells me to do something. And so we rely a lot on brand names and the reputation for integrity and thoroughness they represent. Thus, a hit like this should be very costly to Moody's, but that suggests the solution is endogenous, in that it should be in Moody's self-interest to correct this error.
In the meantime, when evaluating risk look at returns too. If they seem out of proportion to other returns on seemingly similar risks, you're doing it wrong.
Tuesday, May 20, 2008
Good Definition
George Orwell noted that the real test of character is how you treat someone who has no possibility of doing you any good. That's a pretty good definition, one I think many (most?) people would fail. See here on the sociobiology of ethics that is not strictly self-interested. I think it is important to note that Dawkins's wonderful book, The Selfish Gene, argued, following George Williams and William Hamilton, that the real mechanism of selection operates not at the species level but at the individual, if not the genetic, level; it was targeted at those who argued that species somehow act in their collective self-interest. But whereas Dawkins was railing against an overreach, the group selection mechanism is not totally bankrupt. Selection can operate at several levels: gene, individual, group.
Northern Trust on a Tear
So I was at a presentation today by the CFO of Northern Trust, a 'bank' based in Chicago with about $80B in assets on its balance sheet and about $800B in assets under management. You should check out their material here. This company has a current P/E of around 16, which basically means the market thinks a dollar of earnings here is worth about 50% more than a dollar from its peers. This is an example of a finance company that avoided the subprime mess, and really it has been the past 12 months, in which they have risen 20% while the financial indices have fallen 25%, that has generated the separation.
The CFO, Steve Fradkin, had several themes that I found really refreshing. First, he noted that if you look at JPMorgan, you see that it only recently combined with Chase, which had just recently joined with Chemical, etc. The cumulative effect of all these undigested pieces makes for several problems. First, it screws up earnings, as special charges are created, amortized, etc., making it very difficult to know how you are doing, because earnings contain, or exclude, so many special charges that 80% of the earnings are always 'special'. Secondly, you have turf wars, and different cultures fighting for the soul of the company. This can really lead one to take one's eye off the ball.
And 'the ball' is customers. Financial companies are in the business of intermediation, connecting savers to companies that need capital. They aren't in the business of speculating, or arbitraging a seemingly great spread in auto ABS.
Fradkin noted, with a nice display of corporate candor, that in the early 1990s they were a standard, smaller bank that didn't have some master plan to become what they became. By merely focusing on their niche, institutional clients and wealthy individuals, and forsaking the infinite number of other ways to make money, the growth took them to someplace they did not anticipate. They became one of the leading firms in assets under management, though you don't hear much about them because they don't have a retail focus.
Fradkin also noted that when he became CFO five years ago, he visited the Nasdaq, where his stock trades (ticker NTRS), and the trading guys there said they could give him information on block trades, etc. What would he do with this, he asked? Good question. I wish I could get information on CFO visits to exchanges. Those CFOs who are interested in daily stock market activity would be great shorts, because you simply can't build a business looking at that kind of information.
By sticking to one thing, understanding it, and doing it well, they have grown fantastically, adding a lot of shareholder value. Contrast that with CitiCorp, which is involved in everything, and will probably sell itself off in parts so that they can finally understand what business they are in.
Monday, May 19, 2008
Academics Document Dead Strategies
In the latest Journal of Finance, there's an article by Ni, Pan and Poteshman about volatility trading. Basically, they argue that increases in demand for options by non-market makers help predict realized volatility. OK, fair enough. Someone hears a rumor, buys options to capitalize, the rumor is realized, and volatility increases. Implied volatility and option volume help predict realized vol.
But here's where I lose interest. They use daily data from 1990 to 2001. The option market has become so much more liquid and efficient since 2001, I doubt anything in this period is relevant. Prior to 2001, it was pretty hard to trade this stuff algorithmically: spreads were very wide (and still are), quotes posted weren't very deep (ie, you couldn't do much size at the bid or ask), and systems for electronic execution were primitive. Heck, even Nassim Taleb's fund made a lot of money in 2000.
There are lots of strategies that made money in the 1990's, such as any short-term mean-reverting strategy in stocks. Most of these are merely of historical interest, because they are long gone. High frequency data from 10 years ago is about as interesting as reading that if you created a search algorithm for the interweb (aka World Wide Web), you would be rich! It's true. And sharing videos online, another money maker.
Signs of Brain Damage
This double-knockout would be funny, except when a person is hit in the head, and then lies on his back with his arms sticking straight out, it is a sign of severe brain trauma. So the guy on the right is in big trouble. There was a fight with Tank Abbott early in the UFC where he did this to a guy (see here). Time to switch to your day job.
Sunday, May 18, 2008
Bank Examiners Need to See Detailed PnL
Every bank crisis is a good time for reevaluating how banks are measured, monitored, and managed. Recently, UBS produced a detailed Report to Shareholders that is a remarkably informative description of how to lose $37B in one year. It would be great if every major corporation had the courage to generate this kind of informative mea culpa, which suggests there is little cognitive dissonance about the error. The report highlights several key issues with applicability beyond the current subprime debacle, to a more general information systems issue.
I think the UBS experience highlights the importance of bank examiners being able to evaluate the business lines of a bank the way banks do, or should, look at their business lines. That is, large financial companies are composites of perhaps a hundred lower-level lines of business, defined by a combination of currency, industry, instrument type, etc. The problem is, outsiders merely see top-line numbers for revenues and expenses, sometimes going down to lines like ‘trading’, ‘investment banking’, and ‘asset management’, which is still an incredibly high level of aggregation. Even regulators and rating agency representatives are given very high-level, and selective, data. When a disaster hits, it exposes a particular activity that was perhaps not given much thought: who knew UBS had $50 billion of US residential mortgages on its balance sheet?
The problem is that the most common information given out by banks is insufficient to see what is really going on. Assets, for example, are merely broken into large groupings based on Agency rating, maturity, and, for non-rated assets, whether they are performing or not. This is like assessing the health of someone by looking at their current body temperature and weight. VaR is applied selectively in a bank, leading to issues of adverse selection and moral hazard, as risky assets are selectively rationalized as being on the ‘banking book’ instead of the ‘trading book’, so that the final top-down number excludes the majority of actual risk, and a biased majority at that. Another problem is that credit risk is still very difficult to measure in the VaR framework, so that a given combination of collateral, obligor, and facility can have a very different risk profile depending on details that banks are not obligated to supply, let alone validate. When you try to find comparables, you are acting more like a historian than a quant, and so you should expect these measures to be off by a factor of 2.
This focus would put more weight on revenues than direct risk measures. Now, a ‘best practices’ approach to internal profitability reporting uses what First Manhattan calls a Net Income After Capital Charge approach (NIACC). This combines net income and return-on-equity into a single number useful for capital allocation and performance evaluation. Net income alone ignores risk, while ROE is indifferent to the size of the effect; the NIACC combines them (see here for how NIACC, or EVA, is related to income).
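For concreteness, here is a minimal sketch of how a NIACC number falls out of net income and allocated capital; the 10% hurdle rate and the business-line figures are made-up illustrations, not anything from First Manhattan:

```python
# A minimal sketch of a NIACC-style calculation. The 10% hurdle rate and the
# business-line figures are illustrative assumptions, not numbers from the post.

def niacc(net_income, allocated_capital, hurdle_rate=0.10):
    """Net Income After Capital Charge: income less a charge for the economic
    capital the business line consumes."""
    return net_income - hurdle_rate * allocated_capital

# Hypothetical line of business: $120MM of net income on $1B of allocated capital
line_income = 120e6
line_capital = 1e9

print(f"ROE:   {line_income / line_capital:.1%}")                  # 12.0%
print(f"NIACC: ${niacc(line_income, line_capital) / 1e6:.0f}MM")   # $20MM above the hurdle
```

The point of the single number is that a line can have a nice-looking ROE and still destroy value if it ties up too much capital, and vice versa.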
When I was head of capital allocation at a regional bank, I discovered two insights not mentioned in the risk management literature. First, profitability reporting was just as important as risk measures--for measuring risk! This is because you could pair the income generated with your bottom-up risk measure, and see if it was reasonable. Often, your risk measure was off by a factor of 3 because you forgot certain assets had guarantees, or had off-balance sheet exposures. Most of these errors weren’t subtle; they merely omitted a significant factor. Most complex problems, like examining the risk of a large financial institution, are not really fundamentally complex, merely detailed, and so getting good approximations was more a function of asking for the right set of details than working out the mathematics of a complex non-recombining lattice. Secondly, the aggregate numbers were pretty irrelevant. The bottom-up, aggregate required capital estimate was so dependent on uncertain assumptions about correlations, and the relation between our franchise value and the value of assets on and off our balance sheet, that it was rightfully ignored.
The unfortunate reality is that we can’t measure risk nearly as well as we measure revenue. While we should work to make these risk measures better, in the meantime, looking at metrics based partially on revenues is very informative. Indeed, the 7 major money center banks in the US had annualized 99% VaRs averaging about $1.8B in 2007, yet still managed to lose an average of $42B in market value over the past 12 months. This suggests that, as a top-down number, reported VaR is about as useful as knowing the CEO’s favorite color.
The main problem in UBS's business model was that the senior RMBS were funded improperly; if these securities had been seen as having negative carry, UBS would have evaluated all their mortgages more stringently. UBS applied the bank funding rate to all their activities, and so Aaa rated mortgage paper had a positive carry. There was an internal arb in the bank: almost any paper funded at such rates generates positive carry, and the infrequent (but massive) nature of credit losses, in combination with bonuses based on annual revenues, created an incentive to put these assets on the books.
The net result was therefore a combination of credit risk, market risk, and operational risk, as incentives led to an inefficient allocation of resources that ended in disaster.
If every bank showed its profitability reporting by line of business, an analyst would be able to flag these issues much better. That is, say there was a business group in charge of the RMBS on the bank’s balance sheet. Taking into account the capital applied to this business, and its funding rate, what was its NIACC? What was its ROE? If the bank reported that the NIACC, or Sharpe ratio, was significantly positive, one could note the asset class in question, and deduce something fishy that necessitated further inquiry. This is because such a large asset class simply does not have a large Sharpe ratio, and one could easily note this because no hedge funds warehouse large amounts of high-rated ABS. An assumption in the business model is clearly wrong, and given the amount of residential mortgages on the balance sheet, this error is significant.
One should expect financial institutions to have modest Sharpe ratios for the income generated by assets sitting on their balance sheets, because merely holding a common asset class simply does not generate a Sharpe much above 0.25. The average Sharpe ratio of the stock market is about 0.4, and this is the ‘equity premium puzzle’ precisely because it is so high. Thus, we should expect very few assets to be above this number, and if one is, assumptions need to be examined.
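A minimal sketch of that smell test, using fabricated line-of-business returns, might look like:

```python
import numpy as np

# A sketch of the examiner's smell test: compute an annualized Sharpe ratio for a
# line of business and flag anything well above what a plain asset class can
# deliver. The reported returns below are fabricated for illustration.

def annualized_sharpe(monthly_excess_returns):
    mu = np.mean(monthly_excess_returns) * 12
    sigma = np.std(monthly_excess_returns, ddof=1) * np.sqrt(12)
    return mu / sigma

# Implausibly smooth returns on allocated capital, exactly the pattern that
# should invite questions about funding, capital, or off-balance sheet exposure.
reported = np.array([0.020, 0.018, 0.022, 0.019, 0.021, 0.020,
                     0.018, 0.023, 0.020, 0.019, 0.022, 0.021])

sharpe = annualized_sharpe(reported)
if sharpe > 0.4:  # roughly the equity market's Sharpe ratio, per the text
    print(f"Sharpe of {sharpe:.1f}: too good for a prosaic asset class, dig deeper")
```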
There are confidentiality issues, in that some investments may have a proprietary business advantage, so it may be necessary to allow this informational reporting only for rating agencies and regulators, who sign nondisclosure agreements. The key is that specific net revenue and capital allocation figures can lead to better questions from those whom outsiders, like investors and taxpayers, rely on to monitor these opaque institutions. Linking revenues to risk allows you to understand the bottom-level risk numbers better, and find the landmines before they blow up.
At UBS, much of the growth in the Fixed Income group came from repackaging residential mortgages from the US into mezzanine (e.g., Ba rated) CDOs and reselling them. This generated fees of 125 to 150 basis points, compared to fees of only 40 basis points on senior tranches. But the 150 basis points on the mezzanine piece necessitated keeping 60% of the RMBS, the senior pieces, on their books. These assets supported great trading revenue, but if they were properly funded and assessed the appropriate capital, it would have been obvious that while 150 basis points is great business, the costs generated by warehousing the supporting collateral put a limit on how much of this stuff was optimal. Instead, the residual assets warehoused seemed to have a positive NIACC (equal to an above-hurdle-rate Sharpe), and thus at the margin added to the bank’s value, so there was effectively no limit to how much was optimal: as much as their mezzanine CDO group could sell. But whether these senior pieces were put in a separate business line, or kept within the group generating the restructuring fees, it would have set off red flags that a liquid and large asset class like RMBS was capable of generating significant alpha. Like taking 50 basis points out of a Treasury Bill trade, some familiarity with the spreads and returns of these assets suggests that some large tail risk is being assumed, because there just isn’t enough spread to generate this kind of profit without some error in transfer pricing or risk estimation.
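To see why proper transfer pricing caps the trade, here is purely illustrative arithmetic; the fee level and the 60% retained share are from the paragraph above, while the carry and capital-charge rates are hypothetical:

```python
# Illustrative arithmetic only: the fee level and the 60% retained share come
# from the post; the funding and capital-charge assumptions are hypothetical.
pool = 100.0                        # notional of RMBS repackaged
mezz_share, senior_share = 0.40, 0.60
mezz_fee = 0.0150                   # 150bp fee on the mezzanine CDOs sold

one_off_fee = mezz_share * pool * mezz_fee
warehoused = senior_share * pool    # senior pieces retained on the books

# At the bank funding rate the warehouse shows positive carry (say +10bp/yr);
# with a proper transfer price and capital charge it costs, say, 25bp/yr.
naive_carry   = warehoused * 0.0010
proper_charge = warehoused * 0.0025

print(f"one-off structuring fee:         {one_off_fee:.2f}")
print(f"warehouse 'income' as booked:    {naive_carry:+.2f} per year")
print(f"warehouse cost, properly priced: {-proper_charge:+.2f} per year")
```

Booked the first way, the warehouse looks like free money and there is no reason to stop growing it; priced the second way, the drag eats the one-off fee within a few years and caps the optimal size of the business.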
Clearly, they rationalized the positive carry on super senior RMBS via the fat fees on the CDOs, but this had deleterious spill-over effects as well. UBS put on tens of billions of Aaa-rated ABS for things like autos and credit cards based on the same, flawed, funding model. After all, one could say, if you are going to fund Aaa RMBS at positive carry, why not Aaa auto loans? And so, when credit spreads for all structured finance widened, UBS took a considerable hit there as well, losing about $4B, all in an investment that never made sense to begin with. The origin of this loss was operational risk at its core.
The conventional corporate bond puzzle is that Investment Grade spreads are too high.[1] The most conspicuous bond index captures US Baa and Aaa bond yields going back to 1919, which generates enough data to make it ‘the’ corporate spread measure, especially when looking at correlations with business cycles. Yet Baa bonds are still ‘investment grade’, and have only a 4.7% 10-year cumulative default rate after initial rating. As the recovery rate on defaulted bonds is around 50%, this implies a mere 0.23% annualized loss rate. Since the spread between Baa and Aaa bonds has averaged around 1.2% since 1919, this generates an approximate 0.97% annualized excess return compared to the riskless Aaa yield, creating the puzzle that spreads are ‘too high’ for the risk incurred.
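The arithmetic behind those numbers is simple enough to write down, using straight-line annualization of the cumulative default rate:

```python
# Reproducing the back-of-the-envelope numbers in the paragraph above.
cum_default_10y = 0.047    # 10-year cumulative default rate for Baa after initial rating
recovery = 0.50            # assumed recovery on defaulted bonds
baa_aaa_spread = 0.012     # average Baa-Aaa spread since 1919

annual_loss = cum_default_10y * (1 - recovery) / 10
excess_return = baa_aaa_spread - annual_loss
print(f"annualized loss rate:   {annual_loss:.2%}")     # about 0.23%
print(f"excess return over Aaa: {excess_return:.2%}")   # about 0.97%
```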
Though a puzzle, it would be a mistake for Aaa rated companies to actually assume this spread is investment alpha. The other two such return anomalies, the short end of the yield curve and the equity premium,[2] clearly do not imply one should take yield curve risk, or borrow to invest in equities. Public companies are not necessary for taking these bets, and so are an inefficient way to address them even if an investor thought they represented merely ‘behavioral biases’. We may not fully understand these particular risk premiums, but they are not ‘arbitrage’. Banks would be wise to fund internal activities at no better than the A rate, to avoid this kind of gaming. When UBS funded Aaa rated assets at a positive carry, this error was essential for supporting the wrong track they went down.
A well-run bank should have income tied to a capital allocation based on economic risk, at the business line level. Thus it should be able to provide this information. An examiner would then see the various Sharpes, for the various business lines, and their assets on the balance sheet, and see if these make sense. A very high Sharpe invites investigation. Was it funded correctly? Are there off-balance sheet liabilities? Is the VaR, or capital allocation, correct? By having this on a specific asset class, such as super senior residential mortgages, or fixed-receive swaps in the Treasury account, the analyst can address the issue. With a top-down VaR, one has no way of asking relevant questions about how the VaR was estimated, but rather general questions about VaR methodology that are not likely to be informative.
If a bank is putting a large amount of assets onto its balance sheet, or retaining exposure to off-balance sheet liabilities, there are two general paths to detect this: a bottom-up risk calculation, and a top-down revenue examination. Most bank businesses are not amenable to a reasonably precise economic capital estimation based merely on the asset’s characteristics, and economic capital algorithms are applied inconsistently in the current framework. Thus, the revenues from these businesses add important information, because in general we have good intuition on what reasonable risk-adjusted rates of return are, especially for prosaic asset classes like residential mortgages. A financial institution making exorbitant returns on these things implies some assumption within the bank is incorrect. Only through the obscurity of aggregation were these positions allowed to fester into the problems they became, and so at least regulators, and perhaps rating agency representatives, should be able to demand data on profits, regulatory capital, and economic capital at the lowest-level line of business. If a bank says 'we manage at the asset management level', allow the examiners to report in detail what they found. As shareholders learn of this, they will realize the bank is heading towards disaster, because you can't manage a bank at that level, so it either reflects ignorance or mendacity.
1. Chen, L.; D. Lesmond; and J. Wei, 2007, "Corporate Yield Spreads and Bond Liquidity," Journal of Finance, 62(1), 119-149.
2. Mehra, Rajnish, and Edward C. Prescott, 2003, "The Equity Premium Puzzle in Retrospect," in G.M. Constantinides, M. Harris and R. Stulz (eds.), Handbook of the Economics of Finance, Amsterdam: North Holland, 889-938. Backus, David; A. Gregory; and Stanley Zin, 1989, "Risk Premiums in the Term Structure: Evidence from Artificial Economies," Journal of Monetary Economics, 24, 371-399.
High School Essay, or Senatorial Initiative?
Recently, Amy Klobuchar was elected senator of the great state of Minnesota. Her main attributes are being very reasonable and not being a Republican. Her opinions are so hackneyed, inoffensive, and ultimately unworkable, I think it highlights that in order to lead a large organization, you have to be above average, but not too much. If overall competence could be summed up in IQ, an optimal 'leader' would be about 120. I imagine many of them love to read Bob Herbert's take on current events.
Klobuchar's bold solution to the energy crisis was written up in our smarmy Minnesota StarTribune, and typifies the infantile thought processes of our public servants. I don't think it's a Democrat thing so much (eg, McCain's gas tax holiday).
First, she notes John F. Kennedy called us to put a man on the moon, and we did it! Ergo, all complex problems are a matter of will. QED.
Her bold plan is multifaceted:
1) improve efficiencies
2) develop renewable resources
-A strong start. By taking this bold pro-efficiency initiative, she puts Republicans on the defensive. And renewables? Two words: cold fusion.
3) stop giving billions in tax giveaways to oil companies
-These giveaways are specially targeted for pet projects of congressmen to create jobs. When we realize these often merely subsidize business, it's embarrassing. Why not just pay people to dig holes, and others to fill them, and stop pretending these programs actually create stuff?
4) Electric cars
-All we need are outlets, which don't seem to create any pollution. And batteries. There has never been a real need for batteries before, but now that we know they are useful in cars, I'm sure we can design a powerful, lightweight battery. Then again, one would think that a better battery would be useful for cellphones and computers.
5) Raise the fuel-economy standards, saving $1000 per family in gas.
-Sounds great. Details, unimportant. I say, double it, and save $2000 per fam.
6) reduce speculation that drives the price of gas too high, by preventing futures traders from routing their trades through offshore markets.
-The old 'sand in the gears' approach. Clearly, if we made it harder to trade, prices would fall. Look at housing, that's an expensive asset to trade, and no bubbles there. Plus, reducing energy demand to protect the environment, in concert with low prices, is win-win.
7) cut down on price-gouging
-Everything should be sold at cost.
8) Don't purchase more gas for the Strategic Petroleum Reserve.
-Huh? I actually agree with that.
This is the kind of bold, nay, heroic, outside-the-box thinking that we need. Or at least, the kind that will make me not want to talk about it anymore. I imagine earnest high school seniors everywhere will note a surprising similarity between their own recent social studies essays and this senatorial missive. It just highlights the fact that even though an issue like energy involves a lot of science and statistical analysis, and complex general equilibrium effects involving incentives, moral hazard, and adverse selection, the practical discussion of public policy is pretty simple. Everything is a matter of will, and partial equilibrium analysis. Reduce demand. Lower prices. More money to technologies that don't pollute, as long as we exclude how such technologies are created (eg, electric cars).
Saturday, May 17, 2008
Sachs Refutes Easterly With Trenchant Anecdote
OK, I haven't read the book, but the NYT book reviewer notes the following:
In a particularly trenchant passage, he gently fillets critics, like William Easterly, who have argued that foreign aid doesn’t work. (Aid money spent bringing fertilizer to India in the 1960s, Sachs notes, yielded spectacular returns.)
Sheez. I hope there's more than this. Easterly notes that we've spent something like $1 trillion on Africa, with zero, if not negative, returns. And Sachs refutes Easterly's criticisms with a specific subsidy that worked. Well, with $1 trillion, I would hope he would have several anecdotes.
But the reviewer, Daniel Gross, reveals his hand when he notes:
And it’s refreshing to hear a distinguished economist declare that markets alone can’t get us out of the mess markets have created.
The old straw man that the other side says markets are sufficient to fix everything. No one who believes in market solutions thinks they are perfect, or that they don't require good institutions, which necessarily involve governments. It amazes me how incredibly dumb such people can be, yet write for the New York Times book review, surely one of the most prized positions in book reviewing. And people complain that our process for choosing a President is suboptimal.
Thursday, May 15, 2008
On a Scale of 1 to 10
I went to the doctor about my SLAP tear, and they asked me to rate my pain on a scale of 1 to 10. Now, it hurts quite a bit, especially when I don't take massive doses of ibuprofen. I say it hurts so bad I can't think (explains a lot, I know). The nurse asks, "Is that a 10?" I think, no, probably an 8. You see, I can imagine a 10 would be something diabolical, like burning the skin off my flesh with a blow torch, pouring honey on my wounds, and allowing rabid animals to lick and nibble at my charred, sugary flesh. Or scraping the back of my legs with a cheese grater. Or sticking a syringe down the back of my fingernails. Or pounding my toes with a hammer, and then methodically breaking my bones from one end to another while keeping my internal organs fully functioning. But if I say "8", they don't give me anything for the pain, so I lie and say, "yeah, it's a 10".
I remember some guys from an Irish bank came by to talk about risk management, and one guy said their little trick was to have scales that went from 1 to 4. That way, people couldn't put in the median number for everything. I remember how proud he was of this, and you know, for a middle manager in risk, it wasn't a bad idea. It was not a great idea, but it wasn't bad.
Wednesday, May 14, 2008
Betting with House Money
Many people are more risk loving when playing with 'house money' than when they have lost money. That is, if you go to Vegas, and win $1000, you are more willing to bet with that $1000 (the house's money), than when you initially came, or if you lost money. This is often explained via Prospect Theory, but the theory is really quite a kluge, and was meant to describe the following behaviors:
1. Very risk-averse behavior in gains involving moderate probabilities (concave for right-hand side)
2. Risk-loving towards small losses (convex for left-hand side)
3. Risk averse toward large losses (steeper for far left-hand side)
These facts are consistent with the graph of one's utility function at the upper right, where the y-axis is the reference point. But to make prospect theory apply to this 'house money' phenomenon, you need Cumulative Prospect Theory, because regular Prospect Theory says you should be risk averse with house money. In Cumulative Prospect Theory, if you have house money (ie, are up), you are in the convex portion of your utility function (convex utility=risk loving). Given the plasticity, and parochial nature of how Prospect Theory is applied (Cumulative here, regular there), I'm rather unimpressed with the power of this theory. I'm shocked at how many smart people think that this type of reasoning is going to generate lots of really useful insights, because it doesn't predict anything, and is always applied piecemeal.
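For reference, the value function usually drawn for that graph is the Tversky-Kahneman form; the exponents and loss-aversion coefficient below are their standard published estimates, not numbers from this post:

```python
# The standard Tversky-Kahneman (1992) value function often used to draw the
# curve described above. Parameters are their published estimates: the function
# is concave over gains, convex over losses, and steeper for losses.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

for x in (-100, -10, -1, 1, 10, 100):
    print(f"v({x:>4}) = {value(x):8.2f}")
```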
Assume you are playing blackjack. You are up $1000. Now you bet more aggressively. Why? Well, to say 'it's because you are now risk loving' isn't a theory, it's a description of what we know to be true. Rather, someone who gambles, and wins, thinks they have alpha. Now, if they aren't counting cards, or are playing slots or roulette where no alpha is conceivable, this alpha is delusional. But I still think many people think they have alpha in these circumstances, thinking they have invoked a 'lucky streak', or have somehow drawn favor with the gods of chance. Or, they might be counting cards, and thus truly have a 2% edge. The key is that being up, having house money, increases their estimate of having alpha, and when you have alpha, you should bet more aggressively.
Consider this in terms of a trader. You give him some capital and he applies his strategy. After 6 months, he is up big on the year. He wants to bet more. His boss wants him to bet more. Why? Because they both are more reassured he has alpha, and you give people with alpha more money. People winning big against the house have a higher probability of having alpha than someone just starting out, or someone who has lost money. Luck and alpha are always at work, but making money should always increase the probability it's alpha, not merely luck. Not to a probability of 1, of course, but higher than the unconditional odds. And this is true at every time dimension, leading to the phenomenon of betting more when you are playing with 'house money'.
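A minimal Bayesian sketch of that logic, with made-up numbers for the prior and the edge:

```python
# A minimal Bayesian version of the argument: each winning period should raise
# the posterior probability that the trader (or gambler) has alpha. The prior
# and the win probabilities are made up for illustration.

def posterior_alpha(prior, p_win_alpha=0.52, p_win_no_alpha=0.49):
    """P(alpha | a winning period) via Bayes' rule."""
    num = p_win_alpha * prior
    return num / (num + p_win_no_alpha * (1 - prior))

p = 0.10                    # prior probability of alpha
for period in range(1, 7):  # six consecutive winning periods
    p = posterior_alpha(p)
    print(f"after win {period}: P(alpha) = {p:.3f}")
```

The posterior creeps up with every win, never to 1, but always above where it started, which is all the 'house money' behavior requires.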
But back to dissing Prospect Theory. Huang, Barberis and Santos (2002) argue that Prospect Theory is applicable to the changing expected return in stocks, in that after a market rise people are less risk averse, while after a fall they are more risk averse. If after a market rise people are less risk averse, they should accept, and in equilibrium will receive, a lower expected return. This is a theory of endogenous cyclicality. Seems reasonable. But then consider an outstanding puzzle in cross-sectional asset pricing: momentum, where stocks that have risen over the past year tend to rise more than average the next year, and the same persistence holds for underperformers. If 1 year is the relevant horizon for investors in the market in aggregate, why is it contradicted when applied to individual stocks? Further, if after a good run-up in stocks, such as in 1999, people are less risk averse and supposedly tolerate lower expected returns as a result, why do they expect returns on stocks to be higher after run-ups (Sharpe and Amromin)? In this model, they should expect a lower return. It’s like whack-a-mole--fix a problem here, create a new one there. No one applies Prospect Theory consistently, just to little issues here and there. That's not theorizing, that's describing.
Tuesday, May 13, 2008
Skeptic Magazine Pokes Holes in Global Warming Models
Lubos Motl, the Czech string theorist and blogger, noted that in Russia they don't really care about global warming, and Russian scientists are not stupid. He thinks this is because they aren't influenced by Western political correctness, a nice contrast to the politically influenced Russian scientists during the Cold War. And of course it's not just political correctness: the US gives about $2B a year to study global warming, which means a lot of objective scientists are paid to basically come up with studies that support the hypothesis. That's a lot of money, and it's naive to think this doesn't influence the findings.
I think that since 85% of Americans believe the standard global warming story (it's anthropogenic, and threatens the Earth), given the sheer stupidity of the average American, that's reason enough to be skeptical. Consider that only 75% of Americans believe OJ killed his wife, and only 67% of Americans believe the US was NOT involved in 9/11. That kind of majority for a much less obvious proposition indicates the 'facts' are not dominating the public debate; isn't OJ's guilt more obvious than global warming? That is, the models that generate the scary anthropogenically caused global warming scenario are so complicated, how could so many Americans be so certain?
Part of it is the simple story. CO2 is a greenhouse gas. Cars and such create CO2. Ergo, more CO2, more warming. It's like Y2K, which was so plausible because the story was simple enough to explain to a 60 year old executive (who had never coded in his life). But CO2 is only 0.036% of the atmosphere, and that is only 15% of the greenhouse gases (water vapor and methane being the biggest). As humans only produce 2% of the CO2 annually (most is from water evaporation), I think it's reasonable to assume that the natural variability of CO2 makes it difficult to assert that human activity is directly causing the temperature increase. After all, historically, you can't really infer from ice core samples whether the CO2-temperature relation is driven by the temperature causing CO2 to rise (say, via a Milankovitch cycle), or vice versa. Further, there is the natural variability in the other greenhouse gases. For example, a one percent change in water vapor does the same thing as doubling the carbon dioxide in the air, and water vapor can vary by a factor of 2 day to day, so I'm sure it varies at the scales that matter for these doomsday scenarios. Then there's the CO2 mechanism. CO2 absorbs all radiation available to it within 10 meters. More CO2 only shortens the distance. Either way, anything that can be absorbed by CO2 is already being absorbed.
There are lots of positive and negative feedback loops in an ecosystem like the Earth. Given the persistence of life over hundreds of millions of years, one should think there are many more negative feedback loops (dampening effects), because if we were always on this unstable inflection point, we should see massive extinctions every million years. But the global warming alarmists just assume that the system is mainly full of positive feedback effects (Wikipedia mentions 7 feedback loops; 6 are amplifiers).
Anyway, Skeptic Magazine has a neat article on Global Warming, and they note that the current global warming models greatly understate their uncertainty, because the reported standard errors refer merely to the models' residual errors, assuming the models themselves are true. That is, they include no parameter uncertainty or functional form uncertainty. The author estimates that the uncertainty in the part of the model that addresses cloud cover alone generates a 100 degree uncertainty band over 100 years. That's right, 100 degrees. With this much uncertainty inherent in these models, for this one aspect, one could reasonably assume these models are pretty worthless for long-run forecasting.
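The underlying statistical point, that quoting only residual error understates long-horizon uncertainty, can be illustrated with a toy simulation; nothing below is a climate model, and every number is an assumption:

```python
import numpy as np

# Illustration of the statistical point only (this is not a climate model): when a
# model reports just its residual error, the long-horizon band looks much tighter
# than it does once parameter uncertainty is included. All numbers are assumptions.
rng = np.random.default_rng(0)

years = 100
trend_mean, trend_sd = 0.02, 0.01   # assumed warming per year, and uncertainty about it
resid_sd = 0.10                     # the model's own per-year 'standard error'

trend_draws = rng.normal(trend_mean, trend_sd, size=100_000)
paths = trend_draws * years + rng.normal(0, resid_sd * np.sqrt(years), size=100_000)

band_resid_only = 2 * 1.96 * resid_sd * np.sqrt(years)
band_full = np.percentile(paths, 97.5) - np.percentile(paths, 2.5)
print(f"95% band, residual error only:        {band_resid_only:.1f} degrees")
print(f"95% band, with parameter uncertainty: {band_full:.1f} degrees")
```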
Indeed, to give an example of the uncertainty in the system they are modeling, consider that global temperatures have increased by 0.6°C this century. How much is due to human-based CO2? Maybe 0.2°C. Now compare that to natural variations. Seasonal variations are typically 20°C. Localized effects vary wildly between the equator and poles. Then there are random variations in such things as the Gulf Stream in the Atlantic Ocean which heats Europe, and El Nino which affects the US. But supposedly, out of that maelstrom, climate scientists can deduce the human-created CO2 is sufficient to take us to a tipping point. It's like saying, if you eat that French Fry, you're going to die. Uh huh.
Models are maps of reality, not reality. Often they capture a select subset of what is going on, and so have little overall relevance. For example, the 7 major money center banks in the US had annualized 99% VaRs of about $1.8B in 2007, but still managed to lose an average of $42B in market value over the past 12 months. Now, either their models were really wrong, or, more likely, they weren't that bad, they just excluded a lot of really important stuff (you should read the fine print by the asterisks). Such models are often useful in piecemeal application, but until one can demonstrate realistic, out-of-sample correlations on meaningful applications, they should be applied and discussed piecemeal.
Monday, May 12, 2008
VaR, Capital, and Default Rates
I was reading Jorion's book Value at Risk, and was struck by the little note where he pointed out that if you want to target, say, a BBB default rate, you merely find the capital sufficient so that the probability of losses exceeding it equals an annualized default rate of 0.3% or so (the BBB annual default rate). I see this mentioned a lot, and I think it is pretty silly. VaR has its purposes, but setting top-down capital allocations is not one of them.
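To be concrete about the mechanics in question, the mapping looks roughly like this under an assumption of normally distributed P&L and a made-up earnings volatility:

```python
# A sketch of the mapping being described: choose capital so the one-year
# probability of losses exceeding it equals a target default rate. The normal
# P&L assumption and the volatility figure are illustrative, not from Jorion.
from scipy.stats import norm

target_default_rate = 0.003   # roughly a BBB one-year default rate
annual_pnl_vol = 500e6        # hypothetical one-year earnings volatility

capital = -norm.ppf(target_default_rate) * annual_pnl_vol
print(f"capital for a {target_default_rate:.1%} target default rate: ${capital/1e6:,.0f}MM")
```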
My problem here is that he implicitly assumes people are looking at the issue as a Master Trust rather than as a set of static pools of debt. In a master trust, you have a sequence of new business coming on, and so the value of the master trust, if ever below liabilities, implies default. But no businesses have their present value calculated like a master trust. Instead, it is like a static pool: assets that are marked to market are not the present value of the future strategy, but merely the value of the particular assets on the book at this time. You can mark to market various vintages of assets: those put on in various months, each with its own vintage. The present value of this strategy is not a direct function of those static pools, because you change your underwriting strategy over time, often because over time a strategy becomes obsolete. Even a true Master Trust might have endogenously shifting underwriting standards.
Or to use a different example, let's say I were to buy low volatility assets with positive momentum. This is what I did in the Falken Fund from 1996-2001, which outperformed the S&P by 10% per year, and which Telluride tried to get me to never do again (one would think prior use allows one to use this idea going forward, but this might take a jury of my peers). Now let's say this strategy of longs had a mark-to-market below the value of my liabilities. Does that mean I'm bankrupt? Only if I am stuck with these specific longs forever. In that case, my current negative NPV means my strategy's NPV is also negative. But if I can adjust my algorithm, this just means that iteration of the strategy is a loser. Or, maybe this is only a temporary setback for my current strategy applied to this particular set of longs. When I throw on the next batch of new longs, it might work in the next period, and I might use a new variation that throws out longs beginning with the letter X.
The key is that one never marks to market a strategy, but merely its current constituents. It is not the algorithm itself that is being marked to market, because it is always in flux. Thus, there is really no necessary relation between bankruptcy, the Value-at-Risk number, and capital.
Missed Diagnosis
I am currently in a lot of pain because I tore my labrum, a cartilaginous structure that acts as the cup for the upper arm in the shoulder socket. For weeks I have had this vague but excruciating pain, and saw various specialists, all useless. The pain just kept getting worse, causing my right triceps and lat to twitch like in the seminal experiments of Luigi Galvani using electricity to prod a frog gastrocnemius. A chiropractor thought that after an untold number of visits the pain would disappear ('one leg is shorter than the other!'). After two weeks and no relief, I stopped going. Massage felt good at the time, but the pain and range of motion were unaffected. My various doctors basically gave me the 'back-shoulder pain' Google slap: try ice then heat, aspirin, Aleve, ibuprofen, rest. Physical therapy was also pointless, various simple motions that did nothing. It was only after an MRI found the SLAP tear that I feel a solution is becoming feasible.
Another problem was that ibuprofen only generates the desired effects when taken in mega-doses: 800 to 1200 mg. Like trying to get drunk on one beer, the standard two tablets (400 mg) don't give you any relief. Taking 1000 mg three times daily has made a world of difference.
I think it's a common issue. You have a vague problem. Experts all mention that if you repeat their prescription, in the long run the problem will be solved. In the long run, 'this too shall pass', so everything will be fixed by time, in that you will either die or the problem will go away of its own accord, but their ministrations are not part of that solution. And then, when going for the solution, you need to do it balls to the wall, not piecemeal. Piecemeal is for when you are dealing with something very powerful. Modern drugs are basically created so that a 110 lb person who's allergic to everything won't have a bad reaction and sue you. Thus standard dosages are useless, though it's kind of scary to take three times the recommended dose.
An expensive, pointless lawsuit with trade secrets yet 'to be defined' after 15 months; having to construct a case that affirms my right to use off-the-shelf programs that do mean-variance optimization, use insights from my PhD dissertation, or else vet all my new ideas with Telluride Asset Management in perpetuity (with full rights of refusal, of course). Now a labral tear. What's next, boils? They should just send me to prison, where I could get free medical care and recuperate via all the time I would have in my 6x12 cell doing PT. In a prior life, I must have pulled wings off butterflies.
Sunday, May 11, 2008
Housing Prices Over Past 20 years
Above is a scatter plot of annualized excess geometric average returns (above the Fed Funds rate) against annualized standard deviation of returns, for housing prices using the Case-Shiller index. The index has monthly observations back to 1987, for 17 municipalities (some start in 1991), for low-, medium-, and high-priced sectors. Thus, every observation is a sector/city pair over the 20-year sample.
As you can see, the average annual return is about 0.5%. That's the annual excess return. After expenses, which for houses include property taxes of about 1-2%, plus broker fees (about 6% to sell), I think it's safe to say the returns are indistinguishable from zero. Residential real estate has generally been thought a superior investment over the past 20 years, but I think this is mainly due to selective anecdotal inference.
There does seem to be a small volatility premium, in that the more volatile series had slightly higher returns; overall volatility was low, consistent with intuition.
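For anyone wanting to reproduce the scatter, here is a minimal sketch of the computation for one series (the series names are hypothetical; you have to supply a monthly Case-Shiller sector/city index and a Fed Funds series yourself):

    import numpy as np
    import pandas as pd

    def excess_geo_return_and_vol(prices: pd.Series, fed_funds: pd.Series):
        """prices: monthly index levels; fed_funds: annualized rate as a decimal, same dates."""
        r = prices.pct_change().dropna()                  # monthly housing return
        ex = r - fed_funds.reindex(r.index) / 12          # subtract the monthly opportunity cost
        ann_geo = (1 + ex).prod() ** (12 / len(ex)) - 1   # annualized excess geometric return
        ann_vol = ex.std() * np.sqrt(12)                  # annualized standard deviation
        return ann_geo, ann_vol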
Here we see a housing price index from Case-Shiller, and a housing price index that subtracts the opportunity cost, as reflected by the Fed Funds rate, labeled the Tot Ret Index. The main reason our intuition about home prices is much rosier than reality is that the returns that seem so certain include a long stretch in the nineties where the cumulative return was minuscule while Fed Funds averaged about 5% annually. A benchmark is essential because, like an equity fund manager making money in 1999, if you ignore the opportunity cost, many investments look great when in fact they were not.
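The excess index is just the price index deflated by cumulated Fed Funds; a minimal sketch, again with hypothetical series names:

    import pandas as pd

    def excess_index(hp: pd.Series, fed_funds: pd.Series) -> pd.Series:
        """hp: monthly index levels; fed_funds: annualized rate as a decimal, same dates."""
        funding = (1 + fed_funds / 12).cumprod()        # growth of cash held at Fed Funds
        rel = (hp / hp.iloc[0]) / (funding / funding.iloc[0])
        return 100 * rel                                # housing relative to cash, rebased to 100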
But what is truly amazing is that we see the kind of improbable event that is all too common in financial time series. Prices have fallen about 20% cumulatively over the past year and a half (these are excess returns). Thus, from a standard deviation perspective, it was one of those five-sigma events (annualized vol was about 4%). Fat tails are part of almost every financial time series.
Friday, May 09, 2008
Concentrated Bets are Risky
Bloomberg has an article about how some star investment managers are having really bad returns over the past year. John Wood's SRM Global Fund lost about 70 percent through March 31. That's a drawdown. Pirate Capital had $2B under management, and 50% of its holdings in only 4 firms. Now, clearly some of these guys were simply lucky in the past, and taking large, undiversified bets that scored big made them appear like geniuses. Others got more money than they could invest, and had to change their investment focus from looking at certain subtleties of the business model to industry bets, which is such a different approach that it requires a separate skill--one that even fewer people have.
Indeed, I worked for a convertible bond desk, and was struck by the fact that for a large convert book, say over $1B long, you have so many positions that stock selection alpha is necessarily very small. The only way to outperform in a big way is to take industry bets, and these are, like macro forecasting, invariably alpha-less. You can still have edge managing a convert desk, but for any large book, looking at individual tilts, you can't add more than a percent to your returns this way. A better, scalable focus is simply to be a patient trader, willing to act as a liquidity provider, so that you are usually buying at the bid and selling at the offer. When your I-bank calls and wants to dump $50MM of an issue, you are there, but at a price discount. The edge there is more certain, and larger, than picking the better companies. Thus, an analyst at a desk may be great at picking a handful of converts, but a lousy portfolio manager because he does not understand this, while a portfolio manager clueless about relative value, but a disciplined trader and tough negotiator, may be a great portfolio manager (though invariably he markets himself as a convert selector, making for a marketing cluster-puck).
Many of these ex-superstars, I think, are merely having bad luck. Lampert's ESL Investments Inc. dropped 27 percent last year and an additional 1.3 percent in the first three months of 2008. Sounds bad, but Lampert has reportedly produced an average annual return of almost 30 percent since 1988. If that's true, he's got alpha. But it appears his skill is in evaluating specific turnaround deals, and so his edge is necessarily part of a volatile strategy. There is simply no way to take a dozen or so insights and lower the vol to the 10-12% that most hedge funds market to investors. I don't see a problem with that. Not every hedge fund will have an edge in trading systems, like Renaissance or DE Shaw, which are able to produce a minor edge via their massive technology investment, which is necessarily a well-diversified strategy. Most investors with alpha, in fact, are like Eddie Lampert. They can identify stocks based on some qualitative insight, and such insights can only be made over a handful of companies, because reading the annual reports, listening to the executives describe their business models, knowing the set of competitors, or even getting involved with management all takes a lot of time, which is a limited resource no matter how rich you are. This does not scale, so one should accept the fact that big losses are going to happen.
Nevertheless, a loss is still a bad signal, and given a Bayesian prior where one puts some weight on the probability that a manager no longer has, or might never have had, alpha, a bad return should imply some rational investment outflows.
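As a minimal sketch of that Bayesian logic (every number here is assumed): start with a prior probability that the manager has alpha, and update it after a bad year.

    from scipy.stats import norm

    p_alpha = 0.30                   # prior: probability the manager truly has alpha
    ret = -0.27                      # the observed annual return

    # assumed annual return distributions: 10% mean if skilled, 0% if not, 20% vol either way
    like_skilled = norm.pdf(ret, loc=0.10, scale=0.20)
    like_unskilled = norm.pdf(ret, loc=0.00, scale=0.20)

    posterior = p_alpha * like_skilled / (p_alpha * like_skilled + (1 - p_alpha) * like_unskilled)
    print(f"P(alpha) drops from {p_alpha:.2f} to {posterior:.2f} after a -27% year")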
Tuesday, May 06, 2008
Theories of Everything
Well-documented empirical puzzles are great for science, because they expose opportunities for refinement and extension. For example, the famous Michelson-Morley experiment showed that the 'aether' was a suspect concept, and Einstein's special relativity, which introduced the idea that the speed of light was constant, could explain this anomaly (and the anomaly implied by Maxwell's equations on a similar point). The heat of the earth was an anomaly, because Lord Kelvin calculated that the Earth was only 20 million years old; this was explained via the discovery of radioactive decay in the earth's interior. When Alexander Fleming saw that Penicillium mold caused his petri dishes of bacteria to have little dead spots, that was a puzzle.
So in finance, we have a bevy of anomalies, and a bold paper by Xavier Gabaix intends to solve them all in one fell swoop. Specifically, the anomalies:
1 equity premium puzzle (too high)
2 risk-free rate puzzle (too low)
3 excess volatility puzzle (equity prices are too volatile relative to consumption or dividends)
4 value-growth puzzle (stocks with high price-dividend ratios have abnormally low future returns)
5 yield curve too upward-sloping at the short end (overnight to 2 years)
6 Fama-Bliss findings that a higher slope of the yield curve predicts higher risk premia on bond returns (a steep curve is good for stock and bond prices)
7 corporate bond spread puzzle (the spread between Baa corporate and government bond rates is too high)
8 counter cyclical equity premium (lose money in bad times)
9 characteristics vs covariances puzzle (simple numbers such as the price-dividend ratio of stocks predict future returns better than covariances with economic factors)
10 predictability of aggregate stock market returns by price/dividend ratios, and the consumption-asset ratio has explanatory power for future returns
11 high price of deep out-of-the-money puts, and, last but not least,
12 forward exchange rate premium puzzle (uncovered interest rate parity violated).
OK, what's the trick? Basically, extreme but improbable events, for both inflation and stock market moves. All it takes is to put in a large, one-in-a-hundred-year adverse event, and these anomalies are rationalized. Of course, with only about one hundred years of data, we often don't observe these events, but that proves nothing. They could happen, and indeed if we look cross-sectionally at other countries, they do (e.g., Russia, Japan, Poland, Germany).
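A minimal sketch of the peso-problem mechanics behind that argument (all parameters assumed): price in a roughly one-in-a-hundred-year crash, and most century-long samples will never contain it, so measured premia look too generous relative to the true expectation.

    import numpy as np

    rng = np.random.default_rng(2)
    p_crash, crash_ret = 0.01, -0.50          # assumed: 1% annual chance of a 50% crash
    normal_mu, vol, years, sims = 0.06, 0.15, 100, 10000

    crashes = rng.random((sims, years)) < p_crash
    rets = rng.normal(normal_mu, vol, size=(sims, years))
    rets[crashes] = crash_ret

    true_mean = (1 - p_crash) * normal_mu + p_crash * crash_ret
    no_crash = ~crashes.any(axis=1)
    print(f"true expected return: {true_mean:.3f}")
    print(f"share of 100-year samples with no crash: {no_crash.mean():.2f}")
    print(f"average measured return in those lucky samples: {rets.mean(axis=1)[no_crash].mean():.3f}")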
My criticism of this approach is that it leaves many of the puzzles unaffected. For example, if this explains the equity premium, why is it that when you create a synthetic equity position by going long 1.5-beta stocks and short 0.5-beta stocks, you have a negative expected return? Why does the small chance of large inflation explain the overnight-to-2-year yield spread, but not the flatness of the curve from 2 to 30 years? I don't think a solution incompatible with these data is correct.
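To spell out the construction in that example (simple CAPM arithmetic with an assumed market premium, not a backtest):

    beta_long, beta_short = 1.5, 0.5
    net_beta = beta_long - beta_short            # dollar-neutral long/short: net beta of 1.0

    market_premium = 0.05                        # assumed equity premium
    capm_expected = net_beta * market_premium    # CAPM says this position should earn the full premium
    print(f"net beta: {net_beta}, CAPM-implied excess return: {capm_expected:.1%}")
    # the puzzle: empirically such high-minus-low beta positions have earned roughly zero or less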
My own theory, which has been called obvious, wrong, and illegal, is here: risk is not related to return. This is an implication of people investing relative to 'the market', so every risk-taking position (a deviation from the market) is like idiosyncratic risk, avoidable, and so unpriced. That's my story, and I'm sticking to it.
Monday, May 05, 2008
Who is James Hansen?
I was watching Jeffrey Sachs give a lecture on global poverty, and I was amused when he noted the foremost scientist on global warming is...James Hansen [note: not my martial arts instructor Jim Hanson, who is 6 foot 3, weighs 250, can kill you with his left pinky, and also generally carries a knife and has a concealed carry permit--talk about a belt and suspenders approach]. This guy rocketed to fame because he alleged the Bush administration censored him, a strange claim from someone on 60 Minutes (could you get a bigger platform?). But anyway, he's trained as an astrophysicist. He's clearly smart, and understands the second law of thermodynamics, but I don't think this makes him the foremost expert on earth's climate. I would think oceanographers, or people more directly studying these things, would be. I guess it's no different than the fact that Krugman and Stiglitz are experts at very stylized models of information asymmetries, and all of a sudden their views on immigration and bank regulation are considered expert.
Regulations Killed by Secondary Objectives
Avinash Persaud's suggestion of a bank tax tied to the 'price of risk', Tyler Cowen's mention of better regulation of derivatives, and Joe Stiglitz's idea that the UN take over financial regulation all, I think, miss a very practical problem: even when regulation has a good objective, it gets overloaded with other objectives. For example, we want to lower our energy dependence and lower carbon emissions, and help US farmers! So we are building massive infrastructure for ethanol. When everyone figures out what a nightmare it is in terms of total costs, environmental costs, and the impracticality of this lousy alternative (low octane, not storable, corrosive), it will have to be written off on the taxpayer's dime.
I went to a baseball game Saturday night. Minneapolis's public transportation has several objectives, one of which is obviously to save on traffic and fuel. But there's also the hope that people will work, live, and spend more money in cities, which social planners think is great. So the train makes 13 stops before getting downtown, hoping you stop along the way and spend money at the quaint, uh, strip malls in the city. Further, you don't want to disrespect the vibrant city dwellers with express routes. So guess what: no one uses the train, because it stops every mile and takes twice as long as driving. It's basically free ($1 for a round trip downtown), and people only use it for big games. It could be very valuable, but given that every major public action has multiple objectives, they take something that could be good and make it worthless.
Any proposed regulation should be expected to become saddled with myriad other objectives (helping address the root causes of poverty and other inequities), creating a solution worse than doing nothing. Like ethanol as a solution to our energy problems.
Sunday, May 04, 2008
Bank Value-at-Risk Reporting
I'm a big Value at Risk fan. As a method to report risk for a desk with diverse financial factors, while accommodating fat tails, it is unsurpassed. Many of the criticisms are of deficiencies 'within the box'. Indeed, many claim that VaR assumes normality, but nothing could be further from the truth. A particular modeler can assume normality, but he shouldn't. And because VaR focuses on the tail--usually a 95% or 99% event--it teases out nonlinear exposures that are not shown in things like standard deviations. So it isn't ignorant of nonlinear exposures and extreme events; it was designed with them in mind. Most importantly, it replaced the old method of reporting, which was a myriad of stress tests particular to various products, such as 5% moves in currencies or 100 basis point shifts for bonds, and put them into a standard, common denominator. RiskMetrics was the pioneer of this risk reporting, and they continue to put out a great deal of common sense on this topic.
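As a sketch of that 'designed with them in mind' point (all inputs assumed, with Black-Scholes used only to reprice the option): full revaluation of a short-put position over a set of historical-style scenarios, reading the 99% VaR straight off the empirical loss distribution, so neither normality nor linearity is assumed.

    import numpy as np
    from scipy.stats import norm

    def put_price(S, K, r, sigma, T):
        """Black-Scholes European put, used here only for revaluation."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

    rng = np.random.default_rng(3)
    scenarios = rng.standard_t(df=4, size=1000) * 0.01   # stand-in for 1000 historical daily returns (fat-tailed)

    S0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.25, 0.25
    p0 = put_price(S0, K, r, sigma, T)

    # full revaluation of a short put under each scenario; no delta-normal approximation
    pnl = -(put_price(S0 * (1 + scenarios), K, r, sigma, T - 1 / 252) - p0)
    var99 = -np.quantile(pnl, 0.01)
    print(f"1-day 99% VaR of the short put: {var99:.2f} per contract")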
But there are two large holes in VaR. First, banks are reluctant to apply it to nontraded assets, because banks traditionally treat assets in their 'banking book' differently than those in their mark-to-market book: the former are kept at book value, the latter are marked to market quarterly. Secondly, credit risk is very cyclical, so you have several years, if not decades, of low or zero defaults, but then a big boom in defaults. In this case, your horizon should really be not days but years, and instead of using historical data for a particular asset class, you should use a variety of similar products. In a sense, estimating credit risk is like estimating country default risk: you don't look at the daily time series of, say, Greek defaults (zero, every day this century!) to estimate their future default rate. Instead, you look at a panel of data that has both a cross-sectional (many countries) and a time-series aspect to it.
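A minimal sketch of the panel point (the counts are made up): pool defaults across many countries and years rather than staring at one issuer's own spotless history.

    # made-up panel: (country, years observed, defaults observed)
    panel = [("A", 30, 0), ("B", 30, 1), ("C", 25, 0), ("D", 40, 2), ("E", 20, 1)]

    total_defaults = sum(d for _, _, d in panel)
    total_country_years = sum(y for _, y, _ in panel)
    pooled_rate = total_defaults / total_country_years
    print(f"pooled annual default rate: {pooled_rate:.2%}")   # versus 0% from any single clean history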
But for a desk, it is a great way to get a 'little' top-down view. That is, if you are a very parochial desk, trading, say, convertible bonds, you have a handful of sensitivities you monitor: yield curve shifts, twists, credit spread shocks, implied volatility shocks, realized volatility shocks, stock market shocks. If you have a handful of these desks, VaR sums them up and allows you to see, day to day, whether any significant new exposures were put on. But it only aggregates so well. The Value-at-Risk numbers reported in major banks' annual reports are farcical. The big 7 US money center banks (Goldman, Citi, etc.) show an average daily VaR (99%) of about $120MM, which annualizes to about $1.9B. But the average market value decline in those same banks over the past 12 months has been about $42B, a 22-fold difference. A seemingly reasonable interpretation, that VaR measures a once-in-100-years scenario as applied to the bank, is clearly not reasonable, not by a long shot.
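The back-of-the-envelope behind those numbers (square-root-of-time scaling, which itself assumes i.i.d. daily P&L):

    import numpy as np

    daily_var = 120e6                        # average daily 99% VaR across the big 7, per the annual reports
    annual_var = daily_var * np.sqrt(252)    # roughly $1.9B
    avg_market_value_decline = 42e9
    print(f"annualized VaR: ${annual_var / 1e9:.1f}B, loss-to-VaR ratio: {avg_market_value_decline / annual_var:.0f}x")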
But of course, if you read the fine print, you'll find that the VaR was actually pretty good. The problem is that banks tend to apply VaR only to their market-making operations, and so, in the recent case, all those subprime mortgages were simply off the radar screen. Quite the asterisk. Further, given that most market-making operations are more like selling widgets than speculating on risk factors, these activities generate insanely high P&L/VaR numbers, so if the annualized trading revenue is $20B and the annualized VaR $2B, the ridiculous Sharpe ratio implies that the VaR doesn't matter. Double the VaR, and the bank would do exactly the same thing, because it's like an internal ROE of 56%--well over the hurdle rate. VaR is not constraining, or directing, capital. It is merely measuring incidental risk taking by a select subset of the inventory. It's like measuring the cost of electricity at headquarters--not without interest, surely better minimized, but generally not a strategic issue. Indeed, if you are going to exclude the major risks from a top-down number, it is misleadingly irrelevant. Mid-level management should know the VaR of trading operations, to monitor significant changes to their books, but until banks include most, if not 'almost all', risks within a VaR-type approach, they shouldn't even report a top-level VaR.