Tuesday, January 29, 2013

The Signal and the Noise Seems Better Than It Is

I read The Signal and the Noise because I got two copies as Christmas presents. It was an enjoyable read, but like popcorn: I liked it while reading, yet afterward I wasn't very satisfied.

There were some neat anecdotes, like the one about how the McLaughlin Group's forecasts came out about 50-50, but I didn't think this meant those guys were clueless so much as that McLaughlin was basically asking for opinions on questions where the true probability is about 50-50. As Steve Sailer notes, no one wants to hear depressing statistics like that the murder rate in Detroit will be higher than in the suburbs, or that per capita income in Somalia will be lower than in Estonia in 2010, or lots of other really important facts about the future, because we take those for granted. Instead, we want to know whether iPhones will be more popular than Samsung phones, which objectively is closer to 50-50.

Then there was the section on poker, and what I found most interesting was that to capitalize on your skill you need to play an ungodly number of hands. I know some people play a couple of hours of Texas Hold'em every night, and I happily concede that skill to them. It just isn't that interesting to me.
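A back-of-the-envelope simulation shows why the number of hands matters (the per-hand edge and variance below are my illustrative assumptions, not numbers from the book): even a genuinely skilled player needs tens of thousands of hands before the edge reliably shows through the noise.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical numbers: a skilled player wins 0.05 big blinds per
    # hand on average, with a per-hand standard deviation of 5 big blinds.
    edge, sd = 0.05, 5.0

    for n_hands in (1_000, 10_000, 100_000):
        # Total winnings over n_hands are approximately normal with
        # mean n*edge and standard deviation sqrt(n)*sd (CLT).
        totals = rng.normal(n_hands * edge, np.sqrt(n_hands) * sd, size=1_000)
        frac_ahead = (totals > 0).mean()
        print(f"{n_hands:>7} hands: {frac_ahead:.0%} of skilled players are ahead")

Under these assumptions, roughly 60% of skilled players are ahead after 1,000 hands, but nearly all are ahead after 100,000, which is why the pros grind so many hands.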

In weather forecasting there are structural models that look at where weather is now and how it's moving, and forecast the interaction. These work well for about 8 days, but beyond that, long-term averages like those in the Farmer's Almanac work better. In economics, simple vector autoregressions, which are basically regression-to-the-mean models, beat long-term averages for about a year, and then the long-term averages dominate. Structural models are worse at both the short and the long horizon. This should give one pause when listening to a macroeconomist, and indeed their stock as public pundits has depreciated quite a bit since the 1970s heyday.
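To make "regression-to-the-mean model" concrete, here is a minimal sketch (the data and parameters are invented, and this is a one-variable AR(1) rather than a full vector autoregression): the forecast decays geometrically from the last observation toward the long-run mean, which is why beyond a few steps it is indistinguishable from just using the average.

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up quarterly GDP growth series that mean-reverts toward 2.5%.
    mu, phi = 2.5, 0.6
    y = [2.5]
    for _ in range(199):
        y.append(mu + phi * (y[-1] - mu) + rng.normal(0, 1.0))
    y = np.asarray(y)

    # Fit AR(1) by least squares: y[t] = a + b * y[t-1].
    b, a = np.polyfit(y[:-1], y[1:], 1)

    # The h-step forecast decays toward the long-run mean a / (1 - b);
    # past a few quarters it collapses to the unconditional average.
    last, long_run = y[-1], a / (1 - b)
    for h in (1, 4, 8):
        fcast = long_run + b**h * (last - long_run)
        print(f"h={h}: forecast {fcast:.2f}  (long-run mean {long_run:.2f})")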

For example, when I joined a bank as an economic analyst around 1987, the economists all remembered when their departments had a whole floor of economists generating forecasts on everything (employment by sector in Modesto, California; PPI by commodity type; interest rates over the next 5 years). By then, the group was down to about 8 economists, and now most big banks have one economist, who is mainly used for PR to put on CNBC; the internal decision makers don't even pretend to listen to him. (At Moody's we had an economist too, and that position is still strangely respected, but I don't remember anyone once mentioning his work in internal discussions.) We had debates between monetarist models and Keynesian models, and they were all no better than guessing.

I would have liked some inside scoop on his baseball model, which seems to have been pretty good, but I guess he figured that was too much inside baseball. As for his election model, it's kind of interesting that he simply averaged polls, a pretty straightforward approach. Most things that work are like that: so simple they don't seem really profound. It would be nice if the key to forecasting were something really tricky like a Kalman filter or a 100-equation macro-model, but I know these don't work because I've seen them fail first hand. Nonetheless, one often meets someone with more education than experience who thinks their new technique will work, and if it's complicated enough, they'll never have to admit it doesn't.
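Poll averaging really is that simple. A minimal sketch (the polls below are invented, and Silver's actual model adds weights for things like recency, sample size, and pollster house effects):

    # Invented polls: (candidate_share_pct, sample_size)
    polls = [(51.0, 800), (49.5, 1200), (52.3, 600), (50.1, 1000)]

    # A simple average treats every poll alike.
    simple = sum(p for p, _ in polls) / len(polls)

    # Weighting by sample size is about as fancy as it needs to get.
    weighted = sum(p * n for p, n in polls) / sum(n for _, n in polls)

    print(f"simple average:   {simple:.1f}%")
    print(f"weighted average: {weighted:.1f}%")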

As for Silver's take on Tetlock's hedgehogs vs. foxes, I think John Cochrane had a better takeaway:
Milton Friedman was a hedgehog. And he got the big picture of cause and effect right in a way that the foxes around him completely missed. Take just one example, his 1968 American Economic Association presidential speech, in which he said that continued inflation would not bring unemployment down, but would lead to stagflation. He used simple, compelling logic, from one intellectual foundation. He ignored big computer models, statistical correlations, and all the muddle around him. And he was right. 
In political forecasting, anyone’s success in predicting cause and effect is even lower. U.S. foreign policy is littered with cause-and-effect predictions and failures—if we give them money, they’ll love us; if we invade they will welcome us as liberators; if we pay both sides they will work for peace, not keep the war and subsidies going forever. 
But the few who get it right are hedgehogs. Ronald Reagan was a hedgehog, sticking to a few core principles that proved to be right.

4 comments:

John said...

I worked for Otto Eckstein's Data Resources (DRI) in the early 80's selling large econometric modelling services to Fortune 500 enterprises. They all had large Econ departments forecasting everything imaginable ("product line forecasting"). It was a lot of fun with computers and statistics, even if not very valuable.

Anonymous said...

anything that's got a political angle, even if it's science or anything else, is not to be believed.

because there is always an agenda.

InjuredE said...

"anything that's got a political angle, even if it's science or anything else, is not to be believed."

What if objective methods lead to a conclusion that has a political angle? I wouldn't go throwing away science yet. I'd say we find better methods of eliminating political bickering in economics.

Methods as objective as possible -> conclusions that can be related to politics.

Not:

Inherent political bias in the assumptions and methods -> pseudo-scientific conclusions.

As for the book, would you give it an overall thumbs up?

bjdubbs said...

There was some good stuff, especially the poker chapter; Tom Dwan is an interesting guy with unexpected things to say about poker. But I got suspicious with the "butterfly effect" example, which is such a chestnut of pop-sci books (I first read about it in Chaos about 20 years ago) that it has to be wrong (butterfly effect or just margin of error?).