Prediction expert Philip Tetlock and his colleague Dan Gardner have a piece over at Cato Unbound discussing the failure of experts to foresee this year's events:
Even now, only halfway through the year, The World in 2011 bears little resemblance to the world in 2011. Of the political turmoil in the Middle East—the revolutionary movements in Tunisia, Egypt, Libya, Yemen, Bahrain, and Syria—we find no hint in The Economist's forecast. Nor do we find a word about the earthquake/tsunami and consequent disasters in Japan or the spillover effects on the viability of nuclear power around the world. Or the killing of Osama bin Laden and the spillover effects for al Qaeda and Pakistani and Afghan politics. So each of the top three global events of the first half of 2011 were as unforeseen by The Economist as the next great asteroid strike.
Killing Osama, revolution in Tunisia, and the tsunami in Japan are all pretty specific outcomes that went unforeseen. But rather than this year being really crazy, I would say it is much like last year: the US mired in Iraq and Afghanistan, deficits as far as the eye can see, no shovel-ready government spending, inner-city America pathetic, Harvard outdoing Mississippi State academically, etc. Predicting broad trends is eminently doable, but we often just take those for granted. Indeed, later they point out that experts tend to do worse than simple extrapolation models, which just highlights the futility of predicting outliers. So what should one do: predict specifics, or forecast broad trends that necessarily miss them?
When I worked as an economist, I remember that the statistically optimal forecast was generally an exponential curve from where we are now to the long-run historical average. But that's pretty boring, so we would add some little wiggles at the end based on some theory (A causes B, which causes C in 18 months), because that kind of reasoning really resonated with the audience: they wanted to learn some new story to apply to their understanding of the world, something more novel than "the future will look a lot like the past." Similarly, Tetlock appears to want tsunami forecasts, which would surely be fun to have.
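The "boring" optimal forecast described above can be sketched in a few lines: the forecast path simply closes the gap between the current value and the long-run mean by a fixed fraction each period. The decay rate and the example numbers here are illustrative assumptions, not anything from Tetlock or the original forecasts.

```python
# Sketch of a mean-reverting baseline forecast: the series decays
# geometrically from its current value toward the long-run average.
# The decay rate and example values are illustrative assumptions.

def mean_reverting_forecast(current, long_run_mean, decay=0.3, horizon=8):
    """Return a forecast path that converges to the long-run mean."""
    path = []
    gap = current - long_run_mean
    for _ in range(horizon):
        gap *= (1 - decay)          # shrink the remaining gap each period
        path.append(long_run_mean + gap)
    return path

# Example: growth at 0.5% now, assumed long-run average of 3.0%
print(mean_reverting_forecast(0.5, 3.0, decay=0.3, horizon=4))
```

The point of the anecdote is that this smooth curve is usually the best you can do statistically, and the "wiggles" added on top were storytelling, not signal.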
Later, Tetlock notes that statistical models outperform the alternatives but 'have an unfortunate tendency to work well until they don't.' This, supposedly, is a deep thought. What Tetlock seems to lament is the absence of perfect foresight: because the dominant approaches fall short of it, he finds them poor. That isn't a very logical conclusion, because he is making the best the enemy of the good.
Tetlock is fond of grouping experts into hedgehogs and foxes: 'the fox knows many things, but the hedgehog knows one big thing.' In his data, the experts with modest but real predictive insight tended to be the foxes, though not always. When asked in an interview whether he was a hedgehog or a fox, Tetlock replied 'both!', as if everyone else would not also choose that description, which highlights the futility of his schema. Tetlock greatly admires Nassim Taleb, who ironically criticizes pundits for making lots of nonfalsifiable predictions and selectively reporting their records. I think this highlights Nietzsche's warning that 'He who fights with monsters should look to it that he himself does not become a monster.' In other words, people who make a life out of criticizing overconfidence, dogma, hate, and hypocrisy are often filled with overconfidence, dogma, hate, and hypocrisy.
It's good to dislike error and vice, but to thirst too much for these points suggests an intellectual diabetes.