Sunday, October 11, 2009

Radical Uncertainty's Long History

A debate between Bryan Caplan and Peter Boettke on whether Austrian economics is really fruitful is available as a sequence of YouTube videos. Austrian economics is based primarily on the works of Friedrich Hayek and Ludwig von Mises (Carl Menger, Eugen von Böhm-Bawerk, Henry Hazlitt, and Murray Rothbard also loom large). I like a lot of what the Austrians write about, and generally share their biases, but some areas of their focus I find less appealing.

What I found most interesting was that the crux of the discussion seemed to be how to treat uncertainty. Caplan finds the Austrian conception rather unhelpful. True uncertainty, to an Austrian economist, is 'radical' uncertainty, not amenable to mathematical manipulation. If you look at the work of George Shackle, you see him defining uncertainty as that which generates potential surprise, in some mysterious way that seems defined only ex post. I tend to agree with Caplan that this isn't helpful.

Boettke, however, highlights the old saw that it is better to be approximately right than precisely wrong. Yet a precise answer invites correction, whereas answers fuzzy enough to be surely 'approximately right' are so vague it is not clear how to make better forecasts. The bottom line is that the Austrians have been talking about this kind of uncertainty for a couple of generations now, with not much to show for it. In that respect it is identical to Keynesian or Knightian uncertainty: concepts built on an indubitably true idea, that the uncertainty we face is quite different from the objective probabilities generated by a fair roulette wheel, yet ultimately I think one has to put this on some formal footing, say by introducing Bayesian priors.
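To make that concrete, here is a minimal sketch in Python (my own toy example, not anything from the debate) of how a prior over models puts 'we might be wrong' on a formal footing: a fair coin stands in for the tidy roulette-wheel world, a coin of unknown bias stands in for the world where the model itself is suspect, and doubt about the model becomes an ordinary posterior.

import math

# Toy example: doubt about the model itself gets a prior, like any other unknown.
# M1: a fair coin, p = 0.5 (the tidy "roulette wheel" world).
# M2: a coin of unknown bias, p ~ Uniform(0, 1) (the "we might be wrong" world).

def posterior_model_probs(heads, tails, prior_m1=0.9):
    """Posterior probability of each model after observing a sequence of flips."""
    n = heads + tails
    # Likelihood of the observed (ordered) sequence under the fair-coin model.
    like_m1 = 0.5 ** n
    # Marginal likelihood under M2: integral of p^h (1-p)^t dp = Beta(h+1, t+1).
    like_m2 = math.exp(math.lgamma(heads + 1) + math.lgamma(tails + 1)
                       - math.lgamma(n + 2))
    w1 = prior_m1 * like_m1
    w2 = (1.0 - prior_m1) * like_m2
    return w1 / (w1 + w2), w2 / (w1 + w2)

# 60 heads in 80 flips: the evidence shifts nearly all the weight to "the neat
# model is wrong", even though we started 90% confident in it.
print(posterior_model_probs(heads=60, tails=20))

Nothing here captures Shackle's 'potential surprise', of course, but it shows that the standard machinery already has a slot for doubting the model itself.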

This thread has a very long history (Keynes going back to 1921, Knight to 1919; I'm sure one could go back further), and it is a main reason I find Nassim Taleb rather tiresome: he brings up these old arguments in new contexts as if they were a radical break, pregnant with practical application. It is a radical critique, in that it stands outside standard probability models, but it is not new, and so, judging by its history, a rather barren insight by itself. The 'uncertainty' thread remains outside the canon because no one has figured out how to amend standard statistics to incorporate the realistic idea that sometimes we are 'wrong'. It seems reasonable to ask that such criticism be formalized using the very general tools available in standard probability theory.

It is important to remember that a theory that is approximately right (i.e., precisely wrong in some cases) is better than a vague criticism. What is needed is something constructive, and that is something the Austrians, the Post-Keynesians, and Taleb have failed to provide.

2 comments:

michael webster said...

I think you are making some good points here; the vagueness about non-probabilistic uncertainty is hardly to be welcomed.

I have thought, for a while, that it would be a useful exercise in axiomatics to start with a notion of 'x is riskier than y', the analogue of a preference relation, and build up what is necessary for that ordering to be represented by a probability measure. (I am sure that this has already been done.)

Then, those people who wanted some form of uncertainty could simply point to the axiom or axioms they deny hold for their particular risk measure.
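For what it's worth, the axioms I have in mind are roughly de Finetti's conditions on a comparative relation $\succsim$ ('at least as likely, or as risky, as') over events:

(A1) $\succsim$ is complete and transitive (a total preorder on events).
(A2) $A \succsim \emptyset$ for every event $A$, and $\Omega \succ \emptyset$.
(A3) If $A \cap C = B \cap C = \emptyset$, then $A \succsim B \iff A \cup C \succsim B \cup C$.

The representation question is then whether there exists a probability measure $P$ with $A \succsim B \iff P(A) \ge P(B)$. (On finite spaces, Kraft, Pratt and Seidenberg showed these conditions are necessary but not quite sufficient, so there really is something to argue over axiom by axiom.)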

This would move the talk from conversation to debate.

Anonymous said...

Language is a funny thing, and people generally prefer to be precise rather than approximate, but why do you think that being approximately right is so vague as to be useless, whilst being precisely wrong will get you close to the truth?

Approximately right is, by definition, an answer close to being precisely right. Precisely wrong is, by definition, not right, and conveys no idea of whether you are close to being right.

Corrections will happen with both outcomes, but with a precisely wrong answer you may still be far from right, indeed even further than you were originally, whilst with an approximately right one, even if you drift further away, you are never far off. Surely your corrections are more likely to reach the precisely right answer if you start from approximately right than if the original starting point was precisely but unhelpfully wrong.

When trying to find the keys you had when you left home, what would you rather know: that your hunch that they're in your left coat pocket is definitely wrong, or that the missing keys are somewhere in the car?