Wednesday, November 19, 2008

Bias vs. Competence

Two weeks ago Arnold Kling had this rather cynical view of the peer review process in academia:
I put no special value in peer review. You can have a peer who discards a paper for purely idiosyncratic reasons, or because he just does not understand key aspects of the paper. More often, you have a peer who approves a paper because it cites his own work favorably, which makes it immune to criticism in the eyes of the reviewer.

He is describing his frustration with the popularity of certain strains of economic thought that dictate current research but to him seem a dead end. Such things often happen because, as a field gets more advanced, it takes considerable investment just to participate in the debates. But after investing a couple of years in such learning, one is in somewhat of a dilemma, because noting 'this is all hairsplitting nonsense' both alienates you from your colleagues (presumably, many of whom are friends) and trashes a lot of your human capital. Thus, many in a position to make strong criticisms tend to make merely inside-the-paradigm criticisms.

Obviously this is bad, but the problem is that current paradigms don't have neat labels that allow us to objectively distinguish between fads and facts. It is the trade-off between the incompetent many and the corrupt few: experts are biased, but the masses are ignorant. I think moderation is the key to this debate as in any other, so there will be an optimum in the messy middle. That is, I'm glad we don't have referendums from the general populace, university students, or even the American Economic Association (AEA) on basic arguments. Yet I'm also troubled by the way certain threads within economics become citation circles that waste resources and hurt the economics brand.

As much as scientists like to 'think outside the box', they generally think inside a very small box, the more so the more they think they don't. The box is rarely based on policy implications, but rather on the methodological and presentational protocols they consider appropriate, which they like to think of as the essence of logic and rigor rather than bias. All biases are basically held as beliefs about what is true, in the William James sense of those beliefs you think have the highest Net Present Value. A 'bias' is merely such a belief you disagree with, and although on a meta level we all understand people have biases, we think our beliefs are true, otherwise we wouldn't believe them. I don't know how to fix the problem more quickly than Planck's observation about waiting for the old guard to die (OK, I do know how, but I think we have to reject that on first principles).

One path to success in dealing with the highly self-interested gatekeepers is going straight to the public, around the bias, but potentially through the ignorant. With the success of Freakonomics, many economists have taken this approach, though I think its effectiveness is ambiguous. I'm with Ariel Rubinstein when he notes that there is not much economics, as opposed to mere cleverness, in such tomes. The popular economist John Kenneth Galbraith noted he never had a journal submission rejected, but this was because he wrote only books and invited articles, meaning this immensely popular author did not have to mind the academic peer review process. In contrast, I remember TA-ing for the then-President of the AEA, Robert Eisner, whose submission to the main AEA journal (the AER) was rejected while he was President--kudos to the profession's integrity! As popular as Galbraith was in his lifetime through his writings and omnipresence as a public intellectual, I don't think his ideas have held up very well: check out Industrial Organization journals or graduate textbooks, and you won't find his name very often, if at all. For example, Galbraith argued a lot that large firms have a lot of market power over competitors and consumers, but the small size effect suggests being large is not a net advantage, but rather a disadvantage (e.g., why couldn't GM crush Toyota in the crib?).

There is some research, however, that first gained its footing outside the circle of economics experts and only later gained solid influence within economics. Hernando de Soto's early, non-peer-reviewed work on property rights in South America (The Other Path, published in 1989) anticipates later serious work on the importance of institutions and rules, and helped provide valuable empirical support for overthrowing the older focus on monetary and fiscal policy as the most important issue in development. One might also note that behavioral economics gained overwhelming momentum outside economics before becoming a mainstay within it. That's not really the same as 'not peer reviewed', but it's not 'economics peer reviewed'. Kahneman, Slovic, and Tversky's Judgment Under Uncertainty: Heuristics and Biases, published in 1982, was first introduced to me by a psychology grad student in 1988 who said it was all the rage in her field.

In sum, it's a mixed bag; there is no obviously better way than our current system of using specialist gatekeepers.

3 comments:

Anonymous said...

Eric, I suggest a couple of distinctions to help sort out the "mixed bag" of peer review problems and solutions you mention:

1. Distinguish between experimental results that can be repeated and those that can't be repeated under the same conditions. I think this would roughly correspond to a physical vs. social science distinction. In particular, one could make a "hairsplitting nonsense" criticism about a quantitative method used in financial time series data. But a peer questioning whether de Soto's own pamphlets affected the economic outcomes in Peru should be viewed through a different lens, since we can't re-create the initial conditions in Peru pre-de Soto.

2. Take your observation about psychologists being interested in behavioral economics to the next level. I personally find reading *outside* of finance to be more valuable in building my human and career capital. Perhaps having peer review come from at least a little bit outside of one's field would be helpful. A psychologist could review a behavioral economics paper, or a mathematician review a finance paper. The problem, however, goes back to your comment about wasting your human capital and time. How would a psychologist advance her career by reviewing economics papers?

The Rioja Kid said...

[Galbraith argued a lot that large firms have a lot of market power over competitors and consumers, but the small size effect suggests being large is not a net advantage, but rather a disadvantage]

I'm not seeing this at all; surely a lower expected rate of return on large-cap stocks (which aren't the same thing as large firms) would imply a lower cost of equity capital, which is an advantage to the firm?

Eric Falkenstein said...

bruschettaboy: clearly the empirical issue is quite complicated, but in any case there is zero evidence in favor of his hypothesis. A firm with market power should generate abnormal profits, which are not necessarily abnormal returns, so one could argue that returns are not relevant. Interestingly, I have never read that the historically lower returns of large-cap stocks were due to anticipated market power and thus lower risk, though it is a plausible explanation. But either way, conditional profitability (EBIT/assets) is not a function of size.
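
As a rough illustration of the kind of cross-sectional check that last sentence refers to, here is a minimal sketch in Python. The data are simulated purely for illustration (not the actual series behind the claim); the point is just that a slope near zero, relative to its standard error, in a regression of EBIT/assets on log firm size is what "not a function of size" looks like in practice.

```python
# Minimal sketch: does profitability (EBIT/assets) vary with firm size?
# All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical cross-section of firms: log market cap and EBIT/assets,
# constructed so profitability is unrelated to size.
log_size = rng.normal(loc=7.0, scale=1.5, size=n)
ebit_over_assets = 0.10 + rng.normal(scale=0.05, size=n)

# OLS of profitability on log size.
X = np.column_stack([np.ones(n), log_size])
beta, *_ = np.linalg.lstsq(X, ebit_over_assets, rcond=None)

# Standard error of the slope coefficient.
resid = ebit_over_assets - X @ beta
sigma2 = resid @ resid / (n - 2)
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])

print(f"slope of EBIT/assets on log size: {beta[1]:.4f} (s.e. {se_slope:.4f})")
```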