Monday, December 07, 2009

Peer Review All About Politics

In the Global Warming debate, 'peer review' is invoked as the essence of objective science. Yet Shannon Love makes an interesting point: peer review is basically a mechanism to avoid a certain kind of politics, not to evaluate a paper's value.
It is not a journal's responsibility to confirm or refute experimental conclusions, but it is its responsibility to check for basic errors in math or methodology, just as it would check for errors in grammar or spelling. Peer review offloads any responsibility for publishing bad papers onto anonymous members of the scientific community. It's a perfect form of blame passing that everyone else wishes they could use.

This blame passing also keeps journals and editors from being accused of taking sides in personal and professional quarrels. It is also the reason that reviewers themselves prefer to remain anonymous. No scientist wants to suffer the professional and personal consequences of rejecting a paper they should have accepted, or accepting one they should have rejected. It is also why peer review is a superficial review. The reviewers do not wish to be dragged into the minutiae of scientific debates and quarrels. Instead, they concentrate on the basics that everyone can agree on.

So, to avoid the politics of choosing which papers are worthy, anonymous peer review allows an editor to reject papers without having everyone hate him. Peer reviewers correct obvious errors and make recommendations about the usefulness of a paper. The former task is objective and straightforward, but note it does not involve checking the raw data for fraud or replicating an algorithm. I've refereed many papers, and I never independently tried to replicate their results with their algorithm and data. If they faked their data subtly, only posterity would punish them, not a referee.

But a referee also crucially opines on a paper's usefulness, and this involves guessing what other people would like to reference. Most models do not have straightforward empirical implications, so this is often an assessment of which toolkit is considered cutting edge. Economics often builds huge Rube Goldberg machines that are potentially useful; they are never refuted, but rather fade away as the professors who made their reputations on these models retire and the new generation sees that they are quite useless.

Input-output models, large-scale macroeconomic models, second-order difference equations modeling GDP: these were all considered the apex of 'good form', and so any results in these frameworks, if sufficiently rigorous, were published. If you submitted a paper using these frameworks today, you would be rejected out of hand because they are no longer considered useful. But that verdict came through long experience, not through any definitive refutation. Even today, some results based on dynamic programming and vector autoregressions are published merely for getting a result, not an interesting one, because the technique is difficult, rigorous, and supposedly takes economics a leap equivalent to the leap from astrology to astronomy. Who says economists don't work on faith?
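
To make concrete what a 'second-order difference equation modeling GDP' looks like, here is a minimal sketch in R of the textbook multiplier-accelerator model; the parameter values are illustrative assumptions, not estimates from any paper:

# Samuelson's multiplier-accelerator: Y[t] = c*(1+v)*Y[t-1] - c*v*Y[t-2] + G
c_mpc <- 0.8    # marginal propensity to consume (assumed)
v_acc <- 0.9    # accelerator coefficient (assumed)
G     <- 100    # constant government spending (assumed)
n     <- 60
Y     <- numeric(n)
Y[1:2] <- c(500, 510)   # arbitrary starting values near the steady state G/(1 - c)

for (t in 3:n) {
  Y[t] <- c_mpc * (1 + v_acc) * Y[t-1] - c_mpc * v_acc * Y[t-2] + G
}

plot(Y, type = "l", xlab = "period", ylab = "GDP")   # damped oscillations around 500

With these parameters the model generates damped business cycles around a steady state of 500: rigorous, publishable in its day, and never formally refuted, just abandoned.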

This is an example of how something can be peer reviewed, and not even wrong.

9 comments:

J said...

...never independently tried to replicate their results with their algorithm and data...

Yes, I also found that it would normally require an enormous effort to replicate anything except the simplest proposition, and no one is paid to check another team's work.

So anything that looks reasonable passes. Very bad.

Anonymous said...

Authors could, of course, make replication easy -- "here are my data, and the R script I ran on them" -- but that would take all the mystery and romance out of it...
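
A minimal, self-contained hand-off (the file and variable names here are purely hypothetical) needs only a few lines of R:

# replicate.R -- rerun the paper's main regression from the raw data
# (file and column names are hypothetical placeholders)
dat <- read.csv("returns.csv")                      # data shipped with the paper
fit <- lm(excess_return ~ market_beta, data = dat)  # the headline regression
print(summary(fit))                                 # should match the reported table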

Eric Falkenstein said...

Not everyone can run R. Some only know SAS, GAUSS, Matlab, C++, Fortran, EViews, RATS, etc.

Anonymous said...

For 100% point/click goodness, wrap up the script in a batch-file that runs R from the command line ;)
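
For Unix-y readers, a rough sketch of the same idea without a .bat file is to make the script itself the entry point (file names hypothetical, building on the replicate.R above):

#!/usr/bin/env Rscript
# run_all.R -- hypothetical one-click replication entry point;
# the Windows analogue is a one-line .bat calling: Rscript run_all.R
source("replicate.R")   # reruns the analysis script from the comment above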

Eric Falkenstein said...

But then, they are trusting your code. Perhaps your code fudges the output to look good in some way?

Anonymous said...

In finance and economics research, a lot of the value added comes from the programming work. Everybody has the same samples. So, it's very rare for researchers to just share their code.

Anonymous said...

"Value add" - ?

I thought academics cared only about bettering mankind by advancing human knowledge...

Jim Glass said...

Humor: Hitler tries to get a paper through peer review, YouTube.

(Not entirely suitable for the workplace if the volume is up.)

Anonymous said...

"bettering mankind by advancing human knowledge" is adding value, right?