Back, finally, to a topic left over from the climate inquisition a few weeks back. One of our lefty commenters thought it important to point out that I don’t publish “peer-reviewed” articles about climate issues in the academic literature, which is true. It’s something I have in common with Al Gore. (Heh.) Besides, I prefer to write in plain English for human beings rather than for the 10 people who read most academic journals.
Actually, it is not quite true: I have published a couple of peer-reviewed articles about science and policy a long time ago that I had completely forgotten about, one of them in an otherwise mostly left-wing academic journal, Social Research. (I also once got an article in Mother Jones and wondered whether I should cash the check or frame it when it came in the mail. But I didn’t wonder long: I cashed it before they could figure out their blunder and send a stop payment order to the bank.)
More to the point, I am currently a regular peer reviewer for one of the leading academic journals in the energy and climate policy domain, though I shouldn’t say which one since peer reviews are supposed to be anonymous. And finally, about ten years back I was an invited peer reviewer for the EPA (!!) on one of their larger data analysis projects, though it had nothing to do with climate change. So it turns out that I have infinitely more peer review experience than Al Gore (“infinitely” since his peer review record is precisely Zero).
It is probably generous to paraphrase Churchill’s quip about democracy and suggest that peer review is the worst form of academic quality control except for all the others that have ever been tried. First of all, how does “peer review” actually work in practice? Journal editors send out queries to academics asking if they’d agree to referee a submitted article. Even editors at journals with large editorial boards may not be intimately familiar with everyone working in a specialized subfield of science, so how do they know whom to query? Often the authors of the submitted articles suggest people—their friends and allies—as peer reviewers. (That’s how I got my first peer review assignment from a journal, and now the editors seem to like my reviews sufficiently that they are asking me to referee papers from people I’ve never heard of; right now it is two Chinese authors.) Even though their peer reviews will come in anonymously, helping to select at least some of your peer reviewers will increase the chances of an at least partially favorable review panel. And even if the articles are sent to you with the author or authors’ names stripped out, you can usually guess who the authors are easily enough from a careful reading of the bibliography.
Second, keep in mind that peer reviewers are not paid for their reviews. Reviews are done strictly on a volunteer basis, though it can look good on an academic CV to say you’ve been a peer reviewer for the Journal of Irreproducible Results or some such. Maybe some referees re-run the regressions in a quantitative article, but do you think any referees actually check the quality or accuracy of the raw data in article submissions? Most of the time an editor merely wants a referee on hand to verify that the author is familiar with the main literature on the topic, and that the article at least purports to make an original contribution. Hence a lot of peer reviews are really bibliography reviews (“the author hasn’t included Jones on this point”), which is why the bibliographies of many articles are longer than the articles themselves, even though few articles ever engage in any sustained discussion of the existing literature. (Those rare articles that actually engage the existing literature in a serious way are called something else. They are called “books.”)
Third, a typical peer review process asks referees for one of only three verdicts: Accept, Accept with Changes, or Reject. In all three cases, the referee is supposed to attach a short explanation (in the case of “Accept” or “Reject”) or suggestions for revision (in the case of “Accept with Changes”). I don’t know whether any statistics are kept on how peer reviews come in on the first pass, but nearly every scientific article I read notes that it was revised before final acceptance. I am guessing very few articles get recommended for acceptance without revision; what peer reviewer isn’t going to split hairs about something? (In my case, I typically flag things for clarification or additional information, but this is chiefly because so many academics are such bad writers, and journal editors are hoping the referees will do some of the heavy lifting to fix the problems of clarity in many submissions. If that’s their hope, the process is failing badly.)
These problems don’t even get to the deeper problem of the extent to which peer review is an insider’s racket. One of the more damaging revelations of the “Climategate” email scandal in 2009 was the admission of Phil Jones, head of the Climatic Research Unit at the University of East Anglia, that he would seek to keep contrarian or skeptic climate literature out of the IPCC process “even if we have to redefine what the peer-review literature is!” Nothing says confidence in science like manipulating the article inclusion process.
Then there’s this, from the Washington Post a few days ago:
A major publisher of scholarly medical and science articles has retracted 43 papers because of “fabricated” peer reviews amid signs of a broader fake peer review racket affecting many more publications.
The publisher is BioMed Central, based in the United Kingdom, which puts out 277 peer-reviewed journals. A partial list of the retracted articles suggests most of them were written by scholars at universities in China, including China Medical University, Sichuan University, Shandong University and Jiaotong University Medical School. But Jigisha Patel, associate editorial director for research integrity at BioMed Central, said it’s not “a China problem. We get a lot of robust research out of China. We see this as a broader problem of how scientists are judged.”
Meanwhile, the Committee on Publication Ethics, a multidisciplinary group that includes more than 9,000 journal editors, issued a statement suggesting a much broader potential problem. The committee, it said, “has become aware of systematic, inappropriate attempts to manipulate the peer review processes of several journals across different publishers.” Those journals are now reviewing manuscripts to determine how many may need to be retracted, it said.
This is not an isolated incident. There’s a whole website, RetractionWatch.com, that follows the increasing number of retractions of bad articles.
But perhaps the most revealing retraction of recent years was The Lancet’s acknowledgment, finally, in 2010, that the 1998 Andrew Wakefield article linking the MMR vaccine to autism was phony. This article was the principal justification for anti-vaxxers for several years. Following a long investigation, the British Medical Journal called Wakefield’s findings “an elaborate fraud,” and a British court found that “there is now no respectable body of opinion which supports [Dr. Wakefield’s] hypothesis, that MMR vaccine and autism/enterocolitis are causally linked.”
Just how did this article pass peer review? I think the editors of The Lancet, one of the world’s premier medical journals, ought to explain that in detail.
But notice something else about the Wakefield article pictured nearby: it lists 12 co-authors along with Wakefield. This seems to be the typical mode with scientific articles: casts of thousands sign on as “co-authors” even if they did little work on the actual research. You almost never see a long list of authors on social science articles (three or four seems to be the outer limit), even when a professor may use a large team of graduate students to conduct field research. In scientific publishing it seems to be a way of casting science as a majoritarian enterprise—a means of conferring false authority and certainty. (And it’s an easy way of listing another publication on your CV.) I’ve come to adopt a rule of inverse judgment: the more authors listed on a science article, the more skeptical I am. How many co-authors did Einstein have for his breakthrough paper on general relativity?
Meanwhile, as Washington Post reporter Jim Tankersley pointed out last week, the most devastating critique of the Thomas Piketty hypothesis about income inequality, currently setting the entire controversy on its head within the highest reaches of academic economists, has come from an MIT graduate student who published his 459-word critique . . . on a blog. Heh.