Behind Science Fraud, Chapter 10

Time to update our series on science fraud from a few months ago, with news of a blockbuster research review effort that is making waves this week. The Chronicle of Higher Education reports today:

A decade ago, John P.A. Ioannidis published a provocative and much-discussed paper arguing that most published research findings are false. It’s starting to look like he was right.

The results of the Reproducibility Project are in, and the news is not good. The goal of the project was to attempt to replicate findings in 100 studies from three leading psychology journals published in the year 2008. The very ambitious endeavor, led by Brian Nosek, a professor of psychology at the University of Virginia and executive director of the Center for Open Science, brought together more than 270 researchers who tried to follow the same methods as the original researchers — in essence, double-checking their work by painstakingly re-creating it.

Turns out, only 39 percent of the studies withstood that scrutiny.

Here’s the complete report in Science magazine (out from behind Science’s usual paywall). It should be pointed out that not all of the failed replications stem from dishonesty or outright fraud on the part of the original researchers. But the careful Science magazine write-up makes clear that there are significant problems with the social science publishing process:

No single indicator sufficiently describes replication success, and the five indicators examined here are not the only ways to evaluate reproducibility. Nonetheless, collectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes. . .

Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that “we already know this” belies the uncertainty of scientific evidence. Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both. (Emphasis added.)
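The quoted point about statistical power is worth making concrete. Here is a toy simulation (the numbers — an effect size of 0.4 standard deviations, 30 subjects per group — are illustrative assumptions, not figures from the study): when a test's power is modest, an independent replication of a perfectly real effect succeeds only about as often as that power allows, which can easily fall well below 50 percent even with no fraud anywhere in sight.

```python
import random
import math

random.seed(0)

def one_study(effect, n):
    """Simulate a two-group comparison with per-group sample size n and
    unit-variance groups; return True if it reaches p < .05 (two-sided)."""
    se = math.sqrt(2.0 / n)                 # standard error of the mean difference
    diff = random.gauss(effect, se)         # observed mean difference
    return abs(diff / se) > 1.96            # "statistically significant"

def replication_rate(effect, n, trials=20000):
    """Of the simulated studies that 'publish' (reach significance),
    what fraction are confirmed by an identical independent replication?"""
    published = replicated = 0
    for _ in range(trials):
        if one_study(effect, n):            # original finding is significant
            published += 1
            if one_study(effect, n):        # independent replication attempt
                replicated += 1
    return replicated / published

# With this modest (but entirely real) effect and small samples, power is
# well under 50%, so most honest replication attempts come up "failed".
print(round(replication_rate(effect=0.4, n=30), 2))
```

None of this excuses fraud, of course; it just shows why low replication rates are overdetermined — underpowered designs alone can produce them.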

But this ought to raise deeper questions about the probity of social science methodology itself, questions that are seldom debated anymore. Perhaps, just as DNA evidence is now expected for death penalty murder convictions, we’re coming to the point where replication by an outside party should be required before new findings can be published? (This might put a lot of climate science articles out of business.)

And as I said back in June, “So why don’t researchers skip all this trouble and just publish in the Journal of Irreproducible Results in the first place?”