Behind Science Fraud, Chapter 3

Our first installment in this series took note of the New York Times op-ed by Adam Marcus, managing editor of Gastroenterology & Endoscopy News, and Ivan Oransky, global editorial director of MedPage Today (the two co-founded retractionwatch.com). Now they are back with a longer piece at Nautilus that goes into more detail about science fraud and offers more shocking examples, such as the Japanese anesthesiologist Yoshitaka Fujii, who fabricated data in a whopping 183 published papers.

After reviewing the painstaking detective work that exposed several serial fraudsters (it’s great reading if you have the time), Marcus and Oransky get down to business:

But this [careful statistical review of the raw data] is an approach that requires journal editors to be on board—and many of them are not. Some find reasons not to fix the literature. Authors, for their part, have taken to claiming that they are victims of “witch hunts.” It often takes a chorus of critiques on sites such as PubPeer.com, which allows anonymous comments on published papers, followed by press coverage, to generate any movement.

In 2009, for example, Bruce Ames—made famous by the tests for cancer-causing agents that bear his name—and his colleagues performed an analysis similar to Carlisle’s. The target was a group of three papers authored by a team led by Palaninathan Varalakshmi. In marked contrast to what later resulted from Carlisle’s work, the three researchers fought back, calling Ames’ approach “unfair” and a conflation of causation and correlation. Varalakshmi’s editors sided with him. To this day, not a single one of the journals in which the accused researchers published their work has done anything about the papers in question.

Sadly, this is the typical conclusion to a scholarly fraud investigation. The difficulty in pursuing fraudsters is partly the result of the process of scholarly publishing itself. It “has always been reliant on people rather than systems; the peer review process has its pros and cons but the ability to detect fraud isn’t really one of its strengths,” says Steven Yentis, the editor of Anaesthesia who published Carlisle’s analysis.

Publishing is built on trust, and peer reviewers are often too rushed to look at original data even when it is made available. Nature, for example, asks authors “to justify the appropriateness of statistical tests and in particular to state whether the data meet the assumption of the tests,” according to executive editor Veronique Kiermer. Editors, she notes, “take this statement into account when evaluating the paper but do not systematically examine the distribution of all underlying datasets.” Similarly, peer reviewers are not required to examine dataset statistics.

When Nature went through the painful retraction of the STAP stem cell papers last year, a scandal that led to the suicide of one of the key researchers, it maintained that “we and the referees could not have detected the problems that fatally undermined the papers.” The journal argued that it took post-publication peer review and an institutional investigation to uncover them. And pushing too hard can create real problems, Nature wrote in another editorial. [Emphasis added.]
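The excerpt leans on “an analysis similar to Carlisle’s” without showing what such an analysis looks like, so here is a minimal sketch of the underlying idea, in Python with simulated data. This is an illustration of Carlisle-style screening, not his actual procedure or code: the trial data, the choice of a t-test, and the helper names are all assumptions made for the example.

```python
# A rough illustration of the idea behind Carlisle-style screening, not
# his actual code or data. In an honestly randomized trial, baseline
# variables differ between arms only by chance, so p-values from
# baseline comparisons pooled across many trials should be roughly
# uniform on [0, 1]. Fabricated data whose arms match "too well" piles
# those p-values up near 1, which a simple uniformity test can flag.
import numpy as np
from scipy import stats

def baseline_pvalues(trials):
    """t-test one baseline variable (say, mean age) between the two
    arms of each trial; `trials` is a list of (arm_a, arm_b) arrays."""
    return [stats.ttest_ind(a, b).pvalue for a, b in trials]

def uniformity_pvalue(pvals):
    """Kolmogorov-Smirnov test of the collected p-values against the
    uniform distribution expected under honest randomization."""
    return stats.kstest(pvals, "uniform").pvalue

rng = np.random.default_rng(0)

# 100 honestly randomized "trials": the screen finds nothing unusual.
honest = [(rng.normal(50, 10, 40), rng.normal(50, 10, 40))
          for _ in range(100)]
print(uniformity_pvalue(baseline_pvalues(honest)))    # typically > 0.05

# 100 fabricated "trials" whose arms are near-copies of one another:
# baseline p-values bunch near 1 and the uniformity test collapses.
too_good = []
for _ in range(100):
    arm_a = rng.normal(50, 10, 40)
    too_good.append((arm_a, arm_a + rng.normal(0, 0.5, 40)))
print(uniformity_pvalue(baseline_pvalues(too_good)))  # effectively 0
```

Note that a screen like this only flags distributions that look too good to be true; as the excerpt above makes clear, the harder problem is persuading editors to act once something is flagged.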

Remind me again: why, exactly, are we supposed to trust the peer review process for journal articles?
