Academic Absurdity of the Week: Fake Peer Reviews

I’m guessing that the number of Power Line readers who take in the academic series Lecture Notes in Computer Science doesn’t approach zero, but is zero, so we might put it in scientific notation this way: PLR(N=0)~0. In which case you would have missed this gem:

Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews

Alberto Bartoli, Andrea De Lorenzo, Eric Medvet, Fabiano Tarlao

Abstract

Peer review is widely viewed as an essential step for ensuring the scientific quality of a work and is a cornerstone of scholarly publishing. On the other hand, the actors involved in the publishing process are often driven by incentives which may, and increasingly do, undermine the quality of published work, especially in the presence of unethical conduct. In this work we investigate the feasibility of a tool capable of automatically generating fake reviews for a given scientific paper. While a tool of this kind cannot possibly deceive any rigorous editorial procedure, it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly fraud.

A key feature of our tool is that it is built upon a small knowledge base, which is very important in our context due to the difficulty of finding large amounts of scientific reviews. We experimentally assessed our method with 16 human subjects. We presented these subjects with a mix of genuine and machine-generated reviews and measured the ability of our proposal to actually deceive the subjects’ judgment. The results highlight the ability of our method to produce reviews that often look credible and may subvert the decision.

Inside Higher Ed provides more detail:

Using automatic text generation software, computer scientists at Italy’s University of Trieste created a series of fake peer reviews of genuine journal papers and asked academics of different levels of seniority to say whether they agreed with their recommendations to accept for publication or not.

In a quarter of cases, academics said they agreed with the fake review’s conclusions, even though they were entirely made up of computer-generated gobbledygook — or, rather, sentences picked at random from a selection of peer reviews taken from subjects as diverse as brain science, ecology and ornithology.

“Sentences like ‘it would be good if you can also talk about the importance of establishing some good shared benchmarks’ or ‘it would be useful to identify key assumptions in the modeling’ are probably well suited to almost any review,” explained Eric Medvet, an assistant professor in Trieste’s department of engineering and architecture, who conducted the experiment with colleagues at his university’s Machine Learning Lab. . .

Mixing the fake reviews with real reviews was also likely to distort decisions made by academics by making weak papers appear far stronger thanks to a series of glowing reviews, the paper found.

The research team was able to influence the peer review process in one in four cases by throwing fake reviews into the mix, it said.
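The trick described above is simpler than you might expect. As a rough illustration only — not the Trieste team’s actual tool — the core move of stitching a “review” together from sentences sampled out of a corpus of real reviews fits in a few lines of Python. The corpus file reviews.txt and the sentence count below are hypothetical stand-ins.

```python
import random

# Minimal sketch of the sentence-sampling idea described above.
# "reviews.txt" is a hypothetical corpus: one sentence from a real
# peer review per line, drawn from assorted fields.
with open("reviews.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

def fake_review(n_sentences: int = 5) -> str:
    """Assemble a 'review' from randomly chosen corpus sentences."""
    return " ".join(random.sample(sentences, n_sentences))

print(fake_review())
```

Generic boilerplate like “it would be useful to identify key assumptions in the modeling” reads plausibly no matter which paper it lands on, which is exactly why random sampling gets as far as it does.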

Sounds like peer review could use some . . . peer review.
