Did any readers take note of the recent stories in the news media claiming that eating chocolate is actually good for weight loss? The June issue of Shape magazine, for example, ran an article entitled “Why You Must Eat Chocolate Daily.”
Yesterday on the science website io9.com, journalist John Bohannon (posing as German molecular biologist “Johannes Bohannon”) explained how he pulled it off with a statistically weak study that several science journals accepted with minimal or perfunctory review:
“Slim by Chocolate!” the headlines blared. A team of German researchers had found that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day. It made the front page of Bild, Europe’s largest daily newspaper, just beneath their update about the Germanwings crash. From there, it ricocheted around the internet and beyond, making news in more than 20 countries and half a dozen languages. It was discussed on television news shows. . .
The Bild story quotes the study’s lead author, Johannes Bohannon, Ph.D., research director of the Institute of Diet and Health: “The best part is you can buy chocolate everywhere.”
I am Johannes Bohannon, Ph.D. Well, actually my name is John, and I’m a journalist. I do have a Ph.D., but it’s in the molecular biology of bacteria, not humans. The Institute of Diet and Health? That’s nothing more than a website.
Other than those fibs, the study was 100 percent authentic. My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.
Bohannon goes on to explain how he pulled it off, how several journals accepted the study for publication with little or no review at all, and why he expected journalists to spot a weak or phony story a mile away, which of course they didn’t. Clearly he underestimated the credulity of reporters. He did reach the realization that “The key is to exploit journalists’ incredible laziness.”
His explanation of how basic statistical methodology is frequently abused is useful:
Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. . . Whenever you hear that phrase, it means that some result has a small p value. The letter p seems to have totemic power, but it’s just a way to gauge the signal-to-noise ratio in the data. The conventional cutoff for being “significant” is 0.05, which means that there is just a 5 percent chance that your result is a random fluctuation. . . The more lottery tickets you buy, the better your chances of getting a false positive. So how many tickets do you need to buy?
P(winning) = 1 – (1 – p)^n
With our 18 measurements, we had a 60% chance of getting some “significant” result with p < 0.05. (The measurements weren’t independent, so it could be even higher.) The game was stacked in our favor.
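Bohannon’s arithmetic is easy to check. A minimal sketch of his lottery-ticket formula, using the article’s own numbers (18 measurements, a 0.05 significance cutoff):

```python
# Probability of at least one false positive across n independent tests,
# each with significance threshold p: P(winning) = 1 - (1 - p)^n.
def p_any_significant(p, n):
    return 1 - (1 - p) ** n

# The article's numbers: 18 measurements at the conventional 0.05 cutoff.
print(p_any_significant(0.05, 18))  # roughly 0.60, i.e. a 60% chance
```

This matches the 60 percent figure quoted above; and since the 18 measurements were not truly independent, the real odds were, as he notes, likely even better.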
It’s called p-hacking—fiddling with your experimental design and data to push p under 0.05—and it’s a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it “works”. Or they drop “outlier” data points.
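The “repeat the experiment until it works” failure mode is easy to demonstrate by simulation. Below is a minimal sketch (hypothetical function names, standard library only): two groups are drawn from the same distribution, so there is no real effect at all, yet simply rerunning the experiment eventually produces p < 0.05. The p value here comes from a simple permutation test on the difference of means.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def fake_experiment(n=15):
    """Two groups drawn from the SAME distribution: there is no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return a, b

def permutation_p(a, b, iters=2000):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(iters):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / iters

# "Repeat the experiment until it works": even with zero true effect,
# persistence alone eventually yields a "significant" result.
attempts = 0
while True:
    attempts += 1
    a, b = fake_experiment()
    if permutation_p(a, b) < 0.05:
        break
print(f"'significant' result (p < 0.05) after {attempts} attempts")
```

With a 0.05 cutoff, each rerun has roughly a one-in-twenty chance of “working,” so a determined (or self-deceived) researcher rarely has to wait long.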
Gee—if only there was some kind of quality control mechanism for science publishing and journalism, like qualified peer reviewers and knowledgeable editors. Oh, wait. . .
This seems like a good time to re-post my three-minute interview with science journalist Ron Bailey from three years ago about science fraud:
Oh what the heck: