How a Data Detective Exposed Suspicious Medical Trials

By CrossFit, September 16, 2019

This 2019 Nature piece documents the work of English anesthetist John Carlisle, who has developed and used statistical methods to identify published research papers with questionable results.

In 2012, Carlisle and a colleague found that the data from Japanese anesthesiology researcher Yoshitaka Fujii (Toho University) were “too clean to be true.” Subsequent investigations uncovered widespread data fabrication; Fujii was fired, and 183 of his papers were retracted.

Carlisle has since found similar issues both within and beyond anesthesiology research. Notably, the high-profile retraction of PREDIMED, a study that drove increased interest in the Mediterranean diet in 2013, was due in part to Carlisle’s questioning of its randomization scheme. In 2016, Carlisle and a colleague found similar errors in the work of Mario Schietroma (University of L’Aquila, Italy), with data seemingly replicated across multiple trials. Some of these trials had been the basis for a 2016 WHO recommendation that anesthetists give patients higher levels of oxygen during and after surgery. The papers were subsequently retracted, and the WHO reduced the strength of its recommendation from “strong” to “conditional.”

Carlisle’s method examines the baseline measurements of the treatment and control groups in controlled trials. If the groups are too different, it suggests randomization failed. If they are too similar, or the data do not exhibit the random variation expected in natural systems, it suggests the data may have been fabricated. Similar methods were popularized by Frank Benford in 1938, though their use dates back to the late 1800s. Notably, Carlisle does not accuse researchers of fraud when he finds errors; he merely notes the statistical inconsistencies. However, others have used his analyses to make claims of fraudulent activity on the part of the researchers in question.
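
To make the logic concrete, the Python sketch below illustrates the general idea under simplifying assumptions; it is not Carlisle’s published procedure, and the function names, the number of baseline variables, and the 0.01 threshold are illustrative. Each baseline variable is compared across the two arms, the resulting p-values are combined, and a trial is flagged when its groups look either implausibly different (suggesting failed randomization) or implausibly similar (“too clean to be true”).

```python
import numpy as np
from scipy import stats


def baseline_balance_pvalues(treatment_vars, control_vars):
    """One two-sample t-test per baseline variable.

    Under genuine randomization, these p-values should be roughly
    uniform on (0, 1).
    """
    return np.array([
        stats.ttest_ind(t, c, equal_var=False).pvalue
        for t, c in zip(treatment_vars, control_vars)
    ])


def flag_trial(pvalues, alpha=0.01):
    """Combine per-variable p-values (Stouffer's method) and flag trials
    whose baseline groups are implausibly different or implausibly similar.
    A simplified illustration, not Carlisle's exact procedure.
    """
    _, combined = stats.combine_pvalues(pvalues, method="stouffer")
    too_different = combined < alpha        # randomization may have failed
    too_similar = combined > 1 - alpha      # "too clean to be true"
    return combined, too_different, too_similar


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical trial: five baseline variables, 100 patients per arm.
    treatment = [rng.normal(50, 10, 100) for _ in range(5)]

    # A control arm copied from the treatment arm with tiny noise mimics
    # fabricated data: the groups end up suspiciously alike.
    control = [t + rng.normal(0, 0.1, 100) for t in treatment]

    combined, too_different, too_similar = flag_trial(
        baseline_balance_pvalues(treatment, control)
    )
    print(f"combined p = {combined:.4f}, "
          f"too different: {too_different}, too similar: {too_similar}")
```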

As noted in the article, Carlisle’s methods have faced criticism from those concerned that his statistical analyses cast “unjustified suspicion” on papers that are neither fraudulent nor erroneous. The larger challenge, however, amid the current crisis of journal credibility fueled by mass error, irreproducibility, and retractions, may be scaling the manual efforts of Carlisle and the few others like him to the millions of papers published each year so that widespread fraud and data errors can be caught.


Comments on How a Data Detective Exposed Suspicious Medical Trials

2 Comments

Zdb Crossfit
September 17th, 2019 at 2:38 am

Can the man please turn his focus to pharmaceutical companies, in particular vaccine studies?

Toni Anne Washington
September 17th, 2019 at 1:54 am

It used to amaze me how these studies got through medical boards... now I’m thinking, we don’t learn statistics in medical school, in case anyone is wondering...
