Why Do Scientists Cheat?

By Anonymous, October 22, 2019

Science, as everybody knows, is immersed in a crisis of reproducibility. The crisis came to prominence in 2005, when John Ioannidis of Stanford University published his now-famous paper, “Why most published research findings are false,” in which he showed how the manipulation of statistical conventions ensures that most published research findings are indeed false.
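Ioannidis’s claim rests on simple arithmetic: when few of the hypotheses a field tests are actually true, a p < 0.05 threshold guarantees that false positives rival or outnumber true ones. Here is a minimal sketch of that arithmetic; the parameter values are illustrative assumptions, not figures from the paper.

```python
# Positive predictive value (PPV) of a "significant" finding, in the
# spirit of Ioannidis (2005). All parameter values are illustrative.

def ppv(prior, power=0.8, alpha=0.05, bias=0.0):
    """Fraction of claimed positives that are true, given the prior
    probability that a tested hypothesis is correct, study power, the
    significance threshold, and a 'bias' term for flexible analysis."""
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# In a field where only 1 in 10 tested hypotheses is true, even unbiased
# studies at p < 0.05 produce many false findings, and a modest amount
# of analytic bias makes most findings false:
print(ppv(prior=0.1))            # ~0.64: a third of findings are false
print(ppv(prior=0.1, bias=0.2))  # ~0.28: most findings are false
```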

This unfortunate finding was confirmed by Brian Nosek and his colleagues at the University of Virginia in 2015. On trying to replicate 100 psychological experiments that had been published in good journals, they found that only 39 of the 100 replications were judged successful (Figure 1).

[Figure 1: replication results from Nosek et al. (2015)]

These findings are particularly worrying in nutrition research, where the replication rate would likely be even lower than 39 percent. Indeed, the successful overthrow by journalist Gary Taubes of the long-dominant hypothesis that dietary fat causes heart disease, strokes, and obesity showed how an entire discipline could be captured by papers with a near-zero replication rate.

How can science go so badly wrong? One answer is that there will always be bad apples in the barrel. Daniele Fanelli, who like Ioannidis is now at Stanford, showed in 2009 that about 2% of scientists will admit, in private, to having falsified data, while about a third will admit to “questionable research practices” such as omitting inconvenient data from their papers. Nor are the bad apples hidden from view: Fanelli also found that 14% of scientists knew of colleagues who had falsified data, while about 70% knew of colleagues who had engaged in “questionable research practices.”

But if entire disciplines such as nutrition can go awry en bloc, then we’re not talking bad apples, for it seems that the very enterprise of research can deviate systematically from the paths of truth. What forces can push research that way?

One is funding from industry. Marion Nestle of New York University, author of Food Politics, has described how, on her first day editing the Surgeon General’s Report on Nutrition and Health, she was “given the rules: no matter what the research indicated, the report could not recommend ‘eat less meat’ … nor could it suggest restrictions on the intake of any other category of food.” The food producers, Nestle found, had Congress in their grip, and the politicians would simply block the publication of a report that threatened a commercial interest.

Research funded directly by industry and published in the peer-reviewed literature is just as suspect. David Ludwig of Harvard has shown that studies funded, at least in part, by drink manufacturers are four to eight times more likely to report good news about commercial drinks than studies funded independently, and not one paper funded wholly by the drink manufacturers reported any bad news.

Since scientific journals in some disciplines can be dominated by papers funded by industry — and since (not many people realize this) the great medical charities and foundations can be startlingly dependent on commercial funding — the food industry can ensure that little is published that would “suggest restrictions on the intake” of any category. And the little that does manage to get published can be swamped by the mass of commercially helpful papers. Other industries can monopolize their relevant scientific literature almost as exhaustively.

But government funding is just as distorting. Daniel Sarewitz of Arizona State University, in an essay entitled “Saving Science,” argued that the model by which governments fund science is flawed precisely because government funding is designed to isolate scientists from the real world and place them (cliché alert) in ivory towers. “It’s technology that keeps science honest,” Sarewitz says: only if scientists know their ideas will be tested against reality will they stay honest.

But too many government-funded scientists are not tested against reality. Rather, they are judged by their peers within the government funding agencies. So, if those agencies believe fat causes cardiovascular disease and obesity, they will preferentially award grants to researchers who select their findings to confirm the agencies’ paradigm. Those grantees will then get promoted until they too join the panels of the government granting agencies, whereupon the cycle of error will reinforce itself.

Max Planck once said that science advances funeral by funeral, as individual pieces of bad science die with their progenitors. But once an entire field has been infected with error, bad science can become self-perpetuating, through a process Paul Smaldino and Richard McElreath described in Royal Society Open Science as “the natural selection of bad science.”

Everything I’ve written here is well known (indeed, every statement comes from well-cited papers from the peer-reviewed literature), but I now want to make two personal statements that are safest made anonymously.

I did my Ph.D. at a world-famous research university, where I witnessed many examples of “questionable research practices,” such as omitting inconvenient data from published work. Initially, I was shocked, but not as shocked as I might have been, because as an undergraduate at a prestigious university, I’d already been taught how to cheat.

As undergraduates, we had to perform a host of laboratory “experiments” that were not experiments at all. They were laboratory exercises whose ultimate findings were predetermined, and the closer our reported results approached those predetermined findings, the better marks we received from our professors.

We undergraduates were obliged to attend demonstrations where a professor might, for example, inject an anaesthetized animal with adrenaline, whereupon the creature’s heart rate and blood pressure would rise. We’d then be provided with photocopies of the readings, which we’d tape into our lab books (this was some years ago) and which we’d write up to demonstrate our understanding of the basic physiological principles. But occasionally an animal wouldn’t respond as it was meant to, and its blood pressure might, say, fall on being injected with adrenaline. Whereupon the professor would circulate pre-prepared photocopies of experiments that had “worked,” which we were expected to write up as if the demonstration had gone as expected.

So my exposure to questionable research practices as a Ph.D. student was no surprise: I’d long understood that was how science worked. Nonetheless, during my Ph.D. years, I learned to temper my own questionable research practices with common sense. If I was confident I knew how a drug worked, then I might tidy a graph for publication by removing the odd outlying data point, but I’d never “tidy” a graph whose message was actually obscure, because the person I’d be fooling would be myself.

And then I did a post-doc at a good, respectable university. And it was good and it was respectable, but it wasn’t stellar the way my Ph.D. university had been. And at the good and respectable university, I encountered a culture I’d never previously encountered in research, namely a culture of scrupulous honesty. Reader, it was hopeless. The construction of statistically significant graphs that during my Ph.D. years might have taken six weeks now took nine.

Lingering so long over the construction of those graphs didn’t speed the discovery of truth; it was like wading through treacle, slow and laborious. And that’s when I got my insight: At my stellar Ph.D. university, I’d learned that great scientists know which corners to cut (didn’t Einstein invent the cosmological constant to get out of an empirical hole?). But had I done my Ph.D. at the good and respectable university, I’d have learned that science was meticulous and scrupulous, and I’d have spent my career being scooped by competitors from stellar institutions. There is such a thing as the natural selection of career stagnation.

The other personal statement I’d like to make anonymously is that I’ve met many of the heroes of the reproducibility crisis (men such as Ioannidis, Nosek, and Fanelli), and they’re different from the usual run of researchers who lead big and successful labs. They’re not alpha males. Instead, the reproducibility heroes are quiet, modest, and thoughtful. Frankly, there’d be no place for them at the forefront of most scientific disciplines, which is perhaps why they’re quietly undermining the structure of an enterprise that might not otherwise be too welcoming.

Comments on Why Do Scientists Cheat?

9 Comments

Bruce Warren
October 23rd, 2019 at 1:46 pm

One obvious correlation is the rampant fraud in "climate science". In this genre, the fraud is driven by a religiously held belief in ACG - for which there is little to no reproducible research in support. Yet, computer models are constantly published as "science".

Leonardo Nascimento
October 23rd, 2019 at 4:39 pm

First of all, computer models are obviously science. Most advancements in physics, chemistry, engineering, etc. are based on computer models. Second, climate science is not based only on climate models; there is a great deal of experimental data involved. Also, there is no religiously held belief, as criticisms keep being refuted time after time. While the point about "low-standard journals" is valid and there is a lot of shitty research out there (especially in the medical sciences and the like that take correlation as causation), climate science is not one of them. Do your research, buddy.

Richard Feinman
October 23rd, 2019 at 11:53 am

The problem is real and extensive, but this piece is not accurate or appropriate. First, "cheat" implies intention, and the standard witticism in this business, Hanlon’s Razor, states that you should not invoke malice until you’ve excluded stupidity. Especially in nutrition and medicine, but even in hard (that is, mathematical) science, there is honest but limited understanding of scientific and statistical principles. One problem rests substantially with the journals, which maintain very low standards. To take one extreme example, medical journals accept, and may even demand, an “intention-to-treat” analysis of data. The principle says that if a subject is randomly assigned to an intervention, their data must be included in the analysis even if they dropped out of the study: even if you don’t take the pill, your outcome will tell us whether the pill is good or not. The idea is as foolish as it sounds. I and several others, including professional statisticians, have called it out for what it is (Intention-to-treat. What is the question? Nutr Metab (Lond). 2009; 6: 1; doi: 10.1186/1743-7075-6-1), yet the practice persists. Intention-to-treat analysis answers the question “What is the effect of being assigned to an intervention?” but most of us are more interested in the actual outcome if you do take the pill. In exasperation, many authors report both intention-to-treat and “per protocol” data (the results for the subjects who actually did what the experiment required), but many others report only intention-to-treat. The motivation seems to be screwy thinking: while intention-to-treat is good for showing that low-carb diets and controls are “the same out to one year,” it almost always makes your data look worse than it is.
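To make the distinction concrete, here is a minimal simulation, with hypothetical numbers of my own choosing rather than anything from the cited paper, of how dropouts dilute an intention-to-treat estimate relative to a per-protocol one:

```python
# Hypothetical sketch: a treatment lowers an outcome by 10 units, but 40%
# of subjects assigned to it drop out and get no benefit. Intention-to-
# treat (ITT) averages over everyone assigned; per-protocol (PP) averages
# only over those who complied.
import random
from statistics import mean

random.seed(0)
N, EFFECT, DROPOUT = 10_000, -10.0, 0.4

control = [random.gauss(100, 15) for _ in range(N)]
complied = [random.random() > DROPOUT for _ in range(N)]
treated = [random.gauss(100, 15) + (EFFECT if c else 0.0) for c in complied]

itt = mean(treated) - mean(control)
pp = mean(x for x, c in zip(treated, complied) if c) - mean(control)

print(f"ITT estimate: {itt:+.1f}")  # ~ -6: diluted toward zero by dropouts
print(f"PP estimate:  {pp:+.1f}")   # ~ -10: the effect among compliers
```

The ITT estimate answers “what happens if you are assigned the pill,” while the per-protocol estimate answers “what happens if you take it,” which is exactly the gap described above.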


The article is also somewhat inaccurate. Without in any way underestimating Gary Taubes’s contribution (I have described him as the Thomas Paine of the low-carbohydrate revolution), he did not effect a “successful overthrow … of the long-dominant hypothesis that dietary fat causes heart disease, strokes, and obesity” but rather tried, with some success, to bring out to the public what numerous papers in the literature had already demonstrated. Nonetheless, we are burdened with what I would call many Mulvaneys: the USDA guidelines say that total fat is not of concern but still recommend low-fat products.


There are numerous problems, but a single overriding cause is the poor standard of journal editorial practice. The Journal of Biological Chemistry has begun to set up some guides to practice. A recent article explains the difference between standard deviation (SD) and standard error of the mean (SE), which I will explain in the next comment.
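For readers who want the distinction now, here is a quick sketch of the textbook relationship (my summary, not the JBC article’s text): SD describes the spread of individual observations and stays roughly constant as you collect more data, while SE = SD/√n describes the uncertainty of the estimated mean and shrinks with sample size, which is why SE bars can make noisy data look deceptively tight.

```python
# SD measures the spread of the data; SE = SD / sqrt(n) measures the
# uncertainty of the sample mean. SD is stable as n grows; SE shrinks.
import math
import random

random.seed(1)
for n in (10, 100, 1000):
    xs = [random.gauss(50, 12) for _ in range(n)]
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))  # sample SD
    se = sd / math.sqrt(n)                                   # SE of mean
    print(f"n={n:4d}  SD={sd:5.1f}  SE={se:5.2f}")
```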

Bruce Warren
October 23rd, 2019 at 1:46 pm

Sorry, man. You're dead.

Russ Greene
October 24th, 2019 at 4:33 pm

Dr. Feinman,

Regarding intentionality, I wonder what you think of the source this article cited to support that point:

"How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data"

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0005738


Yes, the study was based on survey data, but I am not sure how else one could systematically evaluate this issue.


Here are the results:

"A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices."

Lamiece Hassan
October 23rd, 2019 at 6:45 am

Good scientists know where to cut corners? There’s no place for anyone but alpha males at the forefront of science? Unless, maybe, other, more modest men. Have I misunderstood something here? As a scientist, CrossFit fan and a woman, I’m disappointed that you’re giving a worldwide platform to this kind of piece.

Russ Greene
October 23rd, 2019 at 11:41 am

Lamiece,

I cannot tell if you misrepresented, or merely misunderstood, this piece.


Not once did "Anonymous" seriously suggest that "good scientists know where to cut corners." Instead the author lamented the fact that this is what he/she learned at his/her "stellar PhD university." The author's point is that the incentives in science do not reward "meticulous and scrupulous" research, and this is unfortunate.


Moreover, the piece does not endorse the presence of "alpha males" at big and successful labs. Quite the opposite. It warns of the problems inherent with the alpha males' aggressive, even reckless approach to science.


The three scientists the author identified as "men" are in fact men. One may not reasonably infer sexist bias from describing men as men any more than one may infer jingoistic tendencies if an author describes Americans as Americans.


Perhaps you object to the absence of female sources, but that concern too is misplaced. The author cites Marion Nestle. Marion is in fact a woman. Perhaps you assumed otherwise, but that betrays your own views, not the author's. (Please do not infer anything about my personal worldview from my accurate identification of Marion as a female).


Now, if you have any sound objections to the substance of this piece I'd love to hear them.

Lamiece Hassan
October 24th, 2019 at 5:36 am

Russ, my point is that this piece presents a bleak, masculine, cynical view of science, and whether the author agrees with the current state of affairs or not, the editorial decision to publish this perpetuates a series of unhelpful, narrow stereotypes about science and what it means to be a scientist.


Science is so much more diverse than this, and I would have much preferred to see constructive thoughts about how we tackle these challenges instead of clickbait titles tarnishing the image of scientists as ‘cheats.’ This may have been the author’s personal experience - and I’m sorry about that, because it is not universal - but I don’t think CrossFit.com is the right place for this kind of piece. In my opinion, there are a thousand other papers and commentaries that could be more relevant to a CrossFit audience.

Russ Greene
October 24th, 2019 at 2:28 pm

The view may be "masculine" and "bleak." I might even grant you "unhelpful," at least in the sense that it's unlikely to be followed by any meaningful reform. I'd not ever accede to "inaccurate" or "irrelevant," though, for the simple reason that neither is true. If you believe the claims to be inaccurate, the burden is on you to not merely describe them, but rebut the arguments made and the sources upon which they rely. This you have not done.


Since CrossFit began to expose bad science, it has encountered many critics like yourself. Rather than meaningfully disputing our allegations, these critics asked why CrossFit is speaking up.


How many instances of scientific misconduct, hidden funding, and other unethical behavior in science must CrossFit expose before it earns the right to speak, in your view?


Would taking down the most prolific researcher in exercise science, William Kraemer, for fraud suffice? Or proving that the strength and conditioning field's premier association had committed scientific misconduct and perjury?


Or proving that the CDC and NIH's foundations had failed to comply with their legal requirement to annually, publicly report the source, amount and restrictions associated with each payment they accepted? How about suing the Department of Health and Human Services over its failure to comply with the Freedom of Information Act, and preventing the department from retroactively redacting emails suggesting its own staff was aware of unethical behavior occurring regarding its corporate partnerships?


These are just a few instances. If you will just delineate what amount of work in the field would grant us the right to speak in your eyes, I am happy to lay out the full scope of our activities.
