Bias has become rampant in medical research. (Bodenheimer, 2000) There are several reasons medical science can bend towards misleading or incorrect conclusions without any overt fraud going on:
- Researchers want to make headlines and be part of something successful, so they tend to chase splashy findings, which are also the ones most likely to be false, and they favor results that affirm a theory over those that contradict it. As physician Yoon Loke states, “I myself was an editor of a scientific journal, and often you want to publish interesting, positive things that people want to read. It’s an optimism bias.” (Mandelbaum, 2017)
- Momentum builds behind the status quo. Scientists may devote a lot of time to a certain theory, and so they want to see that time count for something, which leads to a rejection of contradictory findings. As researcher John Ioannidis says, “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it.” (Freedman, 2010)
- Researchers need to make a living, and to do that they need to get funded. Since a great deal of funding comes from corporations, research bends toward pleasing those corporations, in the same way magazines inevitably must sacrifice some of their editorial discretion to please advertisers.
- Subconscious biases bend us towards our personal interests, even when we aren’t aware of it. So if a scientist is receiving money from a drug company to study a drug, that hidden bias pulls him towards interpreting the data in a way more favorable to the drug. Humans are people-pleasers by nature, and we’re very group-oriented. If I’m being supported by a drug company, I’ll feel a sense of belonging and loyalty to that group, which means a desire to please that company.
The reliability of drug research
All of these things can end up biasing medical research even when there isn’t any outright fraud going on. When researcher John Ioannidis first began looking into the problem of erroneous or fraudulent findings, he was on a mission to prove that scientific research was sound and reliable. What he discovered shook him to the core and changed his views entirely.
He was astonished at how many problems there were. “From what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals” – the entire process was open to bias. He also found that “researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process – in which journals ask researchers to help decide which studies to publish – to suppress opposing views.” (Freedman, 2010)
In a paper published in 2005 in PLoS Medicine, Ioannidis laid out strong mathematical evidence showing that the majority of published findings were either flat-out wrong or produced by chance. His model predicted that 80% of non-randomized studies (the most common type of study by far) are wrong, as are 25% of the supposedly gold-standard randomized trials and as much as 10% of the platinum-standard large randomized trials. (ibid) Yet when it comes to pharmaceuticals, randomized controlled studies are available for fewer than 10% of medicines. (Strite & Stuart, 2011) This means that the foundational evidence for roughly 90% of drugs rests on the lower-quality studies that are wrong 80% of the time.
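To make the implied arithmetic explicit, here is a back-of-envelope illustration that simply multiplies the two figures together (an assumption of this sketch; neither source performs this calculation):

$$0.90 \times 0.80 = 0.72$$

On that crude reading, the key supporting evidence for roughly seven out of every ten drugs would be expected to be wrong.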
A second article, published by Ioannidis in the Journal of the American Medical Association, took a look at 49 published studies that were considered to be the best of the best. Of the 49, 45 had claimed to uncover effective interventions. Thirty-four of those claims had been retested, and 14 of them, or 41% of those retested, had been convincingly shown to be wrong or significantly exaggerated. “If between a third and half of the most acclaimed research in medicine was proving untrustworthy,” says Freedman, “the scope and impact of the problem were undeniable.” (Freedman, 2010)
It’s a problem that seems to be trending worse, judging by the number of studies which have had to be retracted in recent years. In 2012, 415 articles in scientific journals were retracted, up from 46 in 2002, according to Thomson Reuters Web of Science, an index of peer-reviewed journals. (Inagaki, 2013)
“If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” Ioannidis says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.” (Freedman, 2010, p. 86)