Haruko Obokata published two papers in January 2014 that described how ordinary blood cells can be turned into pluripotent stem cells.
At the time, this was a coup: it greatly simplified a previously complicated process and opened up new possibilities for medical and biological research, while neatly sidestepping the bioethical concerns associated with harvesting stem cells from human embryos.
Better still, the process was straightforward, involving nothing more than applying a weak acid solution or mechanical pressure – eerily similar to how you might clean a rust stain off a knife.
Within days, scientists noticed that some of the images in the papers looked suspect, and a wider skepticism set in. Could it really be that simple?
As the experiments were straightforward and biologists curious, attempts to replicate the papers' results began immediately. They failed. In February, Obokata's institute launched an investigation. In March, some of the papers' co-authors disavowed the methods. In July, the papers were retracted.
While the papers were clearly unreliable, there was no clarity about the source of the problem. Did the authors mislabel a sample? Did they stumble on a method that worked once but was inherently unreliable? Did they simply invent the data? It took years longer, but the scientific community got a rough answer when other related Obokata papers were also retracted for image manipulation, data irregularities, and other problems.
The entire episode was a perfect example of science correcting itself. An important result was published; it was questioned, tested, scrutinized, and found wanting… and then it was retracted.
This is how we might hope the process of organized skepticism always works. But this is not what usually happens.
In the vast majority of scientific work, it’s incredibly rare that other scientists even notice irregularities in the first place, let alone mobilize the global forces of empiricism to do something about them. The underlying assumption of academic peer review is that fraud is rare or insignificant enough that it does not merit a dedicated detection mechanism.
Most scientists assume that they will never encounter a single case of fraud in their careers, and so even the idea of checking the calculations in papers they review, rerunning the analyses, or verifying that experimental protocols were properly implemented is deemed unnecessary.
Worse yet, the raw data and analytical code needed to forensically examine a paper are not routinely published, and performing this kind of rigorous review is often viewed as a hostile act, the sort of arduous work reserved only for the deeply motivated or the congenitally disagreeable.
Everyone is busy with their own work, so what kind of crank would go to such lengths to invalidate someone else's?
Which brings us neatly to ivermectin, an antiparasitic drug being tested as a treatment for COVID-19 after lab studies in early 2020 showed it to be potentially beneficial.
Its popularity rose sharply after an analysis published – and subsequently withdrawn – by the Surgisphere group showed a huge reduction in death rates among people who took it, triggering a massive surge in the drug's use around the world.
More recently, the evidence for ivermectin's effectiveness has relied very heavily on a single piece of research, which was released as a preprint (i.e., published without peer review) in November 2020.
This study, drawn from a large cohort of patients and reporting a strong therapeutic effect, was popular: read over 100,000 times, cited by dozens of academic papers, and included in at least two meta-analytical models that showed that ivermectin was, as the authors claimed, a “wonder drug” for COVID-19.
It is no exaggeration to say that this article alone has caused thousands, if not millions, of people to receive ivermectin to treat and/or prevent COVID-19.
A few days ago, the study was withdrawn amid accusations of fraud and plagiarism. A master's student who had been assigned the paper as part of his degree noticed that the entire introduction appeared to be copied from earlier scientific papers, and further analysis revealed that the study's data file, published online by the authors, contained obvious irregularities.
It is hard to overstate what a monumental failure this is for the scientific community. We, the proud gatekeepers of knowledge, accepted at face value research so full of holes that it took a student only a few hours to completely dismantle it.
The seriousness accorded to the results was in direct contrast to the quality of the study. The authors reported incorrect statistical tests at multiple points, extremely implausible standard deviations, and truly mind-boggling positive efficacy – the last time the medical community found a "90 percent benefit" for a drug against a disease, it was the use of antiretroviral drugs to treat people dying of AIDS.
And yet no one noticed. For nearly a year, serious and respected researchers included this study in their reviews, doctors used it as evidence to treat their patients, and governments acknowledged its results in public health policy.
No one spent the five minutes it took to download the data file the authors had uploaded online and notice that it reported numerous deaths occurring before the study had even begun. No one pasted phrases from the introduction into Google, which is all it takes to see how identical they are to previously published papers.
This inattention and inaction has perpetuated the saga – as long as we remain studiously disinterested in the problem, we don't know how much scientific fraud there is, or where it can be easily located or identified, and consequently we make no solid plans to address or mitigate its effects.
A recent editorial in the British Medical Journal argued that it may be time to change our fundamental perspective on health research, and assume that it is fraudulent until proven otherwise.
That is, not to assume that all researchers are dishonest, but to approach new findings in health research with a categorically different baseline level of skepticism, as opposed to blind trust.
That may sound extreme, but if the alternative is accepting that millions of people will sometimes be given medications based on unverified research that is later retracted entirely, it may well be a very small price to pay.
The opinions expressed in this article do not necessarily reflect the opinions of the editorial staff of ScienceAlert.