A few years ago, two researchers selected the 50 most commonly used ingredients in a cookbook and analyzed how many of them had been associated with a cancer risk or benefit in studies published in scientific journals.
The answer: 40 out of 50, a list that includes salt, flour, parsley and sugar. "Everything we eat is associated with cancer," they quipped in their 2013 article.
Their question points to a well-known but persistent problem in research: many studies use samples that are too small to support generalizable conclusions.
But pressure on researchers, competition between journals and the media's insatiable appetite for studies announcing revolutions or great discoveries keep such articles coming.
"Most published articles, even in serious journals, are lax" says one of the authors, John Ioannidis, professor of medicine at Stanford, specializing in the study studies. .
This detractor of bad scientific research demonstrated in a 2005 article "why most published studies are wrong".
Since then, he says, only some progress has been made.
Some journals now require authors to provide their raw data and to publish their protocol in advance. This transparency prevents researchers from adjusting their methods and data until they obtain a result at any cost. It also allows others to check, or "replicate", the study.
Because when experiments are redone, they rarely yield the same results. Only about one-third of 100 studies published in three of the most prestigious psychology journals could be reproduced by other researchers, according to an analysis published in 2015.
Medicine, epidemiology, clinical drug trials and nutrition studies do not fare much better, Ioannidis insists, especially when it comes to replication.
"In the biomedical sciences and elsewhere, scientists do not train enough in statistics and methodology," he adds.
Too many studies rely on only a handful of participants, which makes it impossible to generalize to an entire population, since such a small group is unlikely to be representative.
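A minimal simulation, not from the article and purely illustrative, shows why small samples are treacherous: the smaller the sample, the more the measured effect swings around the true value, so a handful of participants can easily suggest a benefit or a risk that does not exist in the wider population.

```python
import random
import statistics

# Illustrative sketch: draw repeated small samples from a population whose
# true mean effect is 0, and watch how much the sample estimate fluctuates.
random.seed(42)

def sample_mean(n):
    """Mean of n draws from a population with true mean 0 and std dev 1."""
    return statistics.mean(random.gauss(0, 1) for _ in range(n))

for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    spread = statistics.stdev(means)
    # The spread of sample means shrinks roughly as 1/sqrt(n): tiny samples
    # routinely produce apparent "effects" that vanish in larger ones.
    print(f"n={n:5d}  spread of sample means ~ {spread:.3f}")
```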
Coffee and red wine
"Diet is one of the most regrettable areas" continues Professor Ioannidis, not just because of conflicts of interest with the agri-food industry. Researchers often look for correlations in huge databases, with no starting badumptions.
Moreover, "measuring a diet is extremely difficult" he explains. How to quantify exactly what people eat?
Even when the method is sound, as in a randomized study where participants are assigned to groups at random, the execution sometimes leaves something to be desired.
A famous 2013 study on the benefits of the Mediterranean diet for heart disease had to be retracted in June by the prestigious New England Journal of Medicine because the participants had not all been properly randomized; its results were revised downward.
So, what to choose in the avalanche of studies published every day?
Ioannidis recommends asking the following questions: Is it an isolated study, or does it reinforce existing work? Is the sample small or large? Is it a randomized experiment? Who funded it? Are the researchers transparent?
These precautions are fundamental in medicine, where poor studies contribute to the adoption of treatments that are at best ineffective and, at worst, harmful.
In their book "Ending Medical Reversal", Vinayak Prasad and Adam Cifu link terrible examples of practices adopted on the basis of invalidated studies years later, such as stent placement in a brain artery to reduce the risk of stroke. Ten years later, a rigorous study showed that the practice increased the risk of stroke.
Fixing this would require all the actors of research, not just journals, to collectively adopt common standards: universities, public funding agencies, laboratories. But all of these bodies are in competition with one another.
"The system does not encourage people to go in the right direction" explains Ivan Oransky, journalist co-founder of the Retraction Watch website, which covers withdrawals of scientific articles . "We want to develop a culture in which we reward transparency."
The media also bear their share of responsibility: in his view, they should explain to their readers the uncertainties inherent in scientific research more clearly and avoid sensationalism.
"The problem is the endless succession of studies on coffee, chocolate and red wine," he complains. "We have to stop."