A few years ago, two researchers selected the 50 most commonly used ingredients in a cookbook and looked at how many had been associated with either an increased or a decreased cancer risk in studies published in scientific journals. The answer: 40 out of 50, a list including salt, flour, parsley and even sugar. "Is everything we eat associated with cancer?" they then asked, ironically, in their 2013 article.
Their question touches on a well-known but persistent problem in research: too many studies use samples that are too small to support generalizable conclusions. But the pressure on researchers, the competition between journals and the media's insatiable appetite for studies announcing revolutions or major discoveries mean that such articles keep getting published.
A majority of bad articles
"The majority of articles published, even in serious journals, are bad," says one of the authors, John Ioannidis, professor of medicine at Stanford, de facto specialized in the study of studies. This defender of bad scientific research showed in 2005 in a resounding article "Why most published studies are wrong."
Since then, he says, only limited progress has been made. Some journals now require authors to provide their raw data and to publish their protocol in advance. Such transparency keeps researchers from torturing their methods and data until they find a result, whatever it may be, and it allows others to check or "replicate" the study. Because when experiments are redone, they rarely lead to the same results.
Not the same results
Only a third of 100 studies published in the three most prestigious psychology journals could be reproduced by other researchers, according to an analysis published in 2015. Medicine, epidemiology, clinical drug trials and … studies on nutrition do not fare much better, insists John Ioannidis, especially when it comes to replications.
"In the biomedical sciences and moreover, scientists have only a superficial training in statistics and methodology, "adds John Ioannidis. Too many studies focus on only a few individuals, making it impossible to generalize to an entire population, because the selected participants are unlikely to be representative.
Food: an appalling field
Nutrition is "one of the most appalling areas," continues Professor Ioannidis, and not only because of conflicts of interest with the agri-food industry. Researchers often go hunting for correlations in huge databases without any starting hypothesis. In addition, "measuring a diet is extremely difficult," he explains.
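The hazard of hypothesis-free correlation hunting can be sketched with another small simulation, again our own illustration rather than anything reported in the article: 50 made-up dietary variables, none of which is truly related to the outcome, are each tested for correlation, and a few still cross the usual 5% significance bar by chance alone.

```python
# Illustrative sketch of data dredging: none of the "ingredients" is related
# to the outcome, yet some correlations look "significant" purely by chance.
import math
import random

random.seed(0)
n_people = 200
outcome = [random.gauss(0, 1) for _ in range(n_people)]

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

false_positives = 0
for _ in range(50):  # 50 unrelated "ingredients"
    ingredient = [random.gauss(0, 1) for _ in range(n_people)]
    r = pearson_r(ingredient, outcome)
    # Approximate two-sided 5% cutoff for |r| with this many observations.
    if abs(r) > 1.96 / math.sqrt(n_people):
        false_positives += 1

print(f"{false_positives} of 50 unrelated ingredients look 'significant'")
```

On average, a 5% threshold applied to 50 unrelated variables yields two or three false positives, which is why results found by trawling databases need to be confirmed by studies designed to test them.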
How do you quantify exactly what people eat? Even when the method is sound, with a randomized study in which participants are randomly assigned to groups, the execution sometimes leaves something to be desired. A famous 2013 study on the benefits of the Mediterranean diet against heart disease had to be withdrawn in June by the most prestigious medical journal, the New England Journal of Medicine, because not all participants had been randomly assigned; the results were revised downward.
What questions to ask?
So what should one retain from the stream of studies published every day? John Ioannidis recommends asking the following questions: Is the study isolated, or does it reinforce existing work? Is the sample small or large? Is it a randomized experiment? Who financed it? Are the researchers transparent? These precautions are fundamental in medicine, where bad studies have contributed to the adoption of treatments that are at best ineffective and at worst harmful.
In their book "Ending Medical Reversal", Vinayak Prasad and Adam Cifu offer terrifying examples of practice adopted on the basis of studies that were invalidated years later, such as stenting (mini prostheses) in a brain artery to reduce the risk of stroke. It was not until after ten years that a rigorous study showed that the practice … actually increased the risk of stroke.
More transparency
The solution requires a collective tightening of standards by all the players in research, not just journals: universities, public funding agencies, laboratories … But these institutions are all subject to competition. "The system does not encourage people to go in the right direction," says Ivan Oransky, co-founder of Retraction Watch, a site that tracks retractions of scientific articles.
"We want to develop a culture where we reward transparency ". The problem also comes from the media, which according to him must better explain to their readers the uncertainties inherent in scientific research, and resist sensationalism. "The problem is the endless succession of studies on coffee, chocolate and red wine," he complains. "We must stop."
AFP