Researcher says most scientific studies are "false"




Researcher John Ioannidis recommends asking the following questions to determine whether a study is reliable: Is it an isolated investigation or does it reinforce existing work? Is the sample small or large? Is it a randomized experiment? Who funded it? Are the researchers transparent? (AP Photo / Mark Duncan) Photo for illustrative purposes

Washington. A few years ago, two researchers selected the 50 most commonly used ingredients in a cookbook and analyzed how many of them had been associated with a cancer risk or benefit in studies published in scientific journals.

The answer: 40 out of 50, a list that includes salt, flour, parsley and sugar. "Everything we eat is associated with cancer," they quipped in their article, published in 2013.

Their exercise points to a well-known but persistent problem in the research world: many studies use samples that are too small to support generalizable conclusions.

But pressure on researchers, competition between journals and the media's insatiable appetite for studies announcing revolutions or great discoveries mean that such articles continue to be published. "Most of the articles published, even in serious journals, are lax," says one of the authors, John Ioannidis, a professor of medicine at Stanford who specializes in the study of studies.

He demonstrated the weaknesses of scientific research in a 2005 article, "Why Most Published Research Findings Are False."

Since then, he says, only some progress has been made.

Requirements for Transparency

Some journals require authors to provide their raw data and publish their protocol in advance. This transparency prevents researchers from twisting their methods and data to find a result, no matter what. It also allows others to check or "replicate" the study.

When experiments are repeated, they rarely lead to the same results. Only a third of 100 studies published in the three most prestigious psychology journals could be reproduced by other researchers, according to an analysis published in 2015.

Medicine, epidemiology, clinical drug trials and nutritional studies fare little better, insists Ioannidis, especially when it comes to replication.

"In the biomedical sciences and elsewhere, scientists do not train enough in statistics and methodology," he adds.

Too many studies rely on only a handful of participants, making it impossible to generalize to an entire population, since such a small sample is unlikely to be representative.

"The diet is one of the most regrettable areas," continues Professor Ioannidis, not just because of conflicts of interest with the agri-food industry. Researchers often look for correlations in huge databases without a starting hypothesis.

In addition, "measuring a diet is extremely difficult," he explains. How to quantify exactly what people eat?

Even when the method is sound, as in a randomized study where participants are assigned by chance, the execution sometimes leaves something to be desired.

A famous 2013 study on the benefits of the Mediterranean diet for heart disease had to be retracted in June by the prestigious medical journal The New England Journal of Medicine because the participants had not all been randomly assigned; the results were revised downward.

So what to choose in the avalanche of studies published every day?

Ioannidis recommends asking the following questions: Is it an isolated study or does it strengthen existing work? Is the sample small or large? Is it a randomized experiment? Who funded it? Are the researchers transparent?

These precautions are fundamental in medicine, where poor studies contribute to the adoption of treatments that are ineffective at best and harmful at worst.

. Medical Reversal, "Vinayak Prasad and Adam Cifu relate terrible examples of practices adopted on the basis of studies that have been invalidated years later, such as placing stents in a brain artery to reduce the risk of pain. stroke. Ten years later, a rigorous study showed that the practice actually increased the risk of stroke.

The solution requires all the actors in research, not just journals, to collectively adopt common standards: universities, public funding agencies, laboratories. But all of these entities are subject to competition.

"The system does not encourage people to go in the right direction," says Ivan Oransky, co-founder of Retraction Watch's website, which covers withdrawals of scientific articles. "We want to develop a culture in which we reward transparency."

The media also bear their share of responsibility; according to him, they must better explain to their readers the uncertainties inherent in scientific research and avoid sensationalism.

"The problem is the endless succession of studies on coffee, chocolate and red wine," he complains. "We have to stop."
