Risk of being scooped pushes scientists to adopt low-quality methods | Science




In the race for the COVID-19 vaccine, second place still offers glory, unlike in some scientific fields.

KORA_SUN/SHUTTERSTOCK, ADAPTED BY C. SMITH/SCIENCE

By Cathleen O’Grady

Leonid Tiokhin, a metascientist at Eindhoven University of Technology, learned early in his career to fear being scooped. He recalls emails from his undergraduate adviser stressing the importance of publishing first: "We better hurry, we better rush."

A new analysis by Tiokhin and his colleagues shows how risky this competition is for science. Rewarding the researchers who publish first pushes them to cut corners, their model shows. And while some proposed reforms to science may help, the model suggests that others could unintentionally make the problem worse.

Tiokhin's team isn't the first to argue that competition poses risks to science, says Paul Smaldino, a cognitive scientist at the University of California (UC), Merced, who was not involved in the research. But the model is the first to explore in detail precisely how those risks play out, he says. "I think it's very powerful."

In the digital model, Tiokhin and his colleagues built a toy world of 120 scientist "bots" competing for rewards. Each bot in the simulation worked through a series of research questions, collecting data along the way. The bots were programmed with different strategies: Some were more likely than others to collect large, meaningful data sets. And some tended to abandon a research question if another bot published on it first, whereas others stubbornly persisted. As the bots made discoveries and published, they racked up rewards, and those with the most rewards passed on their methods more often to the next generation of researchers.

Tiokhin and his colleagues tracked which tactics evolved to be most successful across 500 generations of scientists, under different simulation parameters. When they gave bots bigger rewards for publishing first, the bots tended to rush their research and collect less data, filling the literature with unreliable results, the team reports today in Nature Human Behaviour. When the difference in rewards was less extreme, the bots settled on larger sample sizes and a slower publication rate.
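The core loop of such an agent-based simulation can be sketched in a few dozen lines of Python. This is not Tiokhin's actual model: the population size, reward values, power of the noise, and selection rule below are simplified assumptions. But the sketch reproduces the headline dynamic, in which a large first-mover bonus drives evolved sample sizes down, while a modest one lets larger samples win out.

```python
import random

random.seed(1)

N_BOTS = 20        # population size (the paper's model uses 120)
GENERATIONS = 200  # the paper evolves strategies over 500 generations
QUESTIONS = 50     # research questions contested per generation

def run_generation(sample_sizes, first_mover_bonus):
    """Bots race on each question: the fastest (smallest-sample) bot
    publishes first and collects a bonus, while every bot also earns
    a quality reward that grows with its sample size."""
    payoffs = [0.0] * len(sample_sizes)
    for _ in range(QUESTIONS):
        # Research time is proportional to sample size, plus noise.
        times = [n * random.uniform(0.8, 1.2) for n in sample_sizes]
        winner = min(range(len(times)), key=times.__getitem__)
        for i, n in enumerate(sample_sizes):
            payoffs[i] += min(n, 100) / 100          # quality reward
            if i == winner:
                payoffs[i] += first_mover_bonus      # reward for being first
    return payoffs

def evolve(first_mover_bonus):
    """Payoff-proportional selection with small mutations: successful
    bots pass their sample-size strategy to the next generation."""
    sizes = [random.randint(2, 100) for _ in range(N_BOTS)]
    for _ in range(GENERATIONS):
        payoffs = run_generation(sizes, first_mover_bonus)
        sizes = [max(2, random.choices(sizes, weights=payoffs)[0]
                     + random.choice([-2, -1, 0, 1, 2]))
                 for _ in range(N_BOTS)]
    return sum(sizes) / len(sizes)

racing_mean_n = evolve(first_mover_bonus=10.0)   # strong race to be first
relaxed_mean_n = evolve(first_mover_bonus=0.5)   # weak first-mover advantage
print(racing_mean_n, relaxed_mean_n)
```

Under these toy parameters, the racing world evolves far smaller average sample sizes than the relaxed one.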

The simulations also allowed Tiokhin and his colleagues to test the effects of reforms intended to improve the quality of scientific research. For example, PLOS journals, as well as the journal eLife, offer "scoop protection," which gives researchers the chance to publish their work even if they come in second. There is no evidence yet on whether these policies work in the real world, but the model suggests they should: Bigger rewards for scooped research led the bots to favor larger data sets as the winning tactic.

But the results held a surprise. Rewarding scientists for publishing negative results, an oft-discussed reform, reduced the quality of the research, because bots learned they could run studies with small samples, find nothing of interest, and still be rewarded. Proponents of publishing negative results often point to the danger of publishing only positive findings, which creates publication bias and hides the negative results that help build a complete picture of reality. But Tiokhin says the modeling suggests that rewarding researchers for publishing negative results, without attending to the quality of the research, will push scientists to "do the crappiest studies they can."
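This perverse incentive can be seen in a back-of-the-envelope payoff calculation. The power curve, the fixed write-up overhead, and the reward values below are all illustrative assumptions, not the paper's, but the logic carries over: once null results pay as well as positive ones, the reward-maximizing sample size collapses to the minimum.

```python
def detection_power(n):
    # Toy power curve: chance a study with sample size n detects a real effect.
    return 1 - 0.9 ** n

def reward_rate(n, r_positive=1.0, r_null=0.0, overhead=20):
    # Expected reward per unit of research time; each study costs a fixed
    # write-up overhead plus one time unit per data point (an assumption).
    expected = detection_power(n) * r_positive + (1 - detection_power(n)) * r_null
    return expected / (overhead + n)

# Optimal sample size when only positive findings are rewarded...
best_positive_only = max(range(1, 101), key=reward_rate)
# ...and when null results earn the same reward as positive ones.
best_nulls_rewarded = max(range(1, 101), key=lambda n: reward_rate(n, r_null=1.0))
print(best_positive_only, best_nulls_rewarded)
```

With nulls rewarded equally, every study pays the same regardless of outcome, so the best strategy is the smallest, fastest study possible.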

In the simulations, making it harder for the science bots to launch cheap studies helped correct the problem. Tiokhin says this underscores the value of real-world reforms such as registered reports, in which study designs are peer reviewed before data collection. These require researchers to invest more effort early in their projects and discourage cherry-picking of data.
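The same kind of toy payoff calculation (again with an illustrative power curve and made-up costs, not the paper's numbers) shows why raising the upfront cost of a study pushes the reward-maximizing strategy toward larger samples.

```python
def detection_power(n):
    # Toy power curve: chance a study with sample size n detects a real effect.
    return 1 - 0.9 ** n

def best_sample_size(startup_cost):
    # Sample size maximizing expected reward per unit time, when only positive
    # findings pay off and each study costs startup_cost + n time units.
    return max(range(1, 201), key=lambda n: detection_power(n) / (startup_cost + n))

cheap_to_start = best_sample_size(startup_cost=5)    # easy to fire off studies
costly_to_start = best_sample_size(startup_cost=40)  # e.g., preregistration effort
print(cheap_to_start, costly_to_start)
```

When each study carries a large fixed cost, it no longer pays to churn out many tiny studies, so the optimal sample size grows.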

Science is supposed to seek truth and self-correct, but the model helps explain why science sometimes veers in the wrong direction, says Cailin O'Connor, a philosopher of science at UC Irvine who was not involved in the work. The simulations, in which bots collect data points and test them for significance, mirror fields like psychology, animal research, and medicine more than others, she says. But the patterns should be similar across disciplines: "It's not based on a few fragile details of the model."

Scientific disciplines vary along the same dimensions as the simulated worlds: in how much they reward being first to publish, how likely they are to publish negative results, and how hard it is to launch a project. Tiokhin now hopes meta-researchers will use the model to guide research into how these dynamics play out among flesh-and-blood scientists.
