Scientists are trying to get rid of a fundamental measure used in science




Scientists have long argued over one of the most famous tools used to describe scientific certainty: the concept of "statistical significance". Some think it's fine as it is. Others want to tighten the threshold, while still others argue for abandoning it altogether.

Assuming we bit the bullet and abandoned the concept of "probability values" altogether, we would need to replace it with something better. The latest issue of The American Statistician contains more than a few opinions on that front: 43 of them, in fact.

To paraphrase a famous Winston Churchill quote, the p-value has long been the worst way of sorting useful ideas in science ... with the exception of all the other methods that have been tried from time to time.

It's not really the p-value's fault. On its own, the figure simply tells you how likely you are to be backing the wrong horse in your experiment.

Usually, if the value falls below 0.05, it's taken to mean that if the null hypothesis (an explanation of your observations that doesn't involve your brilliant idea) were true, results at least this extreme would turn up less than five percent of the time.

Why five percent? Because of history, really. It's a safer bet than 10 percent without being as strict as 1 percent. Beyond that, there's nothing magical about the number.

There are plenty of statistical tools researchers can use to calculate this figure. Problems arise when we try to translate that mathematical ideal into something the meat computers inside our skulls can actually work with.
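To make the issue concrete, here is a minimal sketch, with entirely made-up data, of how one such tool produces a p-value and how it then gets squeezed through the contested 0.05 cut-off:

```python
# A sketch for illustration only: the numbers and effect are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=30)   # group with no effect built in
treated = rng.normal(loc=11.0, scale=2.0, size=30)   # group with a small shift built in

# Welch's t-test: one of many standard tools that spit out a p-value.
result = stats.ttest_ind(treated, control, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# The contested step: collapsing a continuous measure of surprise
# into a yes/no verdict at an essentially arbitrary threshold.
print("'significant'" if result.pvalue < 0.05 else "'not significant'")
```

The last line is exactly the true-or-false shortcut the critics object to: everything the data have to say is reduced to which side of 0.05 a single number lands on.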

Our brains don't handle probability very well. Perhaps that's because we never evolved to worry about the odds of being eaten by a bear while one was already chewing on our face.

We cope far better with the clean split of a statement into true or false. So a hard cut-off like p < 0.05 goes down all too easily, which is exactly what makes it prone to abuse.

"The world is much more uncertain than that," said the statistician of the University of Georgia, Nicole Lazar, Richard Harris.

Together with Ronald L. Wasserstein, executive director of the American Statistical Association, and Allen Schirm, a retired vice president at Mathematica Policy Research, Lazar wrote an editorial introducing an anthology of reflections on how we might do better than "p".

Clearly there are ways a probability figure can serve us well, but only if we don't misuse it by treating it as if it meant more than it does: a signal that your clever explanation is merely one contender among others.

"Knowing what to do with the p-values ​​is certainly necessary, but that's not enough," writes the trio.

"It's as if statisticians were asking statistics users to tear up the beams and struts that supported the building of modern scientific research without offering solid building materials to replace them."

The articles in the issue don't reach a consensus on what those building materials should look like, but many share some common elements.

For some, retiring statistical significance should ideally give way to tables of data and descriptions of methods that add nuance, humbly weighing the possibilities rather than pleading the case for a single explanation.

"We must learn to embrace uncertainty," write in their book several authors. Nature opinion article.

"A convenient way to do this is to rename the confidence intervals to" compatibility intervals "and interpret them in such a way as to avoid overconfidence."

This isn't just a cosmetic makeover, either. Researchers should actively describe the practical implications of all the values inside those ranges.
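As a rough sketch of what that reporting style might look like (again with invented numbers, and using a standard Welch-style interval rather than anything prescribed by the authors), the emphasis shifts from a verdict to the whole range of effect sizes that remain compatible with the data:

```python
# Illustration only: made-up data, conventional 95% interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, 30)
treated = rng.normal(11.0, 2.0, 30)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)

# Welch-Satterthwaite degrees of freedom for the two-sample comparison.
df = se**4 / (
    (treated.var(ddof=1) / treated.size) ** 2 / (treated.size - 1)
    + (control.var(ddof=1) / control.size) ** 2 / (control.size - 1)
)
t_crit = stats.t.ppf(0.975, df)
low, high = diff - t_crit * se, diff + t_crit * se

# Report the interval and its practical meaning, not a pass/fail stamp.
print(f"Observed difference: {diff:.2f}")
print(f"95% compatibility interval: ({low:.2f}, {high:.2f})")
print("Every effect size in this range is reasonably compatible with the data.")
```

The point of the relabelling is in that final line: instead of declaring an effect real or not, the researcher spells out which effect sizes the data can and cannot comfortably rule out.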

The ultimate goal would be to establish practices that avoid thresholds leading to true-or-false thinking and instead embrace the uncertainty that underlies the scientific method.

Science, at its heart, is a conversation after all. Policy makers, technicians and engineers are the eavesdroppers who turn that buzz of voices into concrete decisions, but for scientists planning the next step in their research, a p-value isn't much use on its own.

Unfortunately, it has become a finish line in the race for knowledge, with research funding and public acclaim awaiting the winners.

Reversing a deeply rooted cultural practice will take far more than a few editorials and a handful of well-argued papers. The p-value has been an integral part of science for about a century now, so it will be around for a while yet.

But perhaps thinking like this offers practical stepping stones to carry us beyond statistical significance, to a place where the fuzzy lines of uncertainty can be celebrated.
