One of Cornell's top researchers has had 13 studies retracted. That's a lot.




This is any scientist's worst nightmare: six papers retracted in a single day, with a press release that made sure science journalists around the world would broadcast and discuss the news.

That's exactly what happened today to Cornell researcher Brian Wansink, in the JAMA network of journals.

Wansink is currently the director of Cornell's Food and Brand Lab. For years, he has been recognized as a "world-renowned food behavior expert".

Even if you've never heard of Wansink, you probably know his ideas. His studies, cited more than 20,000 times, focus on how our environment shapes how we think about food and what we end up consuming. His work is one of the reasons Big Food companies started offering snacks in smaller, 100-calorie packs. He formerly led the USDA committee on dietary guidelines and influenced public policy. He helped Google and the US military implement programs to encourage healthy eating.

But over the past two years, the scientific house of cards on which this work and its influence were built has begun to collapse. A group of skeptical researchers and journalists, including BuzzFeed's Stephanie Lee, closely examined the research coming out of Wansink's food psychology unit, the Cornell Food and Brand Lab, and found evidence of data manipulation.

Thirteen of Wansink's studies have now been retracted, including the six JAMA pulled today. Among them: studies suggesting that people who shop for groceries while hungry buy more calories; that preordering lunch can help you choose healthier foods; and that serving people from large bowls encourages them to take larger portions.

In a press release, JAMA stated that Cornell could not "provide assurances regarding the scientific validity of the six studies" because it did not have access to Wansink's original data. (We've reached out to Cornell for comment, and they say they will issue a statement on Friday.) So Wansink's ideas are not necessarily wrong; he just hasn't provided credible evidence for them.

But this story is much bigger than any one researcher. It matters because it highlights lingering problems in science that exist in labs all over the world, problems that scientific reformers are increasingly calling out and working to fix. Here's what you need to know.

Thirteen of Wansink's studies have been retracted, and the results in dozens of others have been called into question



Wansink had a gift for producing studies that were catnip for the media, including us here at Vox. In 2009, Wansink and a co-author published a viral study suggesting that the Joy of Cooking cookbook (and others like it) contributed to the expansion of America's waistlines. He found that the recipes in more recent editions of the book, which has sold more than 18 million copies since 1936, contain more calories and larger portions than those in the earliest editions.

The study looked at 18 classic recipes that have appeared in Joy of Cooking since 1936 and found that their average calorie density per serving has increased by 35 percent over the years.

There was also Wansink's famous "bottomless bowls" study, which concluded that people will mindlessly keep drinking soup as long as their bowls are automatically refilled, and his "bad popcorn" study, which suggested we'll scarf down stale food as long as it's presented to us in large enough quantities.

Together, these studies helped Wansink build his larger research program, which argues that our decisions about what we eat and how we live are strongly influenced by environmental cues.

The critical inquiry into his work began in 2016, when Wansink published a blog post in which he inadvertently admitted to encouraging his graduate students to engage in questionable research practices. Since then, scientists have been combing through his work, looking for errors, inconsistencies, and signs of data fishing. And they have found dozens of problems.

In more than one case, Wansink misstated the ages of participants in published studies, confusing children aged 8 to 11 with preschool-aged children. In sum, these collective efforts have produced a thorough record of troubling findings in Wansink's work.

To date, 13 of his papers have been retracted. That's stunning, given how widely Wansink was cited and how influential his work was. Wansink also won government grants, helped shape marketing practices at food companies, and worked with the White House to influence the country's food policy.

Wansink allegedly engaged in "p-hacking on steroids"



One of the biggest problems in science that the Wansink debacle illustrates is the "publish or perish" mentality.

To be competitive for grants, scientists must publish their research in respected scientific journals. For their work to be accepted by those journals, they need positive (i.e., statistically significant) results.

This pushes labs like Wansink's toward a practice called p-hacking. The "p" stands for p-values, a measure of statistical significance. In general, researchers hope their results will yield a p-value of less than 0.05, the threshold below which they can call their results significant.

P-values are a little complicated to explain (as we've done here and here). But in essence, they're a tool to help researchers understand how rare their results would be if chance alone were at work. If the results would be extremely rare under chance, scientists can be more confident in their hypothesis.

Here's the thing: p-values below 0.05 are not that hard to obtain if you slice the data differently or run enough analyses. Think of flipping coins: getting 10 heads in a row from a fair coin would be rare (the chance is (1/2)^10, or about 0.1 percent). You might start to suspect the coin is weighted to favor heads, and call that result statistically significant.

But what if you hit 10 heads in a row just by chance (it can happen) and then suddenly stopped flipping? If you had kept going, you would have stopped believing the coin was weighted.

Stopping an experiment as soon as a p-value of 0.05 is reached is one example of p-hacking. There are others, such as collecting data on a large number of outcomes but only reporting the ones that reach statistical significance. Run enough analyses, and you're bound to find something "significant" by chance alone.
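To see how badly the "stop as soon as it looks significant" strategy distorts things, here's a minimal simulation sketch in Python. The flip counts, trial counts, and the use of scipy's exact binomial test are our own illustrative assumptions, not anything drawn from Wansink's lab: it flips a fair coin, checks the p-value after every flip, and stops the moment p drops below 0.05.

```python
import numpy as np
from scipy import stats  # stats.binomtest requires scipy >= 1.7

rng = np.random.default_rng(0)

def optional_stopping_trial(max_flips=100, alpha=0.05, min_flips=10):
    """Flip a FAIR coin, test after every flip, and stop as soon as
    p < alpha. Returns True if we (falsely) declare the coin biased."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += int(rng.integers(0, 2))  # fair flip: 0 = tails, 1 = heads
        if n >= min_flips:
            # Exact two-sided test of H0: P(heads) = 0.5
            if stats.binomtest(heads, n, 0.5).pvalue < alpha:
                return True  # "significant" -- stop and report it
    return False

trials = 1_000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
print(f"False-positive rate with optional stopping: "
      f"{false_positives / trials:.1%}")
```

Even though the coin is fair, peeking after every flip and stopping at the first "significant" p-value produces a false-positive rate several times the nominal 5 percent. A researcher who fixed the sample size in advance and tested exactly once would be fooled only about 5 percent of the time.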

According to BuzzFeed's Lee, who obtained Wansink's emails, instead of testing a hypothesis and reporting whatever he found, Wansink often encouraged his subordinates to keep reanalyzing the data until better results emerged.

In effect, he was running a p-hacking operation, or, as one researcher, Stanford's Kristin Sainani, told BuzzFeed, "p-hacking on steroids."

Wansink's carelessness and exaggerations may be more extreme than the norm. But many researchers have admitted to engaging in some form of p-hacking during their careers.

A 2012 survey of 2,000 psychologists found that p-hacking tactics were common. Fifty percent admitted to reporting only the studies that panned out (ignoring inconclusive data). About 20 percent admitted to stopping data collection after getting the result they were hoping for. Most respondents thought their actions were defensible. Many thought p-hacking was a way of finding the real signal in all the noise.

But it is not. Increasingly, even textbook studies and phenomena are falling apart as researchers retest them with more rigorous designs.

Many people are working to try to stop p-hacking



There is a movement of scientists seeking to correct the kinds of practices Wansink is accused of. Together, they are essentially pushing for three main solutions, all of which are gaining momentum.

  1. Preregistration of study designs: This is a strong safeguard against p-hacking. Preregistration means scientists publicly commit to an experiment's design before collecting any data. That makes it much harder to cherry-pick results.
  2. Open data sharing: Increasingly, scientists are calling on colleagues to make all the data from their experiments available (with exceptions, of course, for particularly sensitive information). This ensures that poor-quality research that slips through peer review can still be checked.
  3. Registered replication reports: Scientists are keen to see whether results previously reported in the academic literature hold up under further scrutiny. Many efforts are under way to rigorously reproduce research findings, either exactly or conceptually.

There are other potential fixes as well: one group of scientists is calling for a stricter definition of statistical significance, while others argue that arbitrary significance thresholds will always be gamed. And increasingly, scientists are turning to other forms of statistical analysis, such as Bayesian statistics, which ask a slightly different question of the data. (While a p-value asks, "How rare are these numbers?", a Bayesian approach asks, "What is the probability that my hypothesis is the best explanation for the results we found?")
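As a rough illustration of that difference, here's a short Python sketch reusing the coin example from earlier. The uniform Beta(1, 1) prior and the 10-flip dataset are assumptions chosen purely for simplicity:

```python
from scipy import stats

heads, flips = 10, 10  # illustrative data: 10 heads in 10 flips

# Frequentist question: how rare would data like this be if the
# coin were fair?
p_value = stats.binomtest(heads, flips, 0.5).pvalue

# Bayesian question: given these flips (and a uniform Beta(1, 1)
# prior), how likely is it that the coin actually favors heads?
# With a Beta prior and binomial data, the posterior is also Beta.
posterior = stats.beta(1 + heads, 1 + flips - heads)
p_favors_heads = 1 - posterior.cdf(0.5)

print(f"p-value under a fair coin:   {p_value:.4f}")
print(f"P(coin favors heads | data): {p_favors_heads:.4f}")
```

The two numbers answer different questions: the p-value says 10 straight heads would be very unusual for a fair coin, while the posterior directly quantifies how confident we should be, given the prior and the data, that the coin is biased toward heads.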

No single solution will be a panacea. And it's important to recognize that science also has to grapple with a much more fundamental problem: its culture.

In 2016, Vox sent a survey to more than 200 scientists asking, "If you could change one thing about how science works today, what would it be and why?" One clear theme in the responses: the institutions of science need to get better at rewarding failure, rather than prizing publication above all else.

One young scientist told us, "I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter."

Brian Wansink faced the same dilemma. And it is becoming clearer which path he has chosen.

Further Reading: Research Methods (and How to Improve Them)
