Facebook and NYU trained an AI to estimate COVID outcomes




“COVID is a unique virus,” Dr. William Moore of NYU Langone Health told Engadget. Most viruses attack the respiratory bronchioles, resulting in a pneumonia-like area of increased density, he explained. “But what you usually won’t see is a tremendous amount of cloudy density.” Yet that’s exactly what doctors discovered in COVID patients. “They will have an increased density which appears to be an inflammatory process of pneumonia rather than a typical bacterial pneumonia, which is a denser area in a specific location. [COVID] appears to be bilateral; it appears to be somewhat symmetrical.”

When the pandemic first hit New York City, “we started trying to figure out what to do, how we could actually help manage patients,” Moore continued. “So there were a couple of things going on: there was a huge number of patients coming in, and we had to find ways to predict what was going to happen [to them].”

To do this, the NYU-FAIR team started with chest x-rays. As Moore notes, x-rays are performed routinely, essentially whenever patients complain of shortness of breath or other symptoms of respiratory distress, and they are ubiquitous in both rural community hospitals and large metropolitan medical centers. The team then developed a series of metrics to measure complications as well as patient progression, from ICU admission to ventilation, intubation and potential mortality.

“This is another clearly demonstrable metric we could use,” Moore said of patient deaths. “Then we were like, ‘Okay, let’s see what we can use to predict that,’ and of course the chest x-ray was one of the things that we thought was very important.”

Once the team established the necessary metrics, they set about training the AI/ML model. However, this raised another challenge: the disease is new, and its progression is not linear. “It’s hard to make predictions, especially long-term predictions,” Facebook AI program manager Nafissa Yakubova, who previously helped NYU develop faster MRI scans, told Engadget.

In addition, at the start of the pandemic, “we didn’t have any COVID datasets, and most importantly there weren’t any labeled datasets [for use in training an ML model],” she continued. “And the size of the datasets was also quite small.”

[Image: Melissa Rodriguez takes a chest x-ray of a COVID-19 patient at Roseland Community Hospital in Chicago, December 15, 2020. Scott Olson via Getty Images]

So the team did the next best thing: they “pre-trained” their model using larger, publicly available chest x-ray databases, specifically MIMIC-CXR-JPG and CheXpert, using an advanced self-supervised learning technique called Momentum Contrast (MoCo).
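For readers curious what that looks like in practice, here is a minimal sketch of one MoCo training step in PyTorch. It is an illustration under stated assumptions – the ResNet backbone, queue size, and hyperparameters are generic defaults from the MoCo paper, not FAIR’s actual training code.

```python
# Minimal sketch of one Momentum Contrast (MoCo) step on chest x-ray batches.
# Backbone, queue size, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F
from torchvision import models

dim, K, m, T = 128, 4096, 0.999, 0.07  # embedding dim, queue size, momentum, temperature

def backbone():
    net = models.resnet50(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, dim)  # projection head
    return net

encoder_q = backbone()                  # query encoder, updated by gradients
encoder_k = copy.deepcopy(encoder_q)    # key encoder, updated by momentum only
for p in encoder_k.parameters():
    p.requires_grad = False

queue = F.normalize(torch.randn(dim, K), dim=0)  # dictionary of negative keys

def moco_step(x_q, x_k, optimizer):
    """x_q, x_k: two random augmentations of the same batch of x-rays.
    (Batch-norm shuffling from the paper is omitted for brevity.)"""
    global queue
    q = F.normalize(encoder_q(x_q), dim=1)
    with torch.no_grad():
        # momentum update of the key encoder, then encode the keys
        for pk, pq in zip(encoder_k.parameters(), encoder_q.parameters()):
            pk.data = m * pk.data + (1 - m) * pq.data
        k = F.normalize(encoder_k(x_k), dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)        # positive logits: Nx1
    l_neg = q @ queue                               # negative logits: NxK
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(len(q), dtype=torch.long)  # the positive is at index 0
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    queue = torch.cat([queue[:, len(k):], k.T.detach()], dim=1)  # enqueue/dequeue
    return loss.item()
```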

Basically, as Towards Data Science’s Dipam Vasani explains, when you train an AI to recognize specific things – for example, dogs – the model has to develop that ability through a series of steps: first recognizing lines, then basic geometric shapes, then more detailed patterns, before it can tell a Husky from a Border Collie. The FAIR-NYU team handled the first steps by pre-training their model on the larger public datasets, then came back and fine-tuned it using the smaller COVID-specific datasets. “We don’t make the diagnosis of COVID – whether you have COVID or not – based on an x-ray,” Yakubova said. “We’re trying to predict the progression of its severity.”
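The fine-tuning stage is comparatively lightweight. Continuing from the sketch above, something like the following would retrain the pre-trained encoder on a small labeled COVID dataset; `covid_loader` and the four outcome labels are placeholders, not the team’s actual label scheme.

```python
# Sketch of the fine-tuning stage, reusing the pre-trained MoCo query encoder.
# `covid_loader` and the outcome labels below are illustrative placeholders.
import torch
import torch.nn.functional as F

NUM_OUTCOMES = 4  # e.g. stable / ICU / ventilation / mortality (hypothetical labels)

model = encoder_q  # pre-trained on MIMIC-CXR-JPG / CheXpert
model.fc = torch.nn.Linear(model.fc.in_features, NUM_OUTCOMES)  # new prediction head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(10):
    for xrays, outcomes in covid_loader:  # small COVID-specific labeled dataset
        loss = F.cross_entropy(model(xrays), outcomes)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```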

“The key here, I think, was… to use a series of images,” she continued. When a patient is admitted, the hospital will take an x-ray, and then will probably take more over the next few days, “so you have this time series of images, which was essential to have a more accurate prediction.” When fully trained, the FAIR-NYU model achieved a predictive accuracy of about 75 percent – on par with, and in some cases exceeding, the performance of human radiologists.
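Conceptually, a per-patient series can be handled by encoding each x-ray and aggregating the results before making a single prediction. The sketch below uses simple mean pooling as the aggregator – an illustrative choice, not necessarily the exact mechanism FAIR used.

```python
# Sketch of predicting from a time series of x-rays with a shared per-image
# encoder. Mean pooling across the series is an assumed, illustrative choice.
import torch

class SeriesPredictor(torch.nn.Module):
    def __init__(self, encoder, feat_dim=128, num_outcomes=4):
        super().__init__()
        self.encoder = encoder                 # shared per-image encoder
        self.head = torch.nn.Linear(feat_dim, num_outcomes)

    def forward(self, series):                 # series: (T, C, H, W) for one patient
        feats = self.encoder(series)           # (T, feat_dim), one vector per x-ray
        pooled = feats.mean(dim=0)             # aggregate across the time series
        return self.head(pooled)               # progression-risk logits
```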

[Image: Radiologists in the radiology department of the Maggiore hospital in Cremona look at CT scans of the lungs of COVID-19 patients. Nicola Marfisi/AGF/Universal Images Group via Getty Images]

It’s a smart solution for a number of reasons. First, the initial pre-training is extremely resource-intensive – to the point that it simply isn’t feasible for individual hospitals and health centers to do it alone. But using this method, massive organizations like Facebook can develop the initial model and then provide it to hospitals as open source code, and those healthcare providers can then complete the training using their own datasets and a single GPU.
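In code, the hospital-side step reduces to loading the released weights and rerunning the fine-tuning loop sketched above on local data. The checkpoint filename here is a placeholder, not an actual release artifact.

```python
# Hedged sketch of the hospital-side workflow: load an open-sourced pre-trained
# checkpoint onto a single GPU and fine-tune locally. Filename is hypothetical.
import torch

device = torch.device("cuda:0")  # one GPU is enough for the fine-tuning stage
state = torch.load("moco_chest_xray.pt", map_location=device)
state = {k: v for k, v in state.items() if not k.startswith("fc.")}  # drop MoCo head
model.load_state_dict(state, strict=False)  # reuse encoder weights, keep new head
model.to(device)
# ...then run the same fine-tuning loop as above on the hospital's own dataset.
```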

Second, since the initial models are trained on generalized chest x-rays rather than COVID-specific data, these models could – in theory at least; FAIR has not yet tried it – be adapted to other respiratory diseases by swapping out just the data used for fine-tuning. This would allow healthcare providers not only to model a given disease, but also to adapt that model to their specific locality and situation.

“I think that’s one of the really amazing things the team from Facebook has done,” Moore concluded, “is take something that’s a great resource – the CheXpert and MIMIC databases – and apply it to a new and emerging disease process, one we knew very little about when we first started doing this in March and April.”
