Nvidia researchers develop an AI system that generates synthetic brain cancer scans




Artificial intelligence (AI) systems are as diverse as they come from an architectural point of view, but they all share one common element: data sets. The trouble is that large sample sizes are often a corollary of accuracy (a state-of-the-art diagnostic system developed by Google's DeepMind subsidiary required 15,000 scans from 7,500 patients), and some data sets are harder to come by than others.

Researchers at Nvidia, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science believe they have found a solution to the problem: a neural network that generates its own training data, specifically synthetic three-dimensional magnetic resonance images (MRIs) of brains with cancerous tumors. It is described in a paper ("Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks") presented at the Medical Image Computing & Computer Assisted Intervention (MICCAI) conference in Granada, Spain.

"We show that, for the first time, we can generate brain images that can be used to form neural networks," said Huang Chang, senior researcher at Nvidia and lead author of the newspaper, in a phone interview.

The AI system, which was developed using Facebook's PyTorch deep learning framework and trained on an Nvidia DGX platform, exploits a generative adversarial network (GAN), a two-part neural network consisting of a generator that produces samples and a discriminator that attempts to distinguish between generated samples and real-world samples, to create convincing MRIs of abnormal brains.
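For readers unfamiliar with the setup, the sketch below shows the generator/discriminator pairing in PyTorch in its simplest form. It is an illustrative toy operating on 2D slices, not the paper's architecture (which works on 3D volumes and conditions the generator on segmentation labels); all layer sizes, module names, and the training step shown are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy GAN skeleton: a generator maps noise to an image-like tensor,
# a discriminator scores how "real" a sample looks.
# Shapes (1 x 64 x 64 slices) are illustrative only.

class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 64 * 64),
            nn.Tanh(),  # pixel intensities in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

# One adversarial step: the discriminator learns to separate real from fake,
# while the generator learns to fool it.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 1, 64, 64)  # stand-in for a batch of real slices
fake = G(torch.randn(8, 100))

# Discriminator update: real samples labeled 1, generated samples labeled 0.
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: try to make the discriminator call fakes real.
loss_g = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```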

The team sourced two publicly available data sets, the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), to train the GAN, reserving 20 percent of BRATS' 264 studies for testing. Memory and computational constraints forced the team to downsample the scans from 256 x 256 x 108 resolution to 128 x 128 x 54, but they kept the original images for comparison purposes.
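The downsampling step itself is straightforward; below is a minimal sketch of how a 3D MRI volume could be resized in PyTorch. The tensor shape matches the resolutions quoted above, but the choice of trilinear interpolation is an assumption for illustration, not a detail from the paper.

```python
import torch
import torch.nn.functional as F

# A stand-in 3D volume at the original resolution: (batch, channels, slices, H, W).
volume = torch.randn(1, 1, 108, 256, 256)

# Downsample to half resolution to fit memory and compute budgets.
# Trilinear interpolation is one reasonable choice for intensity volumes.
small = F.interpolate(volume, size=(54, 128, 128),
                      mode="trilinear", align_corners=False)

print(small.shape)  # torch.Size([1, 1, 54, 128, 128])
```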

The generator, fed images from ADNI, learned to produce synthetic brain scans (complete with white matter, gray matter, and cerebrospinal fluid) from a given image in the dataset. Then, when it was applied to the BRATS dataset, it generated full segmentations complete with tumors.

The GAN annotated the scans, a task that can take a team of human experts hours. And because it treated brain anatomy and tumor anatomy as two distinct labels, it allowed researchers to alter the size and location of a tumor or "transplant" it into scans of healthy brains.
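Because the tumor occupies its own label, manipulating it before synthesis reduces to ordinary array operations. The snippet below is a hypothetical illustration of that idea, shifting and enlarging a binary tumor mask independently of the brain-anatomy labels; the variable names, shapes, and resizing logic are my own, not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Separate label volumes: brain anatomy labels and a binary tumor mask.
# Shapes are illustrative: (batch, channels, slices, H, W).
brain_labels = torch.randint(0, 4, (1, 1, 54, 128, 128)).float()
tumor_mask = torch.zeros(1, 1, 54, 128, 128)
tumor_mask[:, :, 20:30, 50:70, 50:70] = 1.0  # a toy tumor region

# Move the tumor: roll the mask along spatial axes without touching the anatomy.
shifted_tumor = torch.roll(tumor_mask, shifts=(0, 15, -10), dims=(2, 3, 4))

# Grow the tumor: upsample the mask, then crop back to the original size.
grown = F.interpolate(tumor_mask, scale_factor=1.5, mode="nearest")
d, h, w = tumor_mask.shape[2:]
grown = grown[:, :, :d, :h, :w]

# "Transplant": pair a healthy scan's anatomy labels with the altered tumor mask
# and hand both to the conditional generator (not shown here).
edited_labels = torch.cat([brain_labels, shifted_tumor], dim=1)
print(edited_labels.shape)  # torch.Size([1, 2, 54, 128, 128])
```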

"Conditional GANs are perfectly suited to that," Chang said. "[It can] remove patient privacy issues [because] the generated images are anonymous. "

So how did it fare? When the team trained a machine learning model using a combination of real brain scans and synthetic brain scans produced by the GAN, it achieved an accuracy of 80 percent, 14 percent better than a model trained on real scans alone.
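Mixing real and GAN-generated scans into one training set is conceptually simple; here is a hedged sketch using PyTorch's standard dataset utilities. TensorDataset, ConcatDataset, and DataLoader are real PyTorch classes, but the tensors standing in for real and synthetic scans and the 50/50 mix are placeholders rather than the paper's exact recipe.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Placeholder tensors standing in for real and GAN-generated scans plus labels.
real_scans = torch.randn(20, 1, 54, 128, 128)
real_labels = torch.randint(0, 2, (20,))
synthetic_scans = torch.randn(20, 1, 54, 128, 128)
synthetic_labels = torch.randint(0, 2, (20,))

# Combine the two sources into a single augmented training set.
augmented = ConcatDataset([
    TensorDataset(real_scans, real_labels),
    TensorDataset(synthetic_scans, synthetic_labels),
])
loader = DataLoader(augmented, batch_size=4, shuffle=True)

for scans, labels in loader:
    # ...train the downstream tumor-detection model on the mixed batches...
    break
```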

"Many radiologists have shown that the system has expressed its enthusiasm," Chang said. "They want to use it to generate more examples of rare diseases."

Future research will investigate the use of higher-resolution training images and larger datasets across different patient populations, Chang said. And improved versions of the model could refine the boundaries around tumors so that they do not look "superimposed."

This is not the first time that Nvidia researchers have used GANs to transform brain scans. This summer, they detailed a system capable of converting CT brain scans into 2D MRIs, and another capable of aligning multiple MRIs of the same scene with greater speed and accuracy.
