Google AI Tool Identifies a Tumor's Mutations From an Image




When I was in high school in the early 2000s, I spent a week of my summer vacation shadowing a pathologist at the local hospital. Every day in his basement office was basically the same: he focused his microscope on a tissue slide, squinting for a few minutes at a time, methodically taking notes on cell shape, size, and environment. When he had enough data points, he made the call: "Squamous cell carcinoma." "Serrated adenocarcinoma." "Benign."

For decades, doctors have relied on the trained eyes of pathologists to give their patients a cancer diagnosis. Now researchers are teaching machines to do that tedious work in seconds.

In new research published today in Nature Medicine, researchers at New York University retrained a Google deep learning algorithm to distinguish between two of the most common types of lung cancer with 97 percent accuracy. This kind of AI (the same technology that identifies faces, animals, and objects in pictures uploaded to Google's online services) has already proven itself at diagnosing disease, including diabetic blindness and heart conditions. But NYU's neural network learned to do something no pathologist has ever done: identify the genetic mutations driving each tumor from an image alone.

"I thought the real novelty would not be just to show that AI is as good as humans, but that it would allow a human expert to not know it," says Aristotelis Tsirigos, a pathologist at the NYU School of Medicine. lead author of the new study.

To do it, Tsirigos' team started with Google's Inception v3, an open source algorithm that Google trained to identify 1,000 different classes of objects. To teach the algorithm to distinguish between images of cancerous and healthy tissue, the researchers showed it hundreds of thousands of images taken from The Cancer Genome Atlas, a public library of tissue samples from cancer patients.

Once Inception had figured out how to pick out cancerous cells with 99 percent accuracy, the next step was teaching it to distinguish two types of lung cancer: adenocarcinoma and squamous cell carcinoma. Together they represent the most common forms of the disease, which kills more than 150,000 people a year. Although they look frustratingly similar under the microscope, the two cancer types are treated very differently. Getting it right can mean the difference between life and death for patients.
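Under the hood, that kind of retraining is a standard transfer-learning recipe: keep the pretrained convolutional layers, swap out the 1,000-class ImageNet head for a small new one, and train on slide images. Here is a minimal sketch in Python/TensorFlow of that setup, assuming a three-class problem (normal, adenocarcinoma, squamous cell carcinoma); the `train_ds`/`val_ds` pipelines and all hyperparameters are placeholders, not the study's actual code.

```python
import tensorflow as tf

# Pretrained Inception v3 backbone, minus its 1,000-class ImageNet head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # first pass: train only the new classifier head

inputs = tf.keras.Input(shape=(299, 299, 3))  # Inception's native input size
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)  # scale to [-1, 1]
x = base(x, training=False)
# New head: normal vs. adenocarcinoma vs. squamous cell carcinoma.
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would yield (image, label) batches cut from
# whole-slide images -- placeholder names, not part of the study.
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```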

When the researchers tested Inception on independent samples taken from cancer patients at NYU, its accuracy dropped somewhat, but not by much: it still correctly diagnosed the images between 83 and 97 percent of the time. That's not surprising, says Tsirigos, since the hospital samples carried much more noise, such as inflammation, dead tissue, and white blood cells, and were often prepared differently than the frozen TCGA samples. To improve the accuracy, pathologists just need to annotate more slides with those features, so the algorithm can learn to pick them out as well.
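One practical detail worth noting: classifiers like this are typically run on small tiles cut from a slide, and the per-tile predictions are then aggregated into a single call for the whole slide. Below is a hedged sketch of one common aggregation strategy, simple averaging; the strategy and the `predict_slide` helper are assumptions for illustration, not details quoted from the paper.

```python
import numpy as np

def predict_slide(model, tiles):
    """Score one whole slide from its tiles.

    tiles: float array of shape (n_tiles, 299, 299, 3), cut from one slide.
    Returns the predicted class index and the averaged class probabilities.
    """
    tile_probs = model.predict(tiles)       # shape: (n_tiles, n_classes)
    slide_probs = tile_probs.mean(axis=0)   # average over all tiles
    return int(np.argmax(slide_probs)), slide_probs
```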

But no human hand taught Inception to "see" genetic mutations in those histology slides. That trick the algorithm learned on its own.

Working again with the TCGA data, Tsirigos' team fed Inception the genetic profiles for each tumor along with the slide images. When they tested the system on new images, it was able to identify not only the cancerous tissue but also the genetic mutations present in that tissue sample. The neural network had learned to notice extremely subtle changes in a tumor sample's appearance that pathologists cannot see. "These cancer-causing mutations appear to have microscopic effects that the algorithm can detect," says Tsirigos. What those subtle changes are, though, "we don't know. They're buried [in the algorithm], and no one really knows how to extract them."
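Architecturally, that mutation-spotting step can be framed as multi-label classification on top of the same image features: each gene gets its own independent probability, since one tumor can carry several mutations at once. A minimal sketch under those assumptions, with an illustrative gene panel rather than the study's actual configuration:

```python
import tensorflow as tf

GENES = ["EGFR", "KRAS", "TP53", "STK11"]  # placeholder panel, for illustration

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)
features = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")(x)
# Sigmoid, not softmax: mutations are not mutually exclusive labels.
mutations = tf.keras.layers.Dense(len(GENES), activation="sigmoid")(features)
model = tf.keras.Model(inputs, mutations)

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])
# Training labels: per-tumor 0/1 mutation vectors taken from TCGA genetic
# profiles, paired with image tiles from the same tumor's slide.
```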

This is deep learning's black box problem, and it is especially pressing in medicine. Critics argue that these algorithms must first be made more transparent to their creators before being used on a large scale. Otherwise, how will anyone catch their inevitable failures, failures that can mean the difference between a patient living and dying? But people like Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Cornell, say it would be foolish not to use a clinical test that gets the right answer 99 percent of the time, even without knowing exactly how it works.

"Honestly, for an algorithm of this type to be the subject of a clinical test, it is not necessary to have fully interpretable features, it just needs to be there." be reliable, "says Elemento. But getting almost perfect reliability is not that easy. Different hospitals treat their tumor samples using different instruments and protocols. Teaching an algorithm to navigate all this variability will be a difficult task.

But that's what Tsirigos and his team plan to do. In the coming months, the researchers will keep training their AI program with more data from more varied sources. Then they'll start thinking about forming a company to seek FDA approval. Because of the cost and time involved, sequencing tumor samples is not always standard practice in the United States. Imagine being able to send a digital photo of a tumor sample and receive, almost instantly, a complete diagnosis along with viable treatment options. That's where this is all headed.

"The big question is whether this will be reliable enough to replace the current practice," says Daniel Rubin, director of biomedical informatics at the Stanford Cancer Institute. Not without a lot of validation work, he says. But this points to a future where pathologists work in partnership with computers. "This document really shows that there is much more information in the images than a human being can draw."

That's a theme that extends beyond digital pathology. With Google and other companies releasing their advanced algorithms as open source code, researchers can now launch their own AI projects with relative ease. With a bit of customization, these neural networks are ready to be set loose on mountains of biomedical image data, not just tumor slides.

I ask Tsirigos whether he had trouble finding fellow pathologists to volunteer to train his cancer classifier. He laughs. At first, he says, he was afraid to ask anyone at NYU to join the project; after all, they'd be helping to build a future competitor. But in the end, recruiting proved easy. People were curious to see what Inception could do, not just for lung cancer but for their own projects. They don't fear being replaced, Tsirigos says; they're excited to be able to ask deeper questions while the machine takes care of the simpler ones. Leave the object recognition to the machines, and there's still plenty of medicine left for the humans.

