Newswise – CHAPEL HILL – Using technology similar to the facial and speech recognition on a smartphone, researchers at the University of North Carolina Lineberger Comprehensive Cancer Center have trained a computer to analyze images of breast cancer tumors and classify them with high accuracy.
In a study published in the journal npj Breast Cancer, the researchers report using a form of artificial intelligence called machine learning, or deep learning, to train a computer to identify certain features of breast cancer tumors from images. The computer also identified the tumor type based on complex molecular and genomic features that a pathologist cannot yet determine from an image alone. They believe this approach, while still in its infancy, could eventually lead to cost savings for the clinic and for breast cancer research.
"Your smartphone can interpret your speech, and find and identify faces in a photo," said the study's first author, Heather D. Couture, a graduate research assistant in the Department of Computer Science at UNC-Chapel Hill. "We use similar technology to capture abstract properties in images, but we're applying it to a completely different problem."
For this study, researchers used a set of 571 images of breast cancer tumors from the Carolina Breast Cancer Study to train the computer to classify tumors by grade, estrogen receptor status, PAM50 intrinsic subtype, histologic subtype and risk-of-recurrence score. To do this, they created software that learned to predict each label from a training set of images, so that new images could be processed in the same way.
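The supervised-learning setup described above can be sketched in a few lines. This is an illustrative stand-in only, not the authors' deep-learning pipeline: the random feature vectors below substitute for features extracted from the 571 training images, the labels substitute for pathologist annotations, and a simple classifier substitutes for the study's model.

```python
# Illustrative sketch of the train-then-predict workflow described in the article.
# All data here is synthetic; it stands in for image features and pathologist labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for 571 "training images" (64 features each), with
# labels standing in for a pathologist's call (e.g. 0 = low/intermediate grade, 1 = high grade).
X_train = rng.normal(size=(571, 64))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=571) > 0).astype(int)

# Fit a classifier on the labeled training set.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New, unseen images are processed the same way: extract features, then predict a label.
X_new = rng.normal(size=(5, 64))
predictions = clf.predict(X_new)
print(predictions)
```

The key point the article makes is captured here: once the model is fit on labeled examples, new images can be scored automatically without a pathologist in the loop.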
They then used a different set of 288 images to test the computer's ability to distinguish tumor features on its own, comparing the computer's output against a pathologist's findings for tumor grade and histologic subtype, and against separate tests for the gene-expression subtypes. They found that the computer was able to distinguish low-intermediate grade from high-grade tumors 82% of the time. When two pathologists were asked to review tumor grade for the low-intermediate group, they agreed with each other about 89% of the time, slightly better than the computer's accuracy.
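Both percentages reported above, the computer's accuracy against the pathologist and the agreement between two pathologists, are simple proportions of matching labels. A minimal worked example, using made-up labels rather than the study's data:

```python
# Illustrative: how an accuracy or agreement percentage is computed.
# The label lists below are invented; they are not the study's data.
def proportion_matching(a, b):
    """Fraction of cases in which two label lists agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical grade calls (0 = low/intermediate, 1 = high) for ten tumors.
computer    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
pathologist = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Eight of ten calls match, so accuracy is 80%.
print(f"accuracy: {proportion_matching(computer, pathologist):.0%}")
```

The same function applied to two pathologists' label lists gives the inter-rater agreement figure the article cites.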
In addition, the computer identified estrogen receptor status, distinguished ductal from lobular tumors, and determined whether each case's risk of recurrence was high or low. It also identified one of the molecular subtypes of breast cancer, the basal-like subtype, based on how the tumor's genes were expressed, with 77% accuracy.
"Using artificial intelligence, or machine learning, we were able to do a number of things that pathologists can do at similar accuracy, but we were also able to do a thing or two that pathologists are not able to do today," said UNC Lineberger's Charles M. Perou, PhD, the May Goldman Shaw Distinguished Professor of Molecular Oncology and professor of genetics and of pathology and laboratory medicine at the UNC School of Medicine. "It's going to take a while to validate, but I think the accuracy will only improve as we acquire more images to train the computer with."
The computer's ability to identify the basal-like subtype was of particular interest to the researchers and could have applications in cancer research. They also believe the technology could have applications in communities that lack pathology resources, and could help validate pathologists' findings.
"We were surprised that the computer was able to reach fairly high accuracy in estimating biomarker risk simply from looking at the images," said Melissa Troester, PhD, a UNC Lineberger member and professor at the UNC Gillings School of Global Public Health. "We spend thousands of dollars measuring these biomarkers with molecular tools, and this new method can take the image and reach 80% accuracy or better in estimating the tumor phenotype or subtype. That was pretty amazing."
Couture said deep learning technology has been used in a variety of applications, including voice recognition and autonomous vehicles.
"Humans can look at one or two examples of something and generalize when they see other, similar objects," Couture said. "For example, chairs come in many different forms, but we can recognize them as something we sit on. Computers have a much harder time generalizing from small amounts of data. On the other hand, given enough labeled data, they can learn concepts far more complex than humans can assess visually – such as identifying the basal-like subtype from an image alone."
The researchers said a unique aspect of their work was using the technology to see features of tumors that humans cannot. They want to understand what the computer sees, as well as whether the technology can predict outcomes.
"The computer extracted a lot of information from the images," Troester said. "We would like to test how well these features predict outcomes, and to combine them with molecular data, among other things, to give patients a more accurate picture of how their disease will progress and which treatments will be effective."
In addition to Couture, Perou and Troester, the study's other authors are Lindsay A. Williams, Joseph Geradts, Sarah J. Nyante, Ebonee N. Butler, J.S. Marron and Marc Niethammer.
The study was funded by the University Cancer Research Fund through UNC Lineberger, the National Cancer Institute, the Breast SPORE Program, the Breast Cancer Research Foundation, and Susan G. Komen. The Tesla K40 GPU used for this research was donated by NVIDIA Corp.
Conflict of interest: Perou is a shareholder and member of the board of directors of BioClassifier LLC, and has filed patents on the PAM50 subtyping test.