Breast cancer is the leading cause of cancer death in women, and it is also difficult to diagnose. Nearly one in ten cancers is misdiagnosed as non-cancerous, which means a patient can lose critical treatment time. On the other hand, the more mammograms a woman has, the more likely she is to receive a false positive result. After ten years of annual mammograms, roughly two out of three patients without cancer will be told that they have it and will undergo an invasive procedure, most likely a biopsy.
Breast ultrasound elastography is an emerging imaging technique that provides information about a potential breast lesion by evaluating its stiffness in a non-invasive way. By extracting more precise information about the characteristics of a cancerous breast lesion compared with a non-cancerous one, this methodology has demonstrated greater accuracy than traditional imaging modes.
At the heart of this procedure, however, is a complex computational problem that can be time-consuming and cumbersome to solve. But what if, instead, we relied on an algorithm?
Assad Oberai, a professor in the Department of Aerospace and Mechanical Engineering at the USC Viterbi School of Engineering, posed this specific question in the research paper "Circumventing the solution of inverse problems in mechanics through deep learning: application to elasticity imaging," published in Computer Methods in Applied Mechanics and Engineering. Working with a team of researchers, including USC Viterbi doctoral student Dhruv Patel, Oberai considered the following: Can you train a machine to interpret real-world images with the help of synthetic data, and streamline the diagnostic steps? According to Oberai, the answer is most likely yes.
In the case of breast ultrasound elastography, once an image of the affected area is taken, it is analyzed to determine the displacements within the tissue. Using these data and the physical laws of mechanics, the spatial distribution of mechanical properties, such as stiffness, is determined. Next, the appropriate features of that distribution must be identified and quantified, ultimately leading to a classification of the tumor as malignant or benign. The problem is that the last two steps are computationally complex and inherently challenging.
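The pipeline just described (measured displacements, then a recovered stiffness map, then features, then a benign/malignant call) can be sketched in miniature. The following toy Python example is purely illustrative and assumes a simplified 1-D model where stiffness is just stress divided by strain; the heterogeneity feature and its threshold are invented for the sketch, not clinical values:

```python
import numpy as np

def estimate_stiffness(displacement, spacing, applied_stress):
    """Toy 1-D stand-in for the inverse step: recover a stiffness
    profile from a measured displacement field. The real problem is a
    PDE-constrained inverse problem; here we simply use the 1-D
    relation stiffness = stress / strain for illustration."""
    strain = np.gradient(displacement, spacing)
    return applied_stress / strain

def extract_features(stiffness):
    """Quantify heterogeneity, one of the properties the article
    highlights, as the coefficient of variation of the stiffness."""
    return np.std(stiffness) / np.mean(stiffness)

def classify(heterogeneity, threshold=0.25):
    """Final step: threshold the feature to call the lesion
    malignant or benign (illustrative threshold, not clinical)."""
    return "malignant" if heterogeneity > threshold else "benign"

# Example: uniform tissue stretches linearly; tissue with a stiff
# inclusion produces a spatially varying strain field.
x = np.linspace(0.0, 1.0, 101)
uniform = estimate_stiffness(0.01 * x, x[1] - x[0], 1.0)
lesion = estimate_stiffness(0.01 * x + 0.001 * np.sin(2 * np.pi * x),
                            x[1] - x[0], 1.0)
print(classify(extract_features(uniform)))  # benign
print(classify(extract_features(lesion)))   # malignant
```

In the actual method the last two steps (feature extraction and classification) are exactly the ones the deep network replaces, operating on far richer data than this one-number feature.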
During the research, Oberai sought to determine whether the most complex steps of this workflow could be skipped entirely.
Cancerous breast tissue has two key properties: heterogeneity, meaning some areas are soft and some are firm, and nonlinear elasticity, meaning the fibers offer a lot of resistance when pulled, instead of the initial give associated with benign tumors. Knowing this, Oberai created physics-based models that captured varying levels of these key properties. He then used thousands of data inputs derived from these models to train the machine-learning algorithm.
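As a cartoon of that training strategy, the sketch below draws labeled feature pairs from two assumed distributions (standing in for outputs of the physics-based models) and fits a trivial nearest-centroid classifier in place of the deep network. Every number here is an invented assumption, chosen only to make the two classes separable:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(n, malignant):
    """Draw toy (heterogeneity, nonlinearity) feature pairs.
    Malignant lesions get higher values of both, mimicking the firmer,
    strain-stiffening tissue the article describes. The means and
    spread are illustrative assumptions, not measured values."""
    center = [0.6, 0.7] if malignant else [0.2, 0.2]
    return rng.normal(center, 0.15, size=(n, 2))

# Build a labeled synthetic training set (1 = malignant, 0 = benign).
X = np.vstack([synthesize(500, True), synthesize(500, False)])
y = np.array([1] * 500 + [0] * 500)

# Nearest-centroid classifier as a minimal stand-in for the network.
centroids = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])

def predict(x):
    """Label a new lesion by its closer class centroid."""
    d = np.linalg.norm(centroids - np.asarray(x), axis=1)
    return int(np.argmin(d))  # 0 = benign, 1 = malignant
```

The point of the sketch is the workflow, not the model: because the training examples come from a generator rather than from patients, the scarce real images can be reserved for testing.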
Synthetic data versus actual data
But why use synthetic data to train the algorithm? Wouldn't real data be better?
"If you had enough data, you would not need to. But in the case of medical imaging, you are lucky if you have 1,000 images. In situations like these, where data is scarce, this kind of technique becomes important."
Assad Oberai, USC Viterbi School of Engineering
Oberai and his team used about 12,000 synthetic images to train their machine-learning algorithm. The process is similar in many respects to how photo-identification software works, learning through repeated inputs to recognize a particular person in an image, or to how our brain learns to distinguish a cat from a dog. With enough examples, the algorithm can glean the various features inherent to a benign tumor versus a malignant one and make the correct determination.
Oberai and his team achieved nearly 100% classification accuracy on additional synthetic images. Once the algorithm was trained, they tested it on real-world images to determine how accurately it could establish a diagnosis, measuring the results against the biopsy-confirmed diagnoses associated with those images.
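Measuring performance against biopsy-confirmed diagnoses reduces to a simple agreement rate. A minimal sketch, with made-up labels (1 = malignant, 0 = benign) chosen so that four of five calls agree with biopsy:

```python
def accuracy(predictions, biopsy_confirmed):
    """Fraction of algorithm calls that agree with the
    biopsy-confirmed diagnosis used as ground truth."""
    if len(predictions) != len(biopsy_confirmed):
        raise ValueError("label lists must be the same length")
    matches = sum(p == b for p, b in zip(predictions, biopsy_confirmed))
    return matches / len(predictions)

# Hypothetical labels: 4 of 5 calls match the biopsy result.
print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # 0.8
```

An accuracy of about 80% on real images, as reported, means roughly four in five algorithm calls matched the biopsy result.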
"We had a precision rate of about 80% and then we continue to refine the algorithm using more real world images as inputs," said Oberai.
Changing the way diagnoses are made
Machine learning is an important tool for advancing cancer detection and diagnosis in two major respects. First, machine-learning algorithms can detect patterns that may be opaque to humans. By weighing many such patterns, the algorithm can produce an accurate diagnosis. Second, machine learning offers a chance to reduce inter-operator error.
So, would this replace the role of the radiologist in determining a diagnosis? Definitely not. Oberai does not envision a standalone algorithm for cancer diagnosis, but rather a tool that guides radiologists to more accurate conclusions. "The general consensus is that these types of algorithms have a significant role to play, including among the imaging professionals on whom they will have the greatest impact. However, these algorithms will be most useful when they do not serve as black boxes," said Oberai. "What did it see that led it to the final conclusion? The algorithm must be explainable for it to work as intended."
Adapting the algorithm for other cancers
Because cancer causes different types of changes in the tissue it affects, its presence can ultimately alter the tissue's physical properties, such as its density or porosity. These changes can appear as a signal in medical images. The role of the machine-learning algorithm is to pick out this signal and use it to determine whether a given imaged tissue is cancerous.
Building on these ideas, Oberai and his team are working with Vinay Duddalwar, professor of clinical radiology at USC's Keck School of Medicine, to better diagnose kidney cancer using contrast-enhanced CT images. Applying the principles identified while training the machine-learning algorithm for breast cancer diagnosis, they aim to train the algorithm on other features that can be highlighted in kidney cancer cases, such as tissue changes reflecting cancer-specific alterations in the patient's microvasculature, the network of small vessels that helps distribute blood through the tissue.
Source:
University of Southern California