An artificial intelligence (AI) algorithm applied to positron emission tomography (PET) brain imaging improves the ability to predict Alzheimer's disease (AD) at an early stage, according to new research.
The researchers studied more than 2,000 18F-fluorodeoxyglucose (18F-FDG) PET imaging studies from more than 1,000 patients in the Alzheimer's Disease Neuroimaging Initiative (ADNI). They trained the algorithm on 90% of the dataset and then tested it on the remaining 10%.
The algorithm successfully "learned" to identify the metabolic patterns corresponding to Alzheimer's disease.
When the algorithm was tested on an independent set of 40 imaging studies from 40 patients it had never encountered, it reached a sensitivity of 100% in detecting the disease, on average more than 6 years before the final diagnosis.
"The key point of our study is that our algorithm detects not only the MA successfully, but actually detects it 6 years before the diagnosis is made," said author Jae Ho Sohn, MD of the Department of Radiology and biomedical imaging of the University of California. San Francisco, said Medscape Medical News.
"In order to develop treatments for AD, and even in the interest of the patient, it is better to know the disease at an early stage, because at the time of diagnosis, there is usually too much loss of volume of the patient. brain and we can not do anything to help the patient, "he said.
The study was published online on November 6 in Radiology.
Precision Algorithm
Advances in diagnostic technology such as 18F-FDG PET imaging allow earlier diagnosis and treatment of AD, but 18F-FDG PET currently depends on nuclear medicine and neuroimaging specialists "to make pattern recognition decisions primarily with the aid of qualitative readings, [which is] particularly difficult in the context of a disease that spans a broad continuum ranging from normal cognition to MCI [mild cognitive impairment] to AD," the authors write.
"In-depth learning" could "help to cope with the increasing complexity and volume of imaging data, as well as the diversity of skills of trained imaging specialists," he added. they.
Although deep learning has been studied in other diseases, its application to brain imaging is only beginning to be explored, the authors note.
"It's been a long time since we suspected that a certain way in the way we study the uptake of FDG in the brain can mean signs of AD or be an early prediction of the evolving MA. over time, but unfortunately there was no really definitive method, "said Sohn. .
"Unlike a brain tumor or a convulsive process, where you can actually see a focal area illuminate, the development of AD is subtle, diffuse and present throughout the brain – and although it shows a certain predilection for different regions, there are no focal findings, "he explained.
"The artificial intelligence of PET is gaining popularity among the press and research as a type of nonlinear precision algorithm that allows us to recover subtle but diffused results in an efficient way," she said. he declared.
To determine whether a deep learning algorithm (the Inception V3 deep learning model) could be trained to predict the final clinical diagnoses of patients undergoing PET imaging, and how its predictions, once trained, compare with the diagnoses obtained through current clinical reading methods, the researchers used 2109 prospective 18F-FDG PET imaging studies from ADNI conducted from 2005 to 2017 (n = 1002 patients).
Of this dataset, 90% (1921 imaging studies, 899 patients) was used for model training and internal validation. The remaining 10% (188 imaging studies, 103 patients) was used for model testing and served as an internal test set.
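As a rough illustration of this design, the sketch below groups studies by patient so that the 90/10 split never places the same patient's scans in both sets, then attaches a three-class head (AD, MCI, non-AD/MCI) to an Inception V3 backbone. It assumes a PyTorch/scikit-learn setup; the variable names and random stand-in data are hypothetical and are not the study's published code.

```python
import numpy as np
import torch.nn as nn
from sklearn.model_selection import GroupShuffleSplit
from torchvision.models import inception_v3

# Hypothetical stand-ins for the real data: one entry per imaging study,
# grouped by patient ID so no patient ends up in both splits.
n_studies, n_patients = 2109, 1002
rng = np.random.default_rng(0)
patient_ids = rng.integers(0, n_patients, size=n_studies)
labels = rng.integers(0, 3, size=n_studies)   # 0 = AD, 1 = MCI, 2 = non-AD/MCI
study_indices = np.arange(n_studies)

# Roughly 90/10 split at the patient level, mirroring the article's
# 1921-study training set and 188-study internal test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=0)
train_idx, test_idx = next(splitter.split(study_indices, labels, groups=patient_ids))
print(f"training studies: {len(train_idx)}, internal test studies: {len(test_idx)}")

# Inception V3 backbone with a 3-class classification head.
# weights=None here; in practice pretrained weights might be used.
model = inception_v3(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)
```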
The researchers also used an additional test set obtained from their own institution (the "independent test set"). This set, which served as an external test set, comprised 40 18F-FDG PET imaging studies from 40 patients not enrolled in ADNI, conducted between 2006 and 2016.
The final clinical diagnosis, determined after all follow-up examinations, was used as the ground-truth label for both datasets.
Three nuclear medicine doctors interpreted the 40 18F-FDG PET imaging studies in the independent test group.
Sensitivity "perfect", specificity "reasonable"
In the ADNI dataset, the mean age of the male patients was 76 years; the mean age of the female patients was 75 years (range, 55 to 96 years) (P < .001). Overall, 54% of the patients were men (547 of 1002); counted per imaging study, 58% were men (1225 of 2109).
The average follow-up period was 54 months per patient and 62 months per imaging study.
Of the 40 patients in the independent test set, seven were clinically diagnosed with AD, seven with MCI, and 26 with neither AD nor MCI at the end of the follow-up period.
The mean age of the men in this set was 66 years (range, 48 to 84 years); for the women, it was 71 years (range, 41 to 84 years).
Men made up 58% of the independent test set (23 of 40). The mean follow-up period for patients in the independent test set was 76 months: 82 months in the AD group, 75 months in the MCI group, and 74 months in the non-AD/MCI group.
Inception V3 was trained on 90% of the ADNI data and tested on the remaining 10%. The receiver operating characteristic (ROC) curve of the deep learning model gave an area under the curve (AUC) for prediction of AD of 0.92; for MCI, it was 0.63; and for non-AD/MCI, it was 0.73.
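Per-class AUC values like these are typically obtained from one-vs-rest ROC curves over the model's predicted class probabilities. A minimal sketch with scikit-learn, using random stand-in labels and scores rather than the study's data, is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins for the 188-study internal test set: true final
# diagnoses (0 = AD, 1 = MCI, 2 = non-AD/MCI) and predicted probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=188)
y_prob = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=188)

# One-vs-rest ROC AUC for each class, the kind of per-class AUC quoted above.
for cls, name in enumerate(["AD", "MCI", "non-AD/MCI"]):
    auc = roc_auc_score((y_true == cls).astype(int), y_prob[:, cls])
    print(f"{name}: AUC = {auc:.2f}")
```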
These results "indicate that the deep learning network had a reasonable ability to distinguish patients who eventually progressed to AD at the time of imaging of those who remained to have MCI or who were not AD / MCI. but was weaker at discriminating patients with MCI from others, "say the authors.
Sensitivity for prediction of AD, MCI, and non-AD/MCI was 81% (29 of 36), 54% (43 of 79), and 59% (43 of 73), respectively.
Specificity was 94% (143 of 152), 68% (74 of 109), and 75% (86 of 115), respectively.
Precision (positive predictive value) was 76% (29 of 38), 55% (43 of 78), and 60% (43 of 72), respectively.
The ROC analysis on the independent test set gave an AUC for prediction of AD, MCI, and non-AD/MCI of 0.98 (95% confidence interval [CI], 0.94 to 1.00), 0.52 (95% CI, 0.34 to 0.71), and 0.84 (95% CI, 0.70 to 0.99), respectively.
When the researchers took the class with the highest probability as the classification result, sensitivity was 100% (7 of 7), 43% (3 of 7), and 35% (9 of 26) for prediction of AD, MCI, and non-AD/MCI, respectively.
Specificity was 82% (27 of 33), 58% (19 of 33), and 93% (13 of 14), and precision was 54% (7 of 13), 18% (3 of 17), and 90% (9 of 10) for prediction of AD, MCI, and non-AD/MCI, respectively.
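Each of these percentages follows from a simple count-based definition; for example, the 100% AD sensitivity is 7 detected cases out of 7 true AD cases, and the 54% precision is 7 correct AD calls out of 13 made. A small sketch of the arithmetic, using the counts quoted above, is shown below; the helper functions are only illustrative.

```python
# Sensitivity, specificity, and precision from the counts quoted above for
# the AD class on the independent test set.
def sensitivity(tp, fn):
    return tp / (tp + fn)      # detected AD cases / all actual AD cases

def specificity(tn, fp):
    return tn / (tn + fp)      # correctly ruled-out cases / all non-AD cases

def precision(tp, fp):
    return tp / (tp + fp)      # correct AD calls / all AD calls

print(sensitivity(tp=7, fn=0))    # 1.00 -> 100% (7 of 7)
print(specificity(tn=27, fp=6))   # ~0.82 -> 82% (27 of 33)
print(precision(tp=7, fp=6))      # ~0.54 -> 54% (7 of 13)
```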
"With a perfect sensitivity rate and reasonable specificity on AD, the model retains a strong ability to predict final diagnoses before the full follow-up period, which on average ends 76 months later," commented authors.
Compared with radiology readers, the deep learning model performed statistically better at recognizing the patients who would ultimately receive a clinical diagnosis of AD.
It also performed better on the independent test set at recognizing patients with neither AD nor MCI. However, it was worse at recognizing patients who developed MCI but whose condition did not progress to AD, although this difference was not statistically significant.
"By predicting the final diagnosis of AD on the set of independent tests, he outperformed three radiology readers in the OCR space," note the authors.
"Although there have been false positives, the fact that the algorithm can detect each DA case is a feat," Sohn said.
"I consider that this algorithm complements the work of radiologists, especially in connection with other biochemical and imaging tests," he added.
Increased sophistication
Commenting on the study for Medscape Medical News, Arthur Toga, PhD, of the Laboratory of Neuro Imaging, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, Los Angeles, who was not involved in the study, said that "the authors have trained a deep learning model that predicts AD and MCI more accurately than professional human radiology readers."
The authors also "provided the structure and hyper-parameters of their neural network model, which can serve as a benchmark for further improvement," he noted.
The findings have implications for clinical use, Toga said. "As the sophistication of deep learning models continues to improve, we are certain to see wider adoption in clinical practice as a decision support tool."
He noted that although 18F-FDG PET is "one of the tools used in the diagnosis of AD," the high cost of the scans, as the authors note, "remains a challenge."
Sohn added, "One of the limitations of our study is that it is only 40 patients, which requires additional validation with larger datasets in different institutions, which is a necessary step before results can be integrated into clinical care. "
Mykol Larvie, MD, of the Division of Neuroradiology and the Department of Nuclear Medicine at the Cleveland Clinic in Ohio, wrote in an accompanying editorial that the researchers' "application of machine learning and description of the test data" allows "other researchers to reproduce their analysis."
The collection and sharing of data for the project were funded by ADNI, the National Institutes of Health, and the US Department of Defense. Drs Sohn, Larvie, and Curfman have disclosed no relevant financial relationships. Disclosures for the coauthors are listed in the original articles.
Radiology. Published online November 6, 2018. Full text, Editorial