AI is on par with human experts in medical diagnosis, study finds




A review has found that artificial intelligence is on a par with human experts when it comes to making medical diagnoses from images.

The potential for artificial intelligence in healthcare has generated excitement, with advocates claiming it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of treatments tailored to the patient. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.

However, experts warned that the latest findings are based on a small number of studies, since the field is littered with poor-quality research.

One burgeoning application is the use of artificial intelligence to interpret medical images – a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images is fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in diagnosing diseases ranging from cancers to eye conditions.
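For readers curious what "learning to classify labelled images" means in practice, here is a deliberately simplified sketch. It uses a toy nearest-centroid classifier on invented pixel data – a stand-in for deep learning, which learns far richer features, and not the systems reviewed in the study:

```python
# Toy supervised image classification: each "image" is a flat list of
# pixel intensities with a label. The classifier learns a per-class
# average (centroid) and assigns new images to the nearest one.
# All data here is invented purely for illustration.

def train(images, labels):
    """Compute the mean pixel vector (centroid) for each class label."""
    sums, counts = {}, {}
    for pixels, label in zip(images, labels):
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(model, pixels):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, pixels))
    return min(model, key=lambda lbl: dist(model[lbl]))

# Pretend "diseased" tissue images are bright and "healthy" ones dark.
train_images = [[0.9, 0.8, 0.9, 0.7], [0.8, 0.9, 0.9, 0.9],
                [0.1, 0.2, 0.1, 0.0], [0.2, 0.1, 0.0, 0.1]]
train_labels = ["diseased", "diseased", "healthy", "healthy"]

model = train(train_images, train_labels)
print(classify(model, [0.85, 0.9, 0.8, 0.75]))  # → diseased
```

A real diagnostic system replaces the hand-rolled centroid with a deep neural network trained on thousands of expertly labelled scans, but the supervised principle – labelled examples in, a classifier out – is the same.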

However, questions remain about how such deep learning systems measure up to human skills. Researchers say they have now conducted the first comprehensive review of published studies on the subject, and found humans and machines to be on a par.

Professor Alastair Denniston, of University Hospitals Birmingham NHS Foundation Trust and a co-author of the study, said the results were encouraging, but that the study was a reality check for some of the hype about AI.

Dr Xiaoxuan Liu, the lead author of the study, who is at the same NHS trust, agreed. "There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent," she said.

Writing in the Lancet Digital Health, Denniston, Liu and colleagues reported how they focused on research papers published since 2012 – a pivotal year for deep learning.

An initial search turned up more than 20,000 relevant studies. However, only 14 studies – all based on human diseases – reported good-quality data, tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.

The team pooled the most promising results from within each of the 14 studies to reveal that deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.
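The two figures quoted are, in effect, sensitivity (the rate at which genuine disease is correctly detected) and specificity (the rate at which healthy cases are correctly given the all-clear). A short sketch with invented counts shows how such rates are derived; the numbers below are chosen purely so the results match the pooled deep-learning figures in the review:

```python
# Sensitivity and specificity from a confusion matrix.
# Counts are invented for illustration, picked to reproduce the
# review's pooled deep-learning figures (87% and 93%).

def sensitivity(tp, fn):
    """Share of genuinely diseased cases flagged as diseased."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Share of genuinely healthy cases cleared as healthy."""
    return tn / (tn + fp)

tp, fn = 87, 13   # diseased patients: correctly detected vs missed
tn, fp = 93, 7    # healthy patients: correctly cleared vs false alarms

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # → sensitivity = 87%
print(f"specificity = {specificity(tn, fp):.0%}")  # → specificity = 93%
```

Both rates matter: a system could trivially achieve 100% sensitivity by flagging everyone as diseased, which is why the review reports the two figures side by side.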

However, the healthcare professionals in these scenarios were not given the additional patient information they would have in the real world, which could steer their diagnosis.

Professor David Spiegelhalter, chair of the Winton Centre for Risk and Evidence Communication at the University of Cambridge, said the field was awash with poor-quality research.

"This excellent article demonstrates that the hype on AI in the medical field masks the deplorable quality of almost all evaluation studies," he said. "In-depth learning can be a powerful and impressive technique, but clinicians and curators should ask themselves the crucial question: what does this really add to clinical practice?"

Denniston, however, remained optimistic about the potential of AI in healthcare, saying such deep learning systems could act as a diagnostic tool and help clear backlogs of scans and images. What is more, said Liu, they could prove useful in places that lack experts to interpret the images.

Liu said it would be important to use deep learning systems in clinical trials to assess whether patient outcomes improved compared with current practice.

Dr Raj Jena, an oncologist at Addenbrooke's Hospital in Cambridge, who was not involved in the study, said deep learning systems would be important in the future, but stressed that they needed robust real-world testing. He also said it was important to understand why such systems sometimes made the wrong assessment.

"If you are an in-depth learning algorithm, when you fail, you can often fail in a very unpredictable and dramatic way," he said.
