Scientists ask for rules on the evaluation of predictive AI in medicine – Axios




FDA tells Axios: As Commissioner Scott Gottlieb noted last year, the agency is developing a framework to manage the advance of AI in medicine. While declining to comment on this paper, a spokesperson said the FDA has used its existing process for new medical devices to clear these AI algorithms:

  • Viz.ai, which helps providers detect strokes on scans.
  • IDx-DR, which detects diabetic retinopathy.
  • OsteoDetect, which detects bone fractures.

Meanwhile, Ravi B. Parikh, co-author of the paper and a researcher at the University of Pennsylvania's Perelman School of Medicine, tells Axios that the FDA needs to set standards to keep up with the "staggering" pace of development in AI. He adds:

"Five years ago, AI and predictive analytics had not yet made a significant impact on clinical practice. In the last 2 to 3 years, premarket authorizations have been granted for AI applications ranging from sepsis prediction to interpretation in radiology."

"But if these tools are to be used to determine patient care … they should meet standards of clinical benefit just as the majority of our drugs and diagnostic tests do. We believe it is essential to proactively create and formalize these standards to protect patients and safely translate algorithms into clinical interventions."

Why it matters: Advanced algorithms present both opportunities and challenges, says Amol S. Navathe, co-author and assistant professor at Penn's School of Medicine. He tells Axios:

"The real opportunity is that these algorithms could outperform clinicians in medical decision-making, which is a big deal. The challenge is that the data generated for the algorithms is not generated randomly: [what] the data algorithms 'see' results from a human decision. We have a way to go in our scientific approaches to overcome this challenge and consistently develop algorithms that can help improve the decisions of human clinicians."

Details: The authors list the following as recommended standards …

  1. Meaningful endpoints: The clinical benefits of algorithms should be rigorously validated by the FDA against downstream outcomes, such as overall survival, or clinically relevant measures, such as the number of misdiagnoses.
  2. Appropriate benchmarks: These should be determined along the lines of the FDA's recent clearance of Viz.AI, the deep learning algorithm for stroke diagnosis, which was shown to diagnose strokes faster than neuroradiologists.
  3. Variable input specifications: These should be clarified across institutions, for example by defining inputs from electronic health records so that results are reliable at any institution. In addition, algorithms need to be trained on data from populations that are as broadly representative as possible, so they generalize across populations.
  4. Guidance on possible interventions: These would be tied to an algorithm's results and aimed at improving patient care.
  5. Rigorous post-approval or post-clearance audits by the FDA: These would particularly verify how new variables incorporated through deep learning may have altered an algorithm's performance over time. For example, regular audits could reveal that an algorithm exhibits systematic bias against certain groups after being deployed in large populations. This could be tracked in a manner similar to the FDA's current Sentinel Initiative program for approved drugs and devices.
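To make the post-approval audit idea in point 5 concrete, here is a minimal, hypothetical sketch of how a deployed algorithm's error rates could be compared across patient subgroups. All function names and data below are invented for illustration; a real audit would draw on live clinical outcome data and the appropriate statistical tests.

```python
def false_negative_rate(records):
    """Fraction of truly positive cases the algorithm missed."""
    positives = [r for r in records if r["truth"]]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if not r["predicted"])
    return missed / len(positives)

def audit_by_group(records, group_key="group"):
    """Compute the false-negative rate separately for each subgroup."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: false_negative_rate(rs) for g, rs in groups.items()}

# Toy post-deployment data: each record holds the patient's subgroup,
# the true diagnosis, and the algorithm's prediction.
records = [
    {"group": "A", "truth": True,  "predicted": True},
    {"group": "A", "truth": True,  "predicted": True},
    {"group": "A", "truth": True,  "predicted": False},
    {"group": "A", "truth": False, "predicted": False},
    {"group": "B", "truth": True,  "predicted": False},
    {"group": "B", "truth": True,  "predicted": False},
    {"group": "B", "truth": True,  "predicted": True},
    {"group": "B", "truth": False, "predicted": True},
]

rates = audit_by_group(records)
# In this toy data, group B's cases are missed twice as often as group A's,
# which is exactly the kind of drift or bias a recurring audit would flag.
```

A regulator-style audit would rerun a check like this on a schedule, flagging any subgroup whose error rate diverges beyond a preset threshold.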

External comment: Eric Topol, founder and director of the Scripps Research Translational Institute, who was not involved in the paper, says the timing of the proposed standards is "very smart," coming before advanced algorithms are embedded in too many devices.

  • "[The algorithm] does not necessarily mean helping people," Topol tells Axios.
  • Worse, he adds, if the variables are off, predictive analytics can have negative ramifications.

What's next: The scientists hope the FDA will incorporate the proposed standards into its current pre-certification program under the Digital Health Innovation Action Plan to study the clinical outcomes of AI-based tools, Parikh said.
