Researchers warn that medical AI is vulnerable to attack


The risk of "medical error" takes on a new and more disturbing meaning when the errors are not human, but the motives are.

In an article published in the journal Science, US researchers point to the growing potential for adversarial attacks on medical machine-learning systems designed to influence or manipulate them.

Because of the nature of these systems and their unique vulnerabilities, small but carefully crafted modifications to how inputs are presented can completely alter outputs, subverting otherwise reliable algorithms, the authors say.

And they present a striking example: their own success in using "adversarial noise" to convince algorithms to diagnose benign moles as malignant with complete confidence.

The Boston-based team, led by Samuel Finlayson of Harvard Medical School, brought together experts in health, law and technology.

In their paper, the authors note that adversarial manipulations can take the form of imperceptibly small perturbations of input data, such as modifications to every pixel in an image that are invisible to the human eye.
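To make that idea concrete, here is a minimal sketch of the fast gradient sign method, one standard way of generating such imperceptible perturbations. The toy linear classifier, the epsilon value and all other details below are hypothetical stand-ins, not the authors' actual experiment on mole images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" (flattened 8x8 grayscale, pixels in [0, 1]) and a fixed
# linear classifier standing in for a trained diagnostic model.
x = rng.random(64)
w = rng.standard_normal(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, label):
    # Binary cross-entropy of a logistic model p = sigmoid(w . x)
    p = sigmoid(w @ x)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

label = 0  # the "benign" class under the true diagnosis

# Gradient of the loss with respect to the INPUT (not the weights):
# d/dx BCE(sigmoid(w . x), y) = (sigmoid(w . x) - y) * w
grad_x = (sigmoid(w @ x) - label) * w

# FGSM step: shift each pixel by at most eps in the direction that
# increases the loss. For small eps the change is invisible to the eye,
# yet it pushes the model toward a confident wrong answer.
eps = 0.05
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

print(f"max pixel change: {np.abs(x_adv - x).max():.3f}")
print(f"loss before: {loss(x, label):.3f}  after: {loss(x_adv, label):.3f}")
```

The key point the sketch illustrates is that the attacker never touches the model itself: only the input changes, and by a bounded, near-invisible amount per pixel.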

"Researchers have demonstrated adversarial examples for virtually every type of machine-learning model ever studied, and for a wide range of data types, including images, audio, text, and other data," they write.

To date, they say, no high-profile adversarial attack has been identified in the health sector. However, the potential exists, especially in the billing and medical insurance sector, where machine learning is well established: adversarial attacks could be used to generate false medical claims and other fraudulent behavior.

To address these emerging concerns, they advocate an interdisciplinary approach to policy development in machine learning and artificial intelligence, one that includes the active participation of medical, technical, legal and ethical experts across the whole health-care community.

"Adversarial attacks are one of many possible failure modes of medical machine-learning systems, all of which are critical considerations for model developers and users," they write.

"From a policy perspective, however, adversarial attacks are an intriguing new challenge because they offer users of an algorithm the ability to influence its behavior in subtle, impactful, and sometimes ethically ambiguous ways."
