Apple’s CSAM detection technology is under fire – again




Apple has faced a monumental backlash over a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its more than one billion users, but the technology is already facing heat from security researchers who say the algorithm produces false results.

NeuralHash is designed to identify known CSAM on a user’s device without having to possess the image or know its contents. Because a user’s photos stored in iCloud are encrypted so that even Apple cannot access the data, NeuralHash instead scans for known CSAM on the user’s device, an approach Apple says is more privacy-friendly because it limits scanning to photos, rather than scanning a user’s entire file library as some other companies’ systems do.

Apple does this by checking whether images on a user’s device have the same hash – a string of letters and numbers that can uniquely identify an image – as hashes of known CSAM provided by child protection organizations such as NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for manual review before the account owner is reported to law enforcement. Apple says the risk of a false positive is about one in a trillion accounts.
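For illustration only, here is a minimal sketch of that matching logic, using the open-source imagehash library’s pHash as a stand-in for Apple’s proprietary NeuralHash; the blocklist values and photo paths are hypothetical placeholders.

```python
# Illustrative sketch only: pHash from the `imagehash` library stands in for
# Apple's proprietary NeuralHash. The blocklist entries and photo paths are
# hypothetical placeholders.
from PIL import Image
import imagehash

REPORT_THRESHOLD = 30  # Apple's stated threshold before manual review

# Hypothetical perceptual hashes supplied by a child-safety organization.
known_hashes = {"f0e4c2d7a8b91635", "9a7b3c1d5e2f4a60"}

def count_matches(photo_paths):
    """Count how many photos hash to a value on the blocklist."""
    matches = 0
    for path in photo_paths:
        photo_hash = str(imagehash.phash(Image.open(path)))
        if photo_hash in known_hashes:
            matches += 1
    return matches

if count_matches(["IMG_0001.jpg", "IMG_0002.jpg"]) >= REPORT_THRESHOLD:
    print("Threshold reached: queue account for manual review")
```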

But security experts and privacy advocates have expressed concern that the system could be abused by well-resourced actors, such as governments, to implicate innocent people, or that it could be manipulated to detect other material that authoritarian nation-states find objectionable. NCMEC called the critics “the screaming voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology whether or not they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, but that he was able to reconstruct the technology to help other security researchers better understand the algorithm before it is rolled out to iOS and macOS devices later this year.
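As a rough sketch of how such a reverse-engineered model can be exercised outside of iOS, the following assumes the network has been exported to ONNX along with a seed file for a final projection step; the file names, input size, and seed format are assumptions based on public write-ups, not Apple’s code, and the shipped implementation may differ.

```python
# Minimal sketch, assuming a hypothetical ONNX export ("model.onnx") and a
# hypothetical seed file ("neuralhash_seed.dat"); not Apple's actual code.
import numpy as np
import onnxruntime
from PIL import Image

session = onnxruntime.InferenceSession("model.onnx")  # hypothetical file name

# Resize and normalize the image to an assumed input range of [-1, 1].
img = Image.open("photo.jpg").convert("RGB").resize((360, 360))
arr = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0
arr = arr.transpose(2, 0, 1)[np.newaxis, :]  # NCHW layout

input_name = session.get_inputs()[0].name
embedding = session.run(None, {input_name: arr})[0].reshape(-1)

# Project the embedding onto a fixed seed matrix and keep only the signs,
# yielding a short bit-string hash (seed file format is assumed).
seed = np.frombuffer(open("neuralhash_seed.dat", "rb").read(), dtype=np.float32)
seed = seed.reshape(96, -1)
bits = (seed @ embedding >= 0).astype(int)
print("".join(map(str, bits)))
```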

It wasn’t long before others began tinkering with the published code, and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is when two entirely different images produce the same hash. Cory Cornelius, a well-known Intel Labs researcher, discovered the collision, and Ygvar confirmed it shortly after.
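Checking such a report is straightforward in principle: hash both images and compare the outputs. The sketch below again uses pHash as a stand-in for NeuralHash, with placeholder file names.

```python
# Sketch of verifying a reported collision: two unrelated images should not
# share a perceptual hash, so a Hamming distance of zero indicates a collision.
# pHash stands in for NeuralHash; the file names are placeholders.
from PIL import Image
import imagehash

hash_a = imagehash.phash(Image.open("original.png"))
hash_b = imagehash.phash(Image.open("unrelated.png"))

print(f"hash_a = {hash_a}, hash_b = {hash_b}")
print(f"Hamming distance: {hash_a - hash_b}")

if hash_a == hash_b:
    print("Collision: two different images produce the same hash")
```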

Hash collisions can spell disaster for systems that rely on cryptography to keep them secure. Over the years, several widely used cryptographic hashing algorithms, including MD5 and SHA-1, have been retired after collision attacks rendered them ineffective.

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people do not understand that the time between discovering the iOS NeuralHash code and [the] first collision was not months or days, but a few hours.”

When contacted, an Apple spokesperson declined to comment on the matter. But on a background call in which reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision, arguing that the protections it puts in place – such as a manual review of photos before they are reported to law enforcement – are designed to prevent abuse. Apple also said the reverse-engineered version of NeuralHash is a generic version, not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts who are voicing concern about the technology. A senior lawmaker in the German parliament this week sent a letter to Apple chief executive Tim Cook, saying the company was heading down a “dangerous path” and urging Apple not to implement the system.


