Apple says it will start scanning iCloud photos for child abuse images – TechCrunch




Later this year, Apple will deploy technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it believes will protect user privacy.

Apple told TechCrunch that detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting children who use its services from harm online, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services – Dropbox, Google, and Microsoft to name a few – already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning user files in the cloud by giving users the ability to encrypt their data before it reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology – NeuralHash – instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s efforts broke on Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with some resistance from security experts and privacy advocates, but also from users accustomed to Apple’s approach to security and privacy, one that most other companies don’t have.

Apple is trying to allay those fears by baking privacy in through multiple layers of encryption, designed in a way that requires several steps before anything ever reaches Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated for release in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. Ordinarily, any slight change to an image alters its hash and can prevent a match. Apple says NeuralHash tries to ensure that identical and visually similar images, such as cropped or edited versions, produce the same hash.
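
Apple has not published NeuralHash’s internals beyond a technical summary, but the general idea of a perceptual hash can be illustrated with a toy “average hash”. The sketch below is plain Python using the Pillow imaging library; none of it is Apple’s code, and the helper names are hypothetical. It shows why visually similar images can land on the same or nearby hash values, whereas a cryptographic hash would change completely after a one-pixel edit.

```python
# Toy "average hash": a simple perceptual hash, not Apple's NeuralHash.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to a small grayscale thumbnail so minor edits and re-compression wash out.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance suggests visually similar images.
    return bin(a ^ b).count("1")
```

Two copies of the same photo, one lightly cropped or re-compressed, typically end up within a few bits of each other under a scheme like this, which is the property Apple describes for NeuralHash.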

Before an image is uploaded to iCloud Photos, these hashes are compared on the device against a database of known hashes of child abuse images, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing the nature of the image or alerting the user.
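
Apple has not said which private set intersection construction it uses. The sketch below is a toy Diffie-Hellman-style PSI in Python, with made-up stand-ins for image hashes, intended only to show the shape of the idea: two parties can detect which hashed items they have in common without either side handing over its raw list. Real protocols add blinding, oblivious PRFs and other protections, and in Apple’s design the device does not learn the match result the way this toy client does.

```python
# Toy Diffie-Hellman-style private set intersection (PSI) sketch.
# Not Apple's protocol; the parameters and data below are illustrative only.
import hashlib
import secrets

P = 2**127 - 1  # demo prime modulus, not a vetted cryptographic group

def hash_to_group(item: bytes) -> int:
    # Map an item (here, a stand-in for an image hash) into the group mod P.
    h = int.from_bytes(hashlib.sha256(item).digest(), "big")
    return (h % (P - 2)) + 2  # avoid 0 and 1

# Each side picks a secret exponent.
server_secret = secrets.randbelow(P - 2) + 1   # database holder
client_secret = secrets.randbelow(P - 2) + 1   # user's device

database = [b"known-hash-1", b"known-hash-2"]          # hypothetical known-CSAM hashes
device_photos = [b"holiday-photo", b"known-hash-2"]    # hypothetical device hashes

# Server publishes its set blinded under its own secret: H(y)^s.
server_blinded = {pow(hash_to_group(y), server_secret, P) for y in database}

# Device blinds its items (H(x)^c) and the server re-blinds them (H(x)^(c*s)).
client_blinded = [pow(hash_to_group(x), client_secret, P) for x in device_photos]
doubly_blinded_client = [pow(v, server_secret, P) for v in client_blinded]

# Device re-blinds the server's published set (H(y)^(s*c)) and compares:
# the doubly blinded values are equal exactly when the underlying items match.
doubly_blinded_server = {pow(v, client_secret, P) for v in server_blinded}
matches = [x for x, v in zip(device_photos, doubly_blinded_client)
           if v in doubly_blinded_server]
print(matches)  # [b'known-hash-2']
```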

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing, which allows it to decrypt the contents only if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said, for example, that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any ten of those images.
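
The thousand-pieces, threshold-of-ten example maps onto a standard threshold construction, Shamir’s secret sharing, where a secret becomes the constant term of a random polynomial and each share is one point on it. The sketch below is a minimal Python implementation of that general technique with illustrative parameters; it is not Apple’s actual scheme, but it shows why holding fewer shares than the threshold recovers nothing useful, while any ten shares recover the secret exactly.

```python
# Minimal Shamir threshold secret sharing sketch (illustrative only).
import secrets

PRIME = 2**127 - 1  # demo field for the arithmetic

def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 with the secret as the constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, power, PRIME) for power, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Example mirroring Apple's illustration: 1,000 pieces, threshold of 10.
key = secrets.randbelow(PRIME)
shares = split_secret(key, n_shares=1000, threshold=10)
print(reconstruct(shares[:10]) == key)  # True: any ten shares recover the key
print(reconstruct(shares[:9]) == key)   # False (overwhelmingly likely): nine are not enough
```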

It is at this point that Apple can decrypt the matching images, manually verify the contents, disable the user’s account, and report the imagery to NCMEC, which then passes it on to law enforcement. Apple says this process is more privacy-conscious than scanning files in the cloud, because NeuralHash only searches for known, not new, child abuse images. Apple said there is a one-in-a-trillion chance of a false positive, but an appeals process is in place in case an account is flagged in error.

Apple has published technical details on its website about how NeuralHash works, which have been reviewed by cryptography experts.

But despite the broad support for efforts to address child sexual abuse, there is still a component of surveillance that many would feel uncomfortable leaving to an algorithm, and some security experts are calling for more public discussion before Apple rolls out the technology to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also come under considerable pressure from the US government and its allies to weaken or backdoor the encryption used to protect their users’ data so that law enforcement can investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have met resistance against efforts to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access it, Reuters reported last year that Apple dropped a plan to encrypt users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The announcement of Apple’s new CSAM detection tool, made without public debate, also raised concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts being flagged and shut down, but Apple played down those concerns and said a manual review would examine the evidence for possible misuse.

Apple said that NeuralHash will roll out in the United States at first, but would not say if, or when, it will be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but it will be a requirement if users do. After all, your device is yours, but Apple’s cloud is not.


