Apple explains how iPhones will scan photos for images of child sexual abuse




[Image: close-up of a finger scrolling on a smartphone screen in a dark environment.]

Shortly after reports today that Apple would start scanning iPhones for images of child abuse, the company confirmed its plan and provided details in a press release and technical summary.

“Apple’s method of detecting known CSAM (Child Sexual Abuse Material) is designed with user privacy in mind,” Apple’s announcement said. “Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.”

Apple provided more details on the CSAM detection system in a technical summary and said its system uses a threshold “set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
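For a rough sense of how a match threshold can drive error rates that low, here is an illustrative back-of-the-envelope calculation. Apple has not published its exact threshold or per-image false-match rate, so every number below is an assumption, chosen only to show how requiring many independent matches pushes the per-account error rate far down:

    from math import exp, factorial

    p = 1e-6          # assumed chance that one innocent photo falsely matches a CSAM hash
    n = 100_000       # assumed number of photos an account uploads per year
    threshold = 30    # assumed number of matches required before vouchers can be opened

    lam = n * p       # expected false matches per account per year

    # Poisson approximation of the binomial tail: probability that an innocent
    # account accumulates at least `threshold` false matches in a year.
    tail = sum(exp(-lam) * lam**k / factorial(k) for k in range(threshold, threshold + 100))
    print(f"chance an innocent account crosses the threshold: {tail:.3e}")  # far below 1e-12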

The changes will roll out “later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey,” Apple said. Apple will also deploy software capable of analyzing images in the Messages app for a new system that will “notify children and their parents when they receive or send sexually explicit photos.”

Apple accused of building “surveillance infrastructure”

Despite assurances from Apple, security experts and privacy advocates have criticized the plan.

“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope creep not only in the United States, but around the world,” said Greg Nojeim, co-director of the Security & Surveillance Project at the Center for Democracy & Technology. “Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services.”

For years, Apple has resisted pressure from the US government to install a “backdoor” in its encryption systems, arguing that doing so would undermine security for all users. Apple has been lauded by security experts for that stance. But with its plan to deploy software that performs on-device scanning and shares selected results with authorities, Apple is coming dangerously close to acting as a tool for government surveillance, Johns Hopkins University cryptography professor Matthew Green suggested on Twitter.

The client-side scanning that Apple announced today could eventually “be a key ingredient in adding surveillance to encrypted messaging systems,” he wrote. “The ability to add scanning systems like this to E2E [end-to-end encrypted] messaging systems has been a major ‘demand’ from law enforcement around the world.”

Message scanning and Siri “interventions”

In addition to scanning devices for images that match the CSAM database, Apple said it will update the Messages app to “add new tools to warn children and their parents when receiving or sending sexually explicit photos.”

“Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages,” Apple said.

When an image in Messages is flagged, “the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo.” The system will let parents receive a message if their child views a flagged photo, and “similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and parents can receive a message if the child chooses to send it,” Apple said.

Apple said it will also update Siri and Search to “provide parents and children expanded information and help if they encounter unsafe situations.” The Siri and Search systems will “intervene when users perform searches for queries related to CSAM” and will “explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”

The Center for Democracy & Technology called the scanning of photos in Messages a “backdoor,” writing:

The mechanism that will enable Apple to scan images in Messages is not an alternative to a backdoor; it is a backdoor. Client-side scanning on one “end” of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have cautioned against client-side scanning because it could be used as a way for governments and companies to police the content of private communications.

Apple’s technology to analyze images

Apple’s technical summary on CSAM detection includes a few privacy promises in the introduction. “Apple does not learn anything about images that do not match the known CSAM database,” it says. “Apple can’t access metadata or visual derivatives for matched CSAM images until a matching threshold is exceeded for an iCloud Photos account.”

Apple’s hashing technology is called NeuralHash, and it “analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality still have the same NeuralHash value,” Apple wrote.
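NeuralHash itself is produced by a neural network and Apple has not published its internals, but the matching property it describes is the defining trait of perceptual hashing. The toy “average hash” below is only a sketch of that general idea, not Apple’s algorithm: visually near-identical images map to the same or nearby fingerprints, unlike a cryptographic hash, where any change scrambles the output.

    def average_hash(gray_pixels):
        """Toy perceptual hash: gray_pixels is an 8x8 grid of grayscale values (0-255),
        e.g. an image downscaled ahead of time. Not Apple's NeuralHash."""
        flat = [v for row in gray_pixels for v in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for v in flat:
            # One bit per pixel: brighter than average or not -> a 64-bit fingerprint
            # that survives resizing and recompression of the source image.
            bits = (bits << 1) | (1 if v >= mean else 0)
        return bits

    def hamming_distance(a, b):
        """Number of differing bits; 0 means the fingerprints match exactly."""
        return bin(a ^ b).count("1")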

Before an iPhone or other Apple device uploads an image to iCloud, “the device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.”
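Schematically, a voucher carries two things: a cryptographic header and an encrypted payload. The field names below are illustrative, not Apple’s; they simply restate the description above in code form, with the payload encrypted under a key the device derives during the blinded-table lookup described further below.

    from dataclasses import dataclass

    @dataclass
    class SafetyVoucher:
        # Illustrative field names; the real voucher format is not public.
        crypto_header: bytes      # lets the server attempt its own key derivation
        encrypted_payload: bytes  # NeuralHash + visual derivative, unreadable without the key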

Using “threshold secret sharing,” Apple’s “system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account exceeds a threshold of known CSAM content,” the document says. “Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.”
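Apple has not published the details of its threshold scheme, but the property it relies on is the one provided by textbook threshold secret sharing: any set of shares at or above the threshold reconstructs a secret, while any smaller set reveals nothing. A minimal Shamir-style sketch over a prime field, with demo parameters, illustrates that property:

    import random

    PRIME = 2**127 - 1  # a large prime field for the demo; not Apple's parameters

    def make_shares(secret, threshold, count):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    secret = 123456789
    shares = make_shares(secret, threshold=3, count=5)
    assert reconstruct(shares[:3]) == secret   # any 3 shares recover the secret
    assert reconstruct(shares[:2]) != secret   # 2 shares yield nothing useful (w.h.p.)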

While noting the one-in-one-trillion probability of a false positive, Apple said it “also manually reviews all reports made to NCMEC to ensure reporting accuracy.” Users can “file an appeal to have their account reinstated” if they believe their account was flagged in error.

User devices to store the blinded CSAM database

Users’ devices will store a “blinded database” that allows the device to determine when a photo matches an image in the CSAM database, Apple explained:

First, Apple receives the NeuralHashes corresponding to known CSAM from the above child safety organizations. Next, these NeuralHashes go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users’ devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database.
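The following sketch shows the shape of that server-side blinding step. To keep it readable, it uses exponentiation in a multiplicative group modulo a prime as a stand-in for the elliptic curve operations Apple describes, with toy parameters throughout; only the structure matters here: hash to a group element, raise it to a secret known only to the server, and file the result at a table position determined purely by the NeuralHash.

    import hashlib
    import secrets

    P = 2**255 - 19                               # demo modulus; a stand-in, not Apple's curve
    SERVER_SECRET = secrets.randbelow(P - 2) + 2  # blinding secret known only to the server

    def hash_to_group(neural_hash: bytes) -> int:
        """Toy hash-to-group mapping for a NeuralHash."""
        return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

    def blind(neural_hash: bytes) -> int:
        """Blind a known-CSAM NeuralHash with the server-side secret."""
        return pow(hash_to_group(neural_hash), SERVER_SECRET, P)

    def table_position(neural_hash: bytes, table_size: int = 1 << 16) -> int:
        """Position in the hash table is purely a function of the NeuralHash."""
        return int.from_bytes(hashlib.sha256(neural_hash).digest()[:4], "big") % table_size

    def build_blinded_table(known_csam_hashes):
        """The blinded table that would be shipped to devices in this sketch."""
        return {table_position(nh): blind(nh) for nh in known_csam_hashes}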

An iPhone or other device will analyze user photos, compute a NeuralHash, and look up the corresponding entry in the blinded hash table. The device also “uses the blinded hash that the system looked up to obtain a derived encryption key” and uses that encryption key “to encrypt the associated payload data.”
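Continuing the toy construction above (and reusing the illustrative SafetyVoucher fields sketched earlier), the device-side step looks roughly like this. The XOR keystream stands in for a real authenticated cipher, and sending the bare group element as the header is a simplification: Apple’s actual protocol also conceals the NeuralHash inside the header.

    def derive_key(group_element: int) -> bytes:
        """Derive a symmetric key from a (blinded) group element."""
        return hashlib.sha256(group_element.to_bytes(32, "big")).digest()

    def xor_stream(key: bytes, data: bytes) -> bytes:
        """Toy symmetric cipher: XOR with a SHA-256-based keystream (same call decrypts)."""
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    def make_voucher(neural_hash: bytes, visual_derivative: bytes, blinded_table) -> SafetyVoucher:
        entry = blinded_table.get(table_position(neural_hash), 0)  # device cannot interpret this
        key = derive_key(entry)               # correct key only if the photo matches that entry
        payload = neural_hash + visual_derivative
        header = hash_to_group(neural_hash)   # lets the server attempt its own key derivation
        return SafetyVoucher(crypto_header=header.to_bytes(32, "big"),
                             encrypted_payload=xor_stream(key, payload))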

Combined with other steps, this ensures that only vouchers for images matching the CSAM database can be decrypted, Apple wrote:

If the user image hash matches the entry in the known CSAM hash list, then the NeuralHash of the user image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and, using the server-side secret, can compute the derived encryption key and successfully decrypt the associated payload data.

If the user image does not match, the above step will not lead to the correct derived encryption key, and the server will be unable to decrypt the associated payload data. The server thus learns nothing about non-matching images.

The device does not learn about the result of the match, because that requires knowledge of the server-side blinding secret.

Finally, the client uploads the image to the server along with the voucher, which contains the encrypted payload data and the cryptographic header.
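The server-side half of the same toy sketch shows the property described above: with its blinding secret, the server re-derives a key from the voucher’s header, and that key decrypts the payload only for images whose NeuralHash was in the blinded table. (In Apple’s real system, the threshold secret-sharing layer must also be satisfied before any voucher can be opened.)

    def server_open_voucher(voucher: SafetyVoucher) -> bytes:
        """Attempt to open a voucher on the server side of the toy sketch."""
        header_element = int.from_bytes(voucher.crypto_header, "big")
        key = derive_key(pow(header_element, SERVER_SECRET, P))  # equals the device's key only on a match
        return xor_stream(key, voucher.encrypted_payload)        # meaningless bytes unless it matched

If the image matched, the decrypted bytes are the NeuralHash plus the visual derivative; otherwise the output is indistinguishable from noise, mirroring Apple’s claim that the server learns nothing about non-matching images.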

As noted earlier, these details come from Apple’s technical summary. Apple also released a longer and more detailed explanation of the “private set intersection” cryptography that determines whether a photo matches the CSAM database without revealing the result.


