Apple pushes back on concerns over its child abuse scanning in new FAQ

In a new FAQ, Apple has tried to allay concerns that its new measures against child abuse could be turned into surveillance tools by authoritarian governments. “Let us be clear, this technology is limited to detecting CSAM [child sexual abuse material] stored in iCloud and we will not accede to any government’s request to expand it,” the company writes.

Apple’s new tools, announced last Thursday, include two features designed to protect children. One, called ‘communication safety,’ uses on-device machine learning to identify and blur sexually explicit images received by children in the Messages app, and can notify a parent if a child aged 12 or under decides to view or send such an image. The second is designed to detect known CSAM by scanning users’ images if they choose to upload them to iCloud. Apple is notified if CSAM is detected, and it will alert the authorities once it has verified that such material exists.
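To make the second mechanism concrete, the sketch below shows hash-list matching in its simplest form: an image is reduced to a fingerprint and checked against a fixed set of fingerprints supplied by a child-safety organization. This is not Apple’s actual system, which relies on a perceptual hash and additional cryptographic protections; the hash function, the KNOWN_HASHES set, and the uploads folder here are placeholders for illustration only.

```python
# Minimal, self-contained sketch of hash-list matching: reduce an image to a
# fingerprint and check it against a fixed set of known fingerprints.
# This is NOT Apple's actual detection system; the hash function, the
# KNOWN_HASHES set, and the "uploads" folder are illustrative placeholders.

import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of known material (hex digests).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_path: Path) -> str:
    """Return a fingerprint for the image file.

    A production system would use a perceptual hash that survives resizing
    and re-encoding; a cryptographic hash is used here only to keep the
    sketch dependency-free.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def matches_known_material(image_path: Path) -> bool:
    """True if the image's fingerprint appears in the known-hash set."""
    return fingerprint(image_path) in KNOWN_HASHES


if __name__ == "__main__":
    # Check every JPEG queued for upload and flag any matches for review.
    for path in Path("uploads").glob("*.jpg"):
        if matches_known_material(path):
            print(f"Flagged for review: {path}")
```

In the design Apple describes, the comparison happens on the device before upload, and a match is only revealed after verification, details that a plain set lookup like the one above does not capture.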

The plans drew a swift backlash from digital privacy groups and activists, who argued that they introduce a backdoor into Apple’s software. These groups note that once such a backdoor exists, there is always the potential to expand it to search for types of content beyond child sexual abuse material. Authoritarian governments could use it to scan for politically dissenting material, or anti-LGBT regimes could use it to crack down on sexual expression.

“Even a thoroughly documented, carefully thought-out, and narrowly scoped backdoor is still a backdoor,” wrote the Electronic Frontier Foundation. “We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content.”

However, Apple argues that it has safeguards in place to prevent its system from being used to detect anything other than images of sexual abuse. It says its list of banned images is provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations, and that the system “only works with CSAM image hashes provided by NCMEC and other child safety organizations.” Apple says it will not add to this list of image hashes, and that the same list is used on every iPhone and iPad to prevent the targeting of individual users.

The company also said it would refuse requests from governments to add non-CSAM images to the list. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” it said.

It should be noted that, despite Apple’s assurances, the company has made concessions to governments in the past in order to keep operating in their countries. It sells iPhones without FaceTime in countries that do not allow encrypted phone calls, and in China it has removed thousands of apps from its App Store and moved to store user data on the servers of a state-run telecommunications company.

The FAQ also does not address some concerns about the feature that scans messages for sexually explicit content. The feature does not share any information with Apple or law enforcement, the company says, but it does not explain how it ensures that the tool focuses only on sexually explicit images.

“All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts,” the EFF wrote. The EFF also notes that machine learning technologies frequently misclassify this kind of content, and cites Tumblr’s attempt to crack down on sexual content as a prominent example of where the technology has gone wrong.
