Apple plans to scan all of your images and report people to the police?




Apple announced this week that it will start rolling out new child safety features. These features will arrive later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. Apple says the program is ambitious and that protecting children is an important responsibility.
In this video, iCave Dave describes the new child safety features that will start appearing later this year with iOS 15. Dave explains in detail how the new features will work and how well Apple is handling such a sensitive issue. There are three new ways Apple will aim to protect children online.

Safety in Messages

The Messages features will not be enabled by default on all devices; they will need to be turned on for children’s devices that are set up as part of a family group on your Apple devices. Here’s what Apple has to say about the child protection features coming to the Messages app as part of iOS 15:

The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos. When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured that it is okay if they do not want to view the photo. As an added precaution, the child can also be told that, to make sure they are safe, their parents will receive a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and parents can receive a message if the child chooses to send it.
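
To make the described flow easier to follow, here is a small, purely hypothetical sketch of the decision logic as Apple describes it. None of these types or function names exist in Apple's SDK; they are invented for illustration only, and the sketch simply mirrors the steps quoted above: blur first, warn the child, and involve parents only if the child chooses to view (or send) the photo anyway.

```swift
import Foundation

// Hypothetical model of the Messages safety flow described above.
// These types are illustrative only and are not Apple's API.

enum ChildAction {
    case declinedToView   // child taps away; nothing else happens
    case viewedAnyway     // child confirms they want to see the photo
}

struct SafetySettings {
    let enabledForChildAccount: Bool   // opt-in per child device in the family group
    let notifyParentsOnView: Bool      // the extra precaution mentioned in Apple's text
}

func handleIncomingExplicitPhoto(settings: SafetySettings,
                                 childAction: ChildAction) -> [String] {
    guard settings.enabledForChildAccount else {
        return ["Deliver photo normally"]          // feature is off by default
    }
    var steps = ["Blur the photo", "Warn the child and show helpful resources"]
    switch childAction {
    case .declinedToView:
        steps.append("Do nothing further")
    case .viewedAnyway:
        steps.append("Show the photo")
        if settings.notifyParentsOnView {
            steps.append("Send a message to the parents")
        }
    }
    return steps
}

// Example: an enabled child account where the child views the photo anyway.
let settings = SafetySettings(enabledForChildAccount: true, notifyParentsOnView: true)
print(handleIncomingExplicitPhoto(settings: settings, childAction: .viewedAnyway))
```
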

New guidelines in Siri and Search

Siri warnings will also be put in place if a user attempts to search for child sexual abuse material (CSAM). Here’s how Apple says these features will work:

Apple is also expanding the guidance in Siri and Search by providing additional resources to help kids and parents stay safe online and get help in dangerous situations. For example, users who ask Siri how they can report CSAM or child exploitation will be directed to resources on where and how to file a report.

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and will provide resources from partners to get help with this issue.

I think these features seem like a great way to help protect kids online.

CSAM detection

Finally, the most controversial feature Apple announced involves scanning all images on the device before they are uploaded to your iCloud Photos account. The images are still encrypted, so Apple still cannot see them. Images will simply be reported if their hashes match the hashes of known material in the National Center for Missing and Exploited Children (NCMEC) database. Here’s what Apple has to say about this feature:

New technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will allow Apple to report these cases to the National Center for Missing and Exploited Children (NCMEC).

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is stored securely on users’ devices.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. It does this while providing significant privacy benefits over existing techniques, since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about the images that match known CSAM.
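
As a rough illustration of the basic idea of on-device matching against a database of known hashes, here is a deliberately simplified sketch. It is not Apple's implementation: the quoted text describes a perceptual hashing scheme and an "unreadable set of hashes" with further cryptographic protections, none of which is reproduced here. SHA-256 and the function names below are stand-ins chosen purely for illustration.

```swift
import Foundation
import CryptoKit

// Deliberately simplified, hypothetical sketch of on-device matching
// against a database of known image hashes. SHA-256 stands in for the
// perceptual hashing and cryptographic protections Apple describes.

/// Stand-in "fingerprint" for an image. A real perceptual hash would give
/// similar values for visually similar images; SHA-256 does not.
func imageFingerprint(_ imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Hypothetical on-device check: the photo itself never leaves the device
/// in this sketch; only a match/no-match result is produced.
func matchesKnownDatabase(_ imageData: Data, knownHashes: Set<String>) -> Bool {
    knownHashes.contains(imageFingerprint(imageData))
}

// Example usage with placeholder data.
let knownHashes: Set<String> = []        // the hashed database shipped to devices
let photo = Data([0x01, 0x02, 0x03])     // placeholder image bytes
if matchesKnownDatabase(photo, knownHashes: knownHashes) {
    print("Match: this image corresponds to a known entry")
} else {
    print("No match: nothing about this image is reported")
}
```

The point the sketch tries to capture is that the comparison happens on the device and only a yes/no result exists there. As the quoted text notes, the real system also turns the database into an unreadable set of hashes and only surfaces anything to Apple when a collection of matches exists; none of that machinery is captured by this sketch.
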

Concerns about this technology

It would be difficult for anyone to fault Apple for making changes to protect children online and for reporting CSAM images. I completely agree with iCave Dave on how these types of images and content of that nature should be handled. It appears that Apple is handling the protection of children in a thoughtful and appropriate way.

Personally, I am inclined to agree with some of the criticisms of image scanning technology and the precedent it sets. We would all agree that producing and sharing CSAM images is just plain wrong. The problem with scanning images and reporting users is where the line should be drawn. Should images of drug use be reported? Some would say they absolutely should. What about terrorism? Would that be defined by the government of each territory? In the West we are probably fine, but other parts of the world might have different definitions of “terrorist”. Who would decide what should be reported and to whom it should be reported?

I think we can all agree that the types of images discussed in this video and specifically mentioned by Apple are bad, that those who produce them should be reported, and that the world would be a better place if these types of images were not produced or shared. I have yet to see anyone argue in defense of CSAM images. However, I believe there is a discussion to be had around any future use of this technology. What about countries where homosexuality is illegal? Is it possible that images of consenting adults doing something a government does not approve of could be flagged and reported? It may seem like an unlikely possibility, but with the precedent this technology creates, it is a possibility.

Could governments with questionable ethics one day pressure Apple into flagging whatever images they dictate as a condition of continuing to sell iPhones in those countries? I think, with Apple’s current focus on customers and their privacy, it is unlikely to become an issue anytime soon.

Google and Facebook have been scanning uploaded images for this type of content for several years. Apple will now do it on the device itself. Does this contradict Apple’s previous statement that “privacy is a human right”?

A cynic might say that this technology is being introduced under the banner of protecting children because that is a subject with which it is very difficult for anyone to disagree.

What do you think of Apple scanning users’ images? Are tech critics overreacting? Should a service provider be able to check everything stored on its servers? How would you feel if Adobe started scanning images in Creative Cloud or your Lightroom library for specific types of images?

Let me know in the comments, but remember to be polite, even if you don’t agree with someone’s point of view.


