Apple clearly thought it was onto a winner with its child sexual abuse material (CSAM) detection system and, more than likely, expected the usual gushing applause it has grown accustomed to. It's not hard to imagine that Cupertino believes it has solved the intractable CSAM problem in a way that works best for itself and its users.
Apple claims its system is more private because, unlike almost everyone else in the industry, it does not actively scan or monitor photos once they are uploaded to its servers. But over recent weeks, it has looked more and more as though Apple has built a Rube Goldberg machine in order to differentiate itself.
The consequences of this one-sided approach are far-reaching and will impact everyone, not just those in Apple’s walled garden.
Governments have been pushing big tech for some time to create decryption capabilities. One way to reach a compromise is to have an encrypted system but prevent users from encrypting their own backups, thereby allowing some visibility into the content; another is to have an end-to-end encrypted system and inspect the content as it is decrypted on the user's device for viewing.
While the rest of the industry has opted for the former, Apple has set its course towards the latter.
This shift arrived just as Australia put forward its draft set of rules defining how its Online Safety Act will work.
“If the service uses encryption, the service provider will take reasonable steps to develop and implement processes to detect and address material or activity on the service that is or may be illegal or harmful,” the draft states.
See also: Apple to tune CSAM system to keep one-in-a-trillion false positive deactivation threshold
Canada is going even further with a similar proposal. Its iteration demands proactive monitoring of content relating to CSAM, terrorism, incitement to violence, hate speech, and non-consensual image sharing, as well as the creation of a new digital safety commissioner role to assess whether the AI used is sufficient, according to University of Ottawa law professor Dr Michael Geist.
If that became law, online communications services in Canada would also have 24 hours to make a decision on a piece of harmful content.
How this potential law interacts with Apple’s decision to set a threshold of 30 CSAM images before injecting humans into the process and inspecting content metadata will be something to watch out for in the future.
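For readers who want a concrete picture of what that threshold gate means in practice, here is a minimal, illustrative sketch in Swift. It is not Apple's implementation: the SafetyVoucher and AccountReviewGate types and the simple boolean match flag are assumptions standing in for the cryptographic safety vouchers and threshold secret sharing Apple has described publicly. It shows only the gating idea, namely that nothing is escalated for human review until an account's match count reaches the threshold.

```swift
// Illustrative sketch only -- not Apple's actual system. Types and names here are
// hypothetical stand-ins for the publicly described "safety voucher" mechanism.

struct SafetyVoucher {
    let imageID: String
    let matchedKnownHash: Bool   // result of the on-device hash comparison (assumed)
}

struct AccountReviewGate {
    let threshold: Int           // e.g. 30, per Apple's public statements

    // True only once the number of matching vouchers meets the threshold,
    // at which point human review of the flagged content would begin.
    func shouldEscalateForHumanReview(vouchers: [SafetyVoucher]) -> Bool {
        let matches = vouchers.filter { $0.matchedKnownHash }.count
        return matches >= threshold
    }
}

// Usage sketch: 29 matches stay below a threshold of 30, so nothing is escalated.
let gate = AccountReviewGate(threshold: 30)
let vouchers = (0..<29).map { SafetyVoucher(imageID: "img-\($0)", matchedKnownHash: true) }
print(gate.shouldEscalateForHumanReview(vouchers: vouchers))  // false
```

A 24-hour takedown clock of the kind Canada is proposing sits awkwardly against a design where nothing is even visible to a human until that counter trips.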
While the Canadian proposal has been described as a collection of the worst ideas from around the world, countries such as India, the UK, and Germany are also pushing ahead with internet regulation.
Apple has said its CSAM system will launch only in the United States when iOS 15, iPadOS 15, watchOS 8, and macOS Monterey arrive, which means it could be argued that Apple will be able to sidestep regulation from other Western countries.
But not so fast. Apple head of privacy Erik Neuenschwander said in a recent interview that the hash list used to identify CSAM will be built into the operating system.
“We have a global operating system,” he said.
Even though Apple has consistently said its policies are designed to prevent overreach, use by corrupt regimes, or wrongful suspensions, it is not clear how Apple will answer one very important question: what happens when Apple receives a court order that runs against its policies?
There is no doubt that lawmakers outside the US will take a dim view if the kind of system they want already exists on Apple devices but is only switched on in the United States.
“We follow the law wherever we do business,” said Tim Cook in 2017 after the company removed VPN apps from its Chinese app store.
Follow the law: Citizen Lab finds Apple's China censorship process expands to Hong Kong and Taiwan
While there are plenty of valid concerns and questions about Apple's system itself, the consequences of such a system existing at all are of even greater concern.
For years, Apple rejected requests from US authorities to help unlock the phones of those suspected of involvement in mass shootings. Responding to FBI requests in 2016, Cook wrote a letter to customers rejecting the suggestion that unlocking one phone would be the end of the matter, warning that the technique could be used over and over again.
“In the wrong hands, this software – which does not exist today – would have the potential to unlock any iPhone in someone’s physical possession,” said the CEO.
The key to Apple's argument was the words between the dashes. Now, in August 2021, while that exact capability still does not exist, an on-device scanning capability is set to appear across its devices, and that is reason enough to be concerned.
“Apple has unilaterally chosen to enroll its users in a global mass-surveillance experiment, has seemingly underestimated the potential costs this could have on people who are not involved in making or storing CSAM content, and has outsourced these costs onto a user base of over 1 billion people worldwide,” wrote Christopher Parsons, senior research associate at Citizen Lab.
“These are not the activities of a company that has reflected significantly on the weight of its actions, but rather those of a company that is willing to sacrifice its users without adequately balancing their privacy and security needs.”
For the sake of argument, let's give Apple a pass on all of its claims: perhaps the biggest of the tech giants can withstand legislative pressure, and the system remains fixated solely on CSAM in the United States. Even then, it will require eternal vigilance from Apple and privacy advocates to ensure it stays that way.
The bigger problem is the rest of the industry. The slippery slope does exist, and Apple has taken the first step down it. Maybe it is wearing boots with ice grips and has tied itself to a tree to make sure it cannot slide any further, but few others will bother.
As a result, on-device scanning has suddenly become a lot less repugnant: if a company as big as Apple can do it, promote it on privacy grounds, and keep selling squillions of devices, then surely it must be acceptable to users.
Following that logic, shady companies that want to ship data off to their own servers now potentially have a blueprint designed for them by Apple. It's not user data, you see; they're merely safety vouchers. What might previously have been considered a form of exfiltration can now be framed as protecting users, complying with government orders, and making the world a safer place.
Those that follow in Apple's footsteps are unlikely to care as much about user privacy, or to possess the technical expertise and resources, the ability to withstand court orders, or even the good intentions Cupertino appears to have.
Even if Apple abandoned its plans tomorrow, it would be too late; the genie is out of the bottle. Should Apple change its mind, critics and those keen on an on-device approach will simply say it bowed to pressure from the extreme ends of the privacy debate.
Businesses will soon be competing over who can dig deeper into devices, bragging about the number of users caught and how that makes them a safer choice than the alternatives. What will undoubtedly go unmentioned is the number of mistakes made, the edge cases that never get properly addressed, and the angst caused to some of the people who actually pay for the devices. It's not going to be pretty.
Apple does not seem to understand that it has transformed its users' relationship with its products from one of ownership into one that is potentially adversarial.
If your device scans your content, uploads the results somewhere, and you cannot turn it off, then who is the real owner? That's a question we will have to answer soon, especially since client-side scanning isn't going away.
ZDNET'S MONDAY MORNING OPENER
The Monday Morning Opener is our opening salvo for the week in tech. Because we operate a global site, this editorial is posted on Mondays at 8:00 a.m. AEST in Sydney, Australia, which is 6:00 p.m. EST on Sundays in the United States. It is written by a member of ZDNet’s global editorial board, which is made up of our senior editors in Asia, Australia, Europe and North America.