For months, Google has tried to stay out of the growing backlash against the tech industry, but yesterday the dam finally broke with the announcement of a bug in the rarely used Google+ network, which exposed the private information of as many as 500,000 users. Google discovered and fixed the bug in March, around the same time the Cambridge Analytica story was gaining momentum. But now that the news is out, the damage is already spreading. The consumer version of Google+ is shutting down, privacy regulators in Germany and the United States are already considering legal action, and former SEC officials are publicly speculating about Google's missteps.
The vulnerability itself seems to have been relatively limited. The heart of the problem was a specific developer API that could be used to access nonpublic profile information. Crucially, there is no indication that it was ever actually used to collect private data, and given the small number of users, it is unclear how much nonpublic data there was to see. The API was theoretically accessible to anyone who asked for it, but only 432 people did (again, this is Google+), so it's plausible that none of them ever thought to use it this way.
Google's biggest problem is not the crime but the coverup. The vulnerability was fixed in March, but Google did not come clean until seven months later, when The Wall Street Journal got hold of some of the memos discussing the bug. The company seems to know it messed up (why else wipe an entire social network off the map?), but there is real confusion about what exactly went wrong and when, a confusion with deeper consequences for how the tech industry treats this kind of privacy lapse.
Part of the disconnect comes from the fact that, legally, Google is in the clear. There are plenty of laws about breach notification – most notably the GDPR, but also a patchwork of state-level bills – but by those standards, what happened to Google+ was not technically a breach. Those laws deal with unauthorized access to user information, codifying the basic idea that if someone steals your credit card or phone number, you have a right to know about it. But Google only found that the data was exposed to developers, not that any data was actually taken. With no clear evidence of stolen data, Google faced no legal reporting requirement. As far as the lawyers were concerned, it wasn't a breach, and it was enough to quietly fix the problem.
There are real arguments for not disclosing this kind of bug, even if they are less convincing in retrospect. All systems have vulnerabilities, so the only effective security strategy is to constantly look for and fix them. By that logic, the most secure software is the software that finds and fixes the most bugs, however counterintuitive that may seem from the outside. Requiring companies to publicly report every bug could create a perverse incentive, punishing the products that are doing the most to protect their users.
(Of course, Google has been publicly calling out other companies' bugs for years through Project Zero, which is one reason critics are so eager to jump on the apparent hypocrisy. But the Project Zero team would tell you that third-party reporting is a different dance entirely, with disclosure typically used as an incentive for patching and as a reward for bug hunters looking to build their reputations.)
That logic makes more sense for software bugs than for social networks and privacy lapses, but it is accepted wisdom in the cybersecurity world, and it is not a stretch to say it shaped Google's thinking in keeping the bug quiet.
But after Facebook's painful fall from grace, the legal and cybersecurity arguments seem almost beside the point. The contract between technology companies and their users is more fragile than ever, and stories like this stretch it even thinner. The concern is less about a breach of data than a breach of trust. Something went wrong, and Google did not tell anyone. Absent the Journal's report, it is not clear it ever would have. It is hard to avoid the uncomfortable, unanswerable question: what else aren't they telling us?
It is too early to tell whether Google will face a real backlash. The small number of users involved and the marginal importance of Google+ suggest it may not. But even if this particular vulnerability was minor, lapses like it pose a real threat to users and a real danger to the companies they trust. The confusion over what to call it – a bug, a breach, a vulnerability – masks a deeper confusion about what companies actually owe their users, when a privacy failure is significant enough to disclose, and how much control we actually have. Those are crucial questions for this era of tech, and if the past few days are any indication, they are questions the industry is still struggling to answer.