A year ago, Google announced that it would stop automatically scanning and analyzing the text of your Gmail messages to target you with advertisements. The move was widely hailed as a victory for online privacy.
It's perhaps a surprise, then, to see fresh headlines this week about Google allowing people to read the emails of Gmail users. On Monday, the Wall Street Journal published an investigative article titled "Tech's 'Dirty Secret': The App Developers Sifting Through Your Gmail." It explains how several third-party companies have obtained permission from Gmail users to scan their inboxes, and in some cases have even allowed human employees to read people's messages.
The explanation is not that Google has backtracked on its privacy practices. It is that the public and the media are starting to raise the bar in the wake of Facebook's Cambridge Analytica scandal, re-evaluating, along the way, our relationship with some of the world's largest internet companies. And that's a very good thing.
For more than a decade, Google used software to "read" people's Gmail messages and show them ads related to their personal communications. While Google insisted that its human employees did not literally read people's mail, many still considered the practice invasive and creepy. Google's critics (and rivals) held it up as an example of how the company's business model required violating its users' privacy.
The public no longer turns a blind eye when large technology companies let third parties at their users' sensitive information.
That changed in June 2017. The company's paid Gmail service, part of its G Suite cloud computing software for businesses, was growing quickly, and it came without email scanning or targeted advertisements. To further fuel that growth, Google chose to burnish its privacy reputation by making the free, mainstream version of Gmail less invasive as well. Google said its computers would stop scanning people's messages for ad-targeting purposes. But they continued to scan them for other purposes, such as filtering spam and malware, personalizing search results, and suggesting "smart replies" to emails. In May, following Facebook's Cambridge Analytica fiasco, NBC News reported in detail all the ways Google still collected users' personal data, including their Gmail messages.
This week, the Journal highlighted another, arguably more troubling, form of Gmail data collection that the NBC News report did not mention. It's the email monitoring that Google allows external developers to perform on users' inboxes, provided they get permission from those users. From the Journal's story:
The internet giant continues to allow hundreds of external developers to scan the inboxes of millions of Gmail users who sign up for email-based services offering price comparisons, automated travel itineraries, or other tools. Google does little to police these developers, who train their computers (and, in some cases, employees) to read their users' emails, a review by The Wall Street Journal found.

Apps that want this access must tell users what type of data they will collect, and users must accept before they can start using them. To Google's credit, this permissions pop-up is concise and written in plain English, unlike the long, legalistic privacy policies that accompany most online services. Yet the free internet has trained many of us to simply accept whatever terms are required to install or launch an app once we've decided we want it. And as the Journal pointed out, the permissions many users granted to Gmail app developers have proven more extensive than users could reasonably expect. In some cases, they included access for relatively obscure third parties the user might never have heard of.
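The gap between what an app's stated purpose requires and what its permission grant actually allows can be made concrete. Below is a minimal sketch, not Google's actual review tooling, of comparing the OAuth scopes an app requests against the narrowest scope its purpose needs. The scope URLs are real Gmail API scopes; the breadth ordering, the helper function, and the example app are illustrative assumptions.

```python
# Real Gmail API OAuth scopes, ordered here (an assumption for this
# sketch) from narrowest to broadest access.
GMAIL_SCOPES_BY_BREADTH = [
    "https://www.googleapis.com/auth/gmail.labels",    # manage labels only
    "https://www.googleapis.com/auth/gmail.metadata",  # headers, no message bodies
    "https://www.googleapis.com/auth/gmail.readonly",  # read all messages
    "https://www.googleapis.com/auth/gmail.modify",    # read and modify messages
    "https://mail.google.com/",                        # full mailbox access
]

def broader_than_needed(requested: list[str], minimal: str) -> list[str]:
    """Return the requested scopes that grant more access than `minimal`."""
    rank = {scope: i for i, scope in enumerate(GMAIL_SCOPES_BY_BREADTH)}
    return [s for s in requested if rank.get(s, -1) > rank[minimal]]

# A hypothetical price-tracking app arguably only needs to *read*
# receipts, yet requests full mailbox access.
requested = ["https://mail.google.com/"]
excess = broader_than_needed(
    requested, "https://www.googleapis.com/auth/gmail.readonly"
)
print(excess)  # the full-access scope exceeds the app's stated purpose
```

A consent screen shows the scope's human-readable description, but as the article notes, few users weigh whether the breadth of the grant matches what the app actually does.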
For example, an email management and analytics company called Return Path appears to have accessed the inboxes of some 2 million Gmail users. It did not do this by asking them directly, but through a network of 163 "partner" applications. These apps ask users for permission to monitor their email in exchange for a free service, and then, in turn, grant Return Path access to those users' inboxes.
One example is an app called Earny, which promises to save you money by scanning your inbox for purchases of items whose prices have since dropped, so you can claim a refund. The Journal reports that Earny does not just scan your email itself; it also partners with Return Path, allowing Return Path to collect and process your emails. Return Path then uses this data to inform its business customers, such as Overstock.com, about your email-reading behavior, so they can better target their marketing emails. The paper reports that Return Path has also allowed some of its human employees to read people's emails, to better train its filtering algorithms.
Earny says its privacy policy clearly states that third parties such as Return Path may monitor your email under their own privacy policies. But it is hard to imagine that Gmail users who signed up for Earny really understood exactly what would happen to their data and how many companies could then use it for their own purposes. Even a Google representative I spoke with acknowledged that she did not understand exactly how Earny's relationship with Return Path worked or how it complied with Google's privacy policies.
So far, there is no indication that Earny, Return Path, or anyone else has abused their access to users' data in a damaging way, as a third-party Facebook app developer did by handing users' personal information to the political firm Cambridge Analytica without their consent. That's good news for Google, because without clear evidence of harm, it's unlikely that Google CEO Sundar Pichai, or Alphabet CEO Larry Page, will find himself in front of Congress like Facebook's Mark Zuckerberg.
However, there are parallels between the two sets of revelations. Both Facebook and Google, in their push to turn their products into "platforms" on which third parties could build applications, were willing to give up control of users' sensitive personal data. So far it seems Google has been more careful about this than Facebook was, but not careful enough by today's higher standards.
It should be noted that Gmail is not the only email service that allows app developers and data miners to scan users' emails. Microsoft and Oath, the Verizon subsidiary that includes the remains of AOL and Yahoo, appear to offer various forms of email access to third parties. Oath, in particular, has touted its ability to mine user messages for marketing data.
Google made clear that it was taking the Journal's story seriously, posting an article on its Keyword blog to explain and defend its privacy practices. To guard against misuse of Gmail user data by third-party apps, Google says it has implemented a multi-step review process that includes automated and manual reviews of the developer, a review of the app's privacy policy, and a review of its home page. (The Journal's story, however, quoted at least one app developer saying he had never seen any evidence of such human review.)
Google's blog post also highlighted a feature called Security Checkup that periodically prompts users to review their privacy settings, including the access they have granted to Gmail app developers. You can run Security Checkup now to see which apps you have allowed to scan your inbox.
When I did, I found that I had given permission to Apple's OS X operating system, along with Apple's iOS Mail and Apple's Calendar app for Mac, which I have no problem with. But I also saw that I had given my Gmail keys to a third-party calendar app called CalenMob, which I have not used for years. I quickly revoked that access.
Here is another parallel between the Gmail story and Cambridge Analytica: In both cases, the story revolved around a long-standing industry practice that most users, and the media, had tacitly accepted. Yes, we granted access to our sensitive data left and right, but that was a price many of us were willing to pay for the "free" internet. We implicitly trusted not only the likes of Facebook and Google, but also the tiny, obscure startups and individual app developers who had access to our social media profiles and personal mailboxes.
In retrospect, it's hard to imagine what we were all thinking. But surely part of it was that Silicon Valley's big tech companies were widely viewed by the public and mainstream media as benign forces in society, improving our lives inexpensively through the magic of software. And those big tech companies always encouraged us not to think too hard about the trade-offs; to sign terms-of-service agreements we had no time to read, let alone understand; to sacrifice our archaic notions of privacy on the altar of big data.
Google has not announced any changes in response to the Journal's story. It should.
This is why Facebook's attempts to pin the Cambridge Analytica scandal on a single rogue app developer rang so hollow. It was not a man named Aleksandr Kogan whom Facebook users were trusting when they signed up for a seemingly harmless quiz app called "This Is Your Digital Life"; it was Facebook.
Google at least seems to understand that it, and not just Earny or Return Path, is responsible for ensuring that Gmail users' data is not misused. A few years ago, Google's policies, which require apps to get explicit permission from users in plain language, and which involve reviewing developers' credentials and ensuring that their permission requests are consistent with their stated purpose, would have been considered sufficient, even exemplary.
But after Cambridge Analytica, the implementation of Europe's strict new privacy regulation, and growing disillusionment with Silicon Valley and the free internet, Google's modest efforts are no longer enough. The public no longer looks the other way when large technology companies let shady third parties at their users' sensitive information. We saw this in the skeptical responses from Congress and the UK Parliament to Facebook's apologies for the Cambridge Analytica debacle. We saw it last month when major US mobile carriers agreed to stop sharing their users' real-time location data with certain third-party data aggregators. And we see it now in the scrutiny of Gmail's privacy practices.
So far, Google has not announced any changes or crackdowns in response to the Journal's story, a spokesperson said. But it should: the arrangement by which Return Path obtains and packages Gmail users' information for business customers is not something that users of the world's largest email service should accept. Nor should we accept similar arrangements from Microsoft or Oath.
There are real trade-offs to holding web platforms to higher privacy standards. Giants such as Facebook and Google consolidate their control of user data, while the freewheeling app ecosystems that helped launch so many startups fade. The backlash against human employees occasionally reading people's emails, in particular, seems potentially misguided. According to the Journal, Return Path allowed some of its employees to read a subset of Gmail users' messages for a limited time, for the specific purpose of training its algorithms to better filter personal emails out of its data collection systems. If we are going to rely on machine-learning algorithms, humans need to be able to train those algorithms to handle our data properly.
That said, the balance is swinging in the right direction. With the US government demonstrating neither the will nor the ability to protect privacy online (even as representatives of both parties agree it is a problem), the media, the public, and nonprofit organizations have to pressure technology companies in other ways. And that is just what they are doing, finally.