Twitter's purge solves nothing




I lost 120 followers on Twitter overnight. President Donald Trump lost 340,000, the New York Times 732,000, former President Barack Obama some 3 million. Even Twitter CEO Jack Dorsey shed 200,000 followers in his company's highly publicized crackdown on suspicious accounts. But what looks like a major purge is really more of a public relations offensive, as Twitter and Facebook try to outdo each other in showing how much they care about the health of conversation on their networks.

Twitter's legal and policy chief, Vijaya Gadde, explained the purge in a blog post, noting that most of the accounts targeted by the company were not bots. They were mostly set up by real people, she wrote, "but we cannot confirm that the person who opened the account still has control and access." To confirm this, Twitter asks the supposed owners to solve a captcha or change their password. Accounts whose owners fail to do so are "locked"; after a month, they stop counting toward Twitter's total user numbers. Now, they no longer count toward follower numbers, either.

The interesting part is how Twitter determines that something is wrong with an account. According to Gadde's post, the trigger is usually a sudden change in an account's behavior: It could abruptly start tweeting "a lot of unsolicited replies or mentions" or posting "misleading links." The same behavior in a new account also sets off an alarm: Twitter's algorithms flag the account as potentially "spammy" or "automated" and challenge its owner, for example by asking for confirmation of a phone number.

Twitter reports a significant increase in the number of accounts challenged in this manner – from just over 2.5 million in September to 10 million in May. Given that Twitter had 336 million monthly active users in the first quarter of 2018, that looks like a lot – but only until one looks at Facebook's recent report documenting similar activity.

In May, Facebook announced it had removed 583 million fake accounts in the first quarter, compared with 694 million in the fourth quarter of 2017. That is equal to about 27 percent of Facebook's monthly active users in the first quarter. But of course Facebook has not decimated its user base – that would have sent its stock price tumbling. The company explained that it had killed the fake accounts as malicious actors tried to register them. The idea is that Facebook's user base is not inflated – it contains only 3 to 4 percent fake accounts – but it would have been swamped with fakes had it not been for the algorithms that, according to the company, caught 98.5 percent of the counterfeits.

Facebook's criteria for spotting fake accounts are similar to Twitter's: repeated posting of the same content, sudden spikes in the volume of messages sent and other activity patterns. Both Twitter and Facebook also have systems meant to stop the automated registration of accounts.

The problem is that at the scale at which social networks operate, even a very high detection rate still lets millions of fake accounts through each month. Of the 583 million fake Facebook accounts deleted in the first three months of this year, the algorithms spotted 98.5 percent. That means users reported the rest – 8.7 million accounts. Facebook has no idea how many cases went unreported. In a paper published in 2017, a team of Canadian researchers showed that account-creation requests from a botnet on Facebook-owned Instagram succeeded 18 percent of the time. Detection technology may work better now, but there is still no way for the social networks to know exactly how well they are doing in this game of cops and robbers. In any case, the market for fake followers and fake engagement is still thriving, as a simple search will reveal to anyone interested.
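For perspective, the arithmetic behind those figures is simple. Here is a minimal sketch in Python using only the numbers cited above; the 18 percent botnet success rate is applied to a purely hypothetical volume of sign-up attempts for illustration:

    # Figures cited in the column; the uniform-rate assumption is mine.
    removed = 583_000_000        # fake accounts Facebook removed in Q1 2018
    algo_share = 0.985           # share caught algorithmically, per Facebook

    user_reported = removed * (1 - algo_share)
    print(f"User-reported fakes: {user_reported:,.0f}")  # ~8,745,000

    # Hypothetical: if a botnet gets through 18% of the time (the 2017
    # Instagram study's rate) and attempts 10 million registrations...
    attempts = 10_000_000        # assumed volume, not a reported figure
    print(f"Bot accounts created: {attempts * 0.18:,.0f}")  # 1,800,000

Even under these rough assumptions, a 98.5 percent catch rate still translates into millions of fakes slipping through or lingering until users complain.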

The automatic detection of false accounts or misappropriations of accounts is a flourishing academic field because social media companies are willing to devote significant resources to this work – and even to do it manually where the algorithms fail. Facebook, for example, admits that its technology is better at detecting nudity than hate speech, which is reported algorithmically in only 38 percent of cases before users report it. It is better to spend a lot on detection than to deal with public outrage and regulatory scrutiny as a result of false news and election scandals.

Of course, no police force can prevent or punish 100 percent of crimes. Social networks are increasingly publicizing their policing efforts so that users, and society at large, start thinking about them along these lines: They do what they can, but bad things cannot be prevented entirely.

This, however, is a false framing. Technically, nothing prevents Twitter and Facebook from setting up an identification procedure that would make automated registration impossible – but they do not do it. Twitter has begun requiring a phone number or email confirmation at sign-up, but both are easily automated. Proper identification would not necessarily mean a government-issued ID; the resources devoted to detection could be redirected to identification technology. That, however, could invite fresh attacks on the social networks for collecting too much data about their users.

As they try to navigate between spam and fake news on the one hand and privacy concerns on the other, the network companies can only step up the public relations activity around their efforts to fight fakery. In the process, they do their best not to dent the user numbers that their investors follow religiously. Does this approach improve the health of conversation on the social networks? My answer, so far, is a resounding no. Your experience may be different. To find out, say something combative on Twitter and see what happens.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story:
Leonid Bershidsky at [email protected]

To contact the editor responsible for this story:
Jonathan Landman at [email protected]
