Fake Facebook Accounts: The Endless Battle Against Bots




The staggering figure of more than three billion fake accounts blocked by Facebook over a six-month period highlights the challenge social networks face in curbing automated accounts, or bots, and other malicious attempts at platform manipulation.

Here are four key questions about fake accounts:

How did so many fake accounts arise?

Facebook said this week that it had "disabled" 1.2 billion fake accounts in the last three months of 2018 and 2.19 billion in the first quarter of 2019. Most fake social media accounts are "bots," created by automated programs to post certain types of content – a violation of Facebook's terms of use and part of an effort to manipulate social conversations.

Sophisticated players can create millions of accounts using the same program. Facebook said its artificial intelligence detects most of these efforts and disables the accounts before they can post on the platform. Nevertheless, the company acknowledges that about 5% of its more than two billion active accounts are likely to be fake.

What is wrong with fake accounts?

Fake accounts can be used to amplify the apparent popularity or unpopularity of a person or movement, distorting users' perception of genuine public sentiment. Researchers say bots played a disproportionate role in spreading misinformation on social media ahead of the 2016 US elections.

Malicious actors have used such fake accounts to sow distrust and social division in many parts of the world, sometimes sparking violence against groups or individuals.

Robots "do not just manipulate the conversation, they build groups and build bridges," said Kathleen Carley, a computer scientist at Carnegie Mellon University, who did research on social networking robots. in another group and, in doing so, they are building echo chambers ".

Facebook says its artificial intelligence tools can identify and block fake accounts as they are created – and therefore before they can publish false information. "These systems use a combination of signals, such as the repeated use of suspicious email addresses, suspicious actions, or other signals previously associated with other fake accounts we have removed," said Facebook vice president of analytics Alex Schultz.

Does Facebook have control of the situation?

Figures from Facebook's transparency report suggest the company is acting aggressively against fake accounts, said Onur Varol, a postdoctoral researcher at Northeastern University's Center for Complex Network Research.

"Three billion, that's a big number – it shows that they do not want to miss fake accounts, but they are willing to take the risk" of turning off some legitimate accounts, Varol said.

Legitimate users may be inconvenienced, he noted, but can generally restore their accounts. "My feeling is that Facebook is making serious efforts" to fight fake accounts, he added.

But newer bots are becoming more sophisticated and harder to detect because they can use language as well as humans can, according to Carley. "Facebook may have won yesterday's battle, but the nature of these things is changing so quickly that they may not be keeping up with the newest ones," she said.

Varol agreed, noting that "there are bots that understand natural language and can respond to people, and that's why it's important to continue research."

Should I worry about bots and fake accounts?

Many users do not know the difference between a real account and a fake one, according to researchers. Facebook and Twitter have stepped up efforts to identify and eliminate fake accounts, and public tools such as Botometer, developed by Varol and other researchers, can help estimate the likelihood that a Twitter account or follower is fake.

"If you use Facebook to communicate with your family and friends, do not worry much," said Filippo Menczer, a computer scientist who conducts research on social media at Indiana University. "If you use it to access news and share it with friends, you have to be careful."

Menczer said many Facebook users pay little attention to the source of content and risk sharing false or misleading information. "Everyone thinks they cannot be manipulated, but we are all vulnerable," he said.

Researchers say humans, along with bots, are a key link in the misinformation chain. "Most of the false information does not come from bots," Carley said. "Most of it comes from blogs, and bots rebroadcast it" to amplify the misinformation.

Facebook chief Mark Zuckerberg said the company is seeking to eliminate the financial incentives behind fake accounts. "Much of the harmful content we see, including misinformation, is in fact commercially motivated," Zuckerberg told reporters. "One of the best tactics is therefore to remove the incentives to create fake accounts upstream, which limits the content created downstream." – AFP
