Ahead of the midterm elections, Facebook is fighting false information aimed at voters. Natasha Abellard of Veuer has the story.
Buzz60
As voters cast their ballots in the midterm elections, social media companies have another concern: protecting the process.
After reports of fake accounts and fake news infiltrating social networks during the 2016 presidential election, companies such as Facebook and Twitter have redoubled their efforts to prevent election manipulation.
The issue is not just the validity of the information found on their platforms, but also the trust of their users.
In January 2017, reports revealed that foreign entities such as the Russia-based Internet Research Agency had used social media platforms to spread false and divisive information throughout the 2016 campaign. In September 2017, Facebook announced that more than 3,000 political ads run on its platform between 2015 and 2017 were linked to Russia. Facebook later said that more than 10 million users had been exposed to the ads.
In September, Facebook and Twitter executives testified before Congress about charges that foreign agents' use of their platforms could have affected the outcome of the presidential election.
Facebook and Twitter spokespeople said that after the 2016 election, the companies stepped up efforts to identify and remove fake accounts and to protect users from false information.
Yoel Roth, Twitter's head of site integrity, said the company had cracked down on "coordinated platform manipulation": people and organizations using Twitter to mislead other users and spread false information.
More: Twitter, Lyft, Bumble and Tinder: How Technology and Social Media Companies Can Change Elections This Year
More: Twitter Deletes More Users During Purge Period, But Revenues and Profits Exceed Expectations
More: "Fake Social", "Fake Search" are the new "false news" as Trump attacks technology before mid-term
During the 2016 campaign, misinformation spread online through fake accounts and online publications pushing hyper-partisan views. Ahead of the midterms, experts say the techniques are similar, but the people spreading misinformation have become smarter.
Social networks have too.
"We have not seen any fundamental change in what the (bad actors) are doing, but in 2016 it was as if they were walking into a house with the door wide open, and now there is at least one dog that's going to bark," said Bret Schafer, a social media analyst at the Alliance for Securing Democracy, a bipartisan national security group.
Schafer said social media companies' efforts to protect their platforms and users have created a "layer of friction" that makes misinformation campaigns harder to carry out. Those efforts include cracking down on bad actors who use fake accounts to spread misinformation and forcing political advertisers to verify their identity by providing a legitimate mailing address.
Facebook has developed a multifaceted approach to election integrity. The company nearly doubled its security team ahead of the 2018 midterms and is playing a more proactive role in identifying "coordinated inauthentic behavior," according to spokeswoman Brandi Hoffine Barr.
"We now have more than 20,000 people working on safety and security, we have put in place advanced systems and tools to detect and stop threats, and developed backstops … to help address anything unexpected as quickly as possible," Hoffine Barr said.
Most of the company's efforts begin with detecting and removing fake accounts. In May, Facebook announced it had disabled nearly 1.3 billion fake accounts in the first half of 2018. Because these accounts are often the source of false information on the site, Facebook said that removing them helps fight the spread of fake news.
Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform's rules against spam and coordinated inauthentic behavior, which include creating large networks of accounts to deceive other users. On Facebook, that can look like people or organizations creating fake pages or fake accounts.
More: Critics say Facebook wants to fight against electoral manipulation
More: We read each of the 3,517 Facebook ads bought by the Russians. Here is what we found
Hoffine Barr described Facebook's work as "an ongoing effort" and emphasized that the company does not operate in a vacuum.
"Ahead of the midterm elections, we are working closely with federal and state election officials, as well as other technology companies, to coordinate and share information," she said.
Two weeks before the midterms, Facebook revealed an Iran-led disinformation campaign that sought to sow discord around divisive topics such as President Donald Trump and immigration. There is currently no evidence that the campaign was linked to the Iranian government.
Twitter has also taken action against bad actors, recently suspending accounts the company had previously locked for "suspicious behavior changes." In a blog post, Twitter executives detailed three "critical" areas of the company's efforts to preserve election integrity.
The first, an update to the Twitter Rules, includes expanding what Twitter considers a fake account. The company uses a number of criteria to make that determination, including whether the profile uses stolen or copied photos and provides intentionally misleading profile information. The second category, described as "detection and protection," involves identifying spam accounts and improving Twitter's ability to ban users who break its rules.
The most visible efforts fall under "product developments." From giving users control over the order of their timelines to adding an election label to candidates' accounts, this category is essentially about helping Twitter users stay informed.
Roth said the company is also sharing information with researchers to learn more about violations of electoral integrity.
"Our goal is to try to stay ahead of emerging challenges," Roth said. "Protecting the public conversation is our main mission."
Because the social networking companies' efforts are relatively new, their effects can be difficult to measure. Facebook points to a recent study from Stanford University and New York University in which researchers found that user interactions with fake news sites declined by more than half on Facebook after the 2016 election.
Schafer said one of the big differences from 2016 is the decline in automated activity. He said Twitter in particular has become "much more aggressive" about shutting down bots.
He noted, however, that the social networks are in a "delicate situation" when it comes to regulating content. Too much regulation and they are criticized for suppressing viewpoints; too little and their own platform rules go unenforced.
"Unless we want them to actively regulate content and make informed decisions about what's factual or not, you have to accept that a number of 'bad activities' will occur on the platform," Schafer said.
And as these companies go after purveyors of disinformation, others are focusing on tools that help users separate fact from fiction.
On Oct. 2, the New York tech startup Our.News launched a browser extension for Google Chrome and Firefox called Newstrition, which gives users basic information about the publisher behind each article, along with third-party fact checks.
The tool was developed jointly with the Newseum and the Freedom Forum Institute, two nonprofit organizations dedicated to preserving the First Amendment.
More: Donald Trump calls for more civility as he attacks the media and Democrats at Charlotte's rally
Unlike traditional fact-checking tools that label articles as true or false, the Newstrition tool lets users view validated basic information about each article and make that determination themselves, said Richard Zack, CEO of Our.News.
"One of the things we discovered through our research and conversations with people … is that the public feels like they are not part of the (current) process," Zack said. "They feel they are not heard in many ways."
Newstrition also invites users to weigh in on news articles through a public rating system. Much like Amazon or Yelp reviews, individual responses are then aggregated to show "public consensus," Zack said.
"We are saying, 'We want to know your opinion. We want to know what you think. You are part of the process,'" he said.
The extension has already been downloaded thousands of times and has drawn attention from media companies. Zack said that although Our.News is not at liberty to discuss details, the company is in talks with several major news publishers about integrating the tool into their websites.
"If people do not know what to believe, it undermines the entire First Amendment," Zack said. "It undermines freedom of the press as an institution."
Social networks and experts say one thing is clear: misinformation campaigns are far from over. Schafer said that although the issue is especially timely in an election year, this activity goes on behind the scenes every day.
"It's not as if these accounts somehow appeared before the elections, then went back into hibernation to return in 2020," he said. "They are working every day, eroding citizens' trust in democracies and democratic institutions or simply inflaming partisan debates."
More: These are the liberal memes used by Iran to target Americans on Facebook
More: Facebook foils campaigns of political influence originating from Iran and Russia before the United States at mid-term
Read or share this story: https://www.usatoday.com/story/news/politics/elections/2018/11/03/facebook-twitter-elections-interference/1806308002/