Facebook to suppress false messages inciting violence, starting with Sri Lanka and Myanmar




MENLO PARK, UNITED STATES – Facebook announced on Wednesday (July 18) that it would begin to remove misinformation that could trigger violence, a response to growing criticism that the flood of rumors on its platform has led to physical harm to people in some countries. The new policy is a shift in Facebook's broader approach to misinformation, which until now has focused on reducing the visibility of problematic content on the platform without removing it, the Wall Street Journal reported.

The company has also been confronted with more questions about the role of the platform as a vector of false information that can fuel social tensions.

A spokeswoman for Facebook said the company would implement the new policy in Sri Lanka and Myanmar. People and groups have used Facebook to spread rumors that eventually lead to physical violence, the Wall Street Journal reported.

According to Facebook, misinformation removed in Sri Lanka under the new policy included content falsely claiming that Muslims were poisoning food given or sold to Buddhists.

"There were cases of misinformation that did not violate our distinct community standards, but that contributed to physical violence in countries around the world," said Tessa Lyons, a product manager at Facebook, citing Sri Lanka and Myanmar specifically.

Facebook has been criticized for circulating rumors and obviously false information that may have contributed to violence. Many have come to see the platform as a channel for spreading false information in recent years.

The social media giant has put in place a series of changes to combat the use of its network to spread misinformation and lies that incite violence, particularly around elections.

The new policy raises questions that company officials said it was too early to answer, including who its partners will be and what the criteria for becoming one are.

It is also unclear how these partners will determine whether content, such as photos that have been tampered with or created and shared to destabilize real-world situations, is false or could lead to violence. Nor is it clear how Facebook would ensure that these organizations remain independent or relatively free of political bias.

Lyons said that Facebook was in the early stages of developing these policies and had no details to share publicly. In an interview with the WSJ, she said that Facebook would rely on the judgment of outside organizations because they have "local context and local expertise."

Facebook has relied on third-party organizations to help it navigate other thorny issues in the past. In December 2016, as it came under increasing pressure over misinformation proliferating on the platform during the US elections, Facebook announced that it would team up with fact-checking organizations based in the United States to help it determine which claims are true and which are false. If enough of these organizations say a claim is false, Facebook lowers the ranking of the corresponding posts.
