Facebook automatically generates videos on extremist profiles despite its promise to fight hate speech




The animated video begins with a photo of the black flags of jihad. Seconds later, it presents highlights of more than a year of posts on the social network: anti-Semitic verses, talk of retribution and a photo of two men carrying more jihadi flags as they burn the Stars and Stripes.

It was not produced by extremists; it was created by Facebook. In a clever bit of self-promotion, the social media giant takes a year of a user's content and automatically generates a celebratory video. In this case, the user called himself "Abdel-Rahim Moussa, the Caliphate."

"Thank you for being here, Facebook," concludes the video in a comic bubble before showing the famous "Like" of society.

Facebook likes to give the impression that it is staying ahead of extremists by taking down their posts, often before users ever see them. But a confidential whistleblower complaint to the Securities and Exchange Commission, obtained by The Associated Press, alleges that the company has exaggerated its success.

According to the complaint, over a five-month period last year, researchers monitored the pages of users who had affiliated themselves with groups the U.S. State Department has designated as terrorist organizations.

In that period, just 38% of posts displaying prominent symbols of extremist groups were removed. In its own review, the AP found that as of this month, much of the banned content cited in the study (an execution video, images of severed heads, propaganda honoring martyred militants) had slipped through the algorithmic net and remained easy to find on Facebook.

The complaint lands as Facebook tries to hold its ground amid growing criticism of its privacy practices and its ability to keep hate speech, live-streamed killings and suicides off its service.

In the face of the criticism, CEO Mark Zuckerberg has said he is proud of the company's ability to automatically remove violent posts through artificial intelligence. During an earnings call last month, for example, he repeated a carefully worded formulation that Facebook has been using.

"In areas like terrorism, for al-Qaeda and content related to ISIS, which now accounts for 99% of the content we remove in the category of our systems, is proactively branding before anyone sees it, "said the tycoon. it really looks good ".

But Zuckerberg did not offer an estimate of how much of the total prohibited material is actually removed. The research behind the SEC complaint, meanwhile, aims to highlight glaring flaws in the company's approach: last year, researchers began monitoring users who explicitly identified themselves as members of extremist groups, something that was not difficult to document.

Some of these people even list the extremist groups as their employers. One profile, heralded by the black flag of an al-Qaida-affiliated group, listed its employer as Facebook. The profile that included the automatically generated flag-burning video also contained a video of al-Qaida leader Ayman al-Zawahiri urging jihadist groups not to fight among themselves.

Although the study is far from comprehensive, in part because Facebook rarely makes much of its data publicly available, the researchers involved in the project say that the ease of identifying these profiles through basic keyword searches, and the fact that so few of them have been removed, suggest Facebook's claims that its systems catch most extremist content are not accurate.

"I mean, it's just stretch the imagination beyond disbelief, "he says Amr Al Azm, one of the researchers involved in the project. "If a small group of researchers can find hundreds of pages of content to help with simple searches, why not a giant company with all its resources?"

Al Azm, a professor of history and anthropology at Shawnee State University in Ohio, has also directed a group in Syria documenting the looting and smuggling of antiquities. Facebook concedes that its systems are not perfect but says it is making improvements.

In response to the AP's reporting, Rep. Bennie Thompson, D-Miss., chairman of the House Homeland Security Committee, expressed frustration that Facebook has made so little progress in blocking such content despite the assurances he has received from the company.

"This is another deeply troubling example of the The inability of Facebook to manage their own platforms, and the extent to which they need to clean up their performance, "he said. Facebook not only must rid their platforms of terrorist and extremist content, but also it is necessary to be able to avoid that it is amplified ".

In a stark indication of how easily users can evade Facebook, a page from a user called "Nawan al-Farancsa" has a header whose white lettering on a black background reads, in English, "The Islamic State." The banner is punctuated with a photo of a mushroom cloud rising over a city.

The profile should have caught the attention of Facebook, as well as of counterintelligence agencies. It was created in June 2018 and lists the user as coming from Chechnya, once a militant hotspot. It says he lived in Heidelberg, Germany, and studied at a university in Indonesia. Some of the user's friends also posted militant content.

The page, still up in recent days, apparently escaped Facebook's systems through an obvious and long-running evasion of the kind of moderation Facebook should be expert at recognizing: the letters were not searchable text but were embedded in a graphic block. Yet the company says its technology scans audio, video and text, including when it is embedded, for images that reflect violence, weapons or the logos of banned groups.

Facebook says it now employs 30,000 people who work on its safety and security practices, reviewing potentially harmful material and anything else that may not belong on the site. Still, the company is relying heavily on artificial intelligence and on its systems' ability to eventually weed out the bad material without human help. The new research suggests that goal remains a long way off, and some critics allege the company is not making a sincere effort.

When the material is not removed, it is treated the same as anything else posted by Facebook's 2.4 billion users: celebrated in animated videos, linked, categorized and recommended by algorithms.
