This is how Facebook’s “prison” works: the controversial sanctions system that keeps drawing criticism online.




The Facebook Oversight Board, which in May ruled on former President Donald Trump’s access to his profile on the social network, has brought to light a controversial apparatus involving external moderators, artificial intelligence, algorithms and, in many cases, a lack of justification for sanctions.

Since it began operating in October, the Oversight Board has received more than 220,000 appeals from users and issued just eight decisions, six of which overturned Facebook’s initial ruling.

John Taylor, the board’s spokesman, maintains that the intention was never “to offer the best versions of a particular topic of the day.” The board’s goal is “to rule on the most difficult content decisions the company faces.”

In recent years, Facebook has introduced many new rules, usually in response to specific complaints from lawmakers and various interest groups, designed to protect users and to guide the legion of external workers who handle content moderation.

Facebook’s algorithms can restrict the visibility of questionable posts, showing them to fewer users. Photo: REUTERS.

These internal directives supplement the Community Standards the company publishes on its website, and they are not made public.

In documents reviewed by the Wall Street Journal (WSJ), moderators were instructed to remove, for example, statements such as “If you vote by mail, you will have Covid!”.

Around the US elections, which Joe Biden won, moderators were asked to remove posts such as: “My friends and I will do our own poll tracking to make sure only the right people vote.”

Curiously, moderators were told to allow another, at least equally controversial, statement that encouraged people not to vote: “I heard people are disrupting the process of going to the polls today. I’m not going to my polling place.”

A spokesperson for the social network told the WSJ that the differences are subtle but “distinct enough that we mention them, and it is why we have this detailed documentation for our content reviewers.”

Inside the prison

Facebook reviews two million pieces of content per day. By Zuckerberg’s own account, the company makes the wrong call in more than 10% of cases, which means that around 200,000 decisions could be wrong by the end of each working day.

Users who break the rules can spend time in what many call “Facebook Prison”, losing the ability to comment and post for anywhere from 24 hours to 30 days or, in more serious cases, losing their accounts indefinitely.

Usually a Facebook user has to rack up several strikes before facing a ban, but the platform does not say how many. “We do not want people to play with the system”, the company has argued.

When content is removed or users are blocked, the social network usually issues a notification telling them that they have violated the “Community standards”.

The notification usually indicates which broad category was violated. Quite apart from the Oversight Board’s decisions, Facebook restores thousands of pieces of previously removed content every quarter following user appeals.

Recently, Facebook has turned more to automation to help guide its decisions, relying on artificial intelligence and algorithms both to remove content and to rule on user appeals.

Many blocked users are unaware of the reasons that led to the restriction of their Facebook account. / Shutterstock

A New York University research paper published in mid-2020 called the company’s approach to content moderation “extremely inappropriate” and urged Facebook to stop outsourcing most of the work and to double its total number of moderators.

“That’s what you get when you develop a system as large as this,” says Olivier Sylvain, a law professor at Fordham University who has studied Facebook and content moderation in general.

“I don’t think it’s unhealthy or wrong for us to ask ourselves whether the benefits that come from such great service outweigh the confusion and the harm,” he added.

Barksdale, a history teacher in Newcastle, Oklahoma, has been banned from his Facebook page several times without ever receiving a convincing explanation from the company.

He was once barred from posting and commenting for three days after sharing a World War II photo of Nazi officials in front of the Eiffel Tower, along with a brief description, as part of a history discussion. He was given a 30-day ban for trying to explain the term “pseudoscience” to one of his followers.

In another episode, which cost him a seven-day restriction on the platform, Barksdale clicked a button to appeal; Facebook refused and extended his ban to 30 days, claiming that six of his previous posts violated the company’s Community Standards.

Barksdale says he tries to follow Facebook’s Community Standards but does not understand how he has committed so many violations.

The Oversight Board upheld the suspension of former US President Donald Trump in May. Photo: AFP

The social network’s defense is that users can usually find their violations listed in the “Support Inbox” attached to their profile.

Facebook’s Community Standards, its public rules, have expanded in recent years to include six main categories and 27 subcategories, ranging from “violent and graphic content” to “fake news”.

Facebook’s hate speech policy prohibits direct attacks against people based on race, religion and other demographic traits. However, it does allow such language when the words are used to raise awareness or “in an empowering manner”, according to the Community Standards.

“But we do require people to make their intention clear. If the intention is not clear, we may remove the content,” the company noted.

Internally, Facebook operates from a much more specific and complicated set of guidelines, a document of over 10,000 words entitled “Implementation Standards”, which its more than 15,000 content moderators rely on to make decisions.

There are even more documents, known internally as “Operational Guidelines” and “Known Issues”, which further explain the company’s rationale for its rules.

In most cases, when content is flagged, whether by a user or by Facebook’s algorithms, the post, photo or video is reviewed by moderators, whose job is to enforce the rules Facebook has developed.

In one of the documents the WSJ had access to, the company prohibits the use of “degrading physical descriptions”, which it defines as “qualifying an individual’s appearance as ugly, disgusting or repulsive”. As an example, it gives: “It’s disgusting and repulsive how fat and ugly John Smith is.”

Many users are unfamiliar with the internal documents that help moderators interpret the public rules. By contrast, Alphabet, Google’s parent company, has published the 175-page set of rules that its 10,000 “search-quality assessors” use to evaluate search results.

Unlike Facebook, Google has revealed a document with its testers’ rules for moderating search results. Photo: DPA.

“We want people to understand where we draw the line in different kinds of speech, and we want to invite discussion about our policies, but we don’t want to muddy the waters by inundating people with too much information,” says the Facebook spokesperson.

In several decisions this year, the Oversight Board urged Facebook to make its Community Standards clearer, bringing them closer to the specificity of its internal documents.

Facebook numbers

In recent years, the social network has relied more on artificial intelligence to flag problematic content, according to company sources. In May 2020, the company touted its use of AI to remove content related to the coronavirus pandemic.

Facebook removed 6.3 million pieces of content in the “bullying and harassment” category in the fourth quarter of 2020, up from 3.5 million in the third quarter, in part due to “increased automation capabilities,” the company said in its quarterly community standards enforcement report.

Users appealed around 443,000 of those removals, and Facebook restored about a third of them, according to company data.

Fewer items were removed in other categories than in the previous quarter, and enforcement can be affected by many external factors, such as viral posts that inflate the numbers.

Facebook increasingly takes enforcement actions that are not disclosed to users, hoping to avoid disputes over its decisions.

Algorithms “bury” questionable posts, showing them to fewer users and quietly restricting the reach of accounts suspected of misconduct, rather than removing the content or blocking them from the platform altogether.

