Facebook has not been able to solve its content moderation problems




Can Facebook hope to properly monitor its more than two billion users?

Earlier this year, when Facebook founder Mark Zuckerberg appeared before the US Congress and later the European Parliament to answer questions, politicians expressed the same basic concern.

Violent, abusive, intimidating, hateful, sexually explicit or otherwise unsettling messages and videos are regularly found on the vast social media site. What was the company doing to solve this problem?

Each time, Zuckerberg retreated behind a well-honed, heartfelt-sounding answer: the company was rapidly increasing the number of its human moderators, and it also had big plans to incorporate artificial intelligence tools to do the job faster.

After Facebook's representatives appeared here before the Oireachtas, the same answer was given to Senator Tim Lombard, who had asked what was being done to counter hate speech.

In its written response, Facebook reported that it now had 7,500 moderators – up from 4,500 last year – including contractors and vendor partners. Moderators work 24 hours a day, seven days a week, and aim to review reports within 24 hours.

Facebook added: "Many abuses may not be reported, which is why we are exploring the use of artificial intelligence."

Arbitrary evaluation

But Facebook, like other major social platforms such as Twitter, seems to apply an alarmingly arbitrary assessment of which material should be removed, even when many people report the same offensive post.

Expose a female nipple on Facebook in a picture of a perfectly natural breastfeeding mother, and the content is removed.

Yet Facebook moderators have been told by their Dublin trainers to leave up dreadful examples of explicit violence and racist messages, images and videos, as revealed by an undercover investigation by Channel 4's Dispatches program.

In covertly recorded training sessions at one of Facebook's Dublin content-moderation contractors, CPL Resources, instructors tell trainees to leave up posts that would surely violate hate speech laws in many countries.

A cartoon of a mother drowning her young white daughter in a bathtub, captioned "When your daughter's first crush is a little negro", was deemed perfectly fine. Well, maybe if you like to wear white sheets and burn crosses, but for the rest of us, it most certainly is not.

Similarly, a derisive post about Muslims is deemed acceptable because, says the moderator in an incomprehensible piece of reasoning, "They are still Muslims but immigrants, which makes them less protected".

Ah, yes. The old second-class-citizenship treatment, as practiced against Irish emigrants in the United States and the United Kingdom for decades. We all know how the Irish were treated as immigrant rubbish.

Extremists

It seems that Facebook even has policies that allow extremist individuals and organizations to benefit from special protections (unlike Muslim immigrants, for example).

Especially horrible is a video showing a man brutally beating a toddler, which, according to the program, Facebook now uses in training as an example of content to leave up but label "disturbing" (so that people have to click through to see it).

Facebook and CPL both have serious questions to answer

While Facebook expresses concern about these "mistakes" (there are always so many mistakes, recited in serial excuses over the past decade), the problem cannot be laid at the door of a third-party contractor. Not when Facebook itself has allowed the toddler-beating video to stay online.

And not when Facebook's own moderators ruled that the comment "children should be burned" did not violate its community standards – a comment the Taoiseach condemned. The Government's truncated online child safety plan, meanwhile, provides no sanctions against online platforms.

Sisyphean task

Let's face it: even with 100,000 moderators, Facebook and other large social platforms like Twitter or YouTube will not be able to effectively moderate their vast communities. First of all, the companies can be disturbingly poor at setting standards and guidelines – as in the case of the toddler-beating video – or, as with CPL, leave too much to the (bizarre) interpretation of individual instructors or moderators.

Second, the task is Sisyphean, given the size of these platforms and the volume of posts.

Should the free-for-all structure of these platforms be reconsidered?

As many technology experts were quick to point out after Zuckerberg's testimony, artificial intelligence will not be a panacea either, and will exacerbate Facebook's existing problems with inappropriate censorship.

Should the free-for-all structure of these platforms now be reconsidered? Society – and governments – have passively accepted the platforms' argument that their barely moderated design is a given and that solutions can only be add-ons to an untouchable format.

Yet a real-world free-for-all in an Irish city – say, an open bazaar where children could be harassed by adults and racist threats and intimidation were tolerated – would be shut down with the full force of the law.

So why do we continue to favor digital worlds, and the companies that run them?
