Facebook now uses AI to sort content for faster moderation




Facebook has always made it clear that it wants artificial intelligence to handle more moderation tasks on its platforms. Today, it announced its latest step toward that goal: putting machine learning in charge of its moderation queue.

Here’s how moderation works on Facebook. Posts deemed to violate company rules (which cover everything from spam to hate speech and content that “glorifies violence”) are flagged, either by users or by machine learning filters. Some very clear-cut cases are handled automatically (responses might involve removing a post or blocking an account, for example), while the rest are queued for review by human moderators.
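To make that split concrete, here is a minimal sketch in Python of what a flag-and-route step like this could look like, assuming a single confidence score decides whether a case is “very clear.” The names, threshold, and structure are illustrative, not Facebook’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    post_id: str
    source: str            # "user_report" or "ml_filter"
    violation_prob: float  # model's confidence the post breaks a rule

review_queue: list[Flag] = []  # posts awaiting human moderators

def handle_flag(flag: Flag) -> str:
    # Hypothetical cutoff: only unambiguous cases are actioned automatically.
    if flag.violation_prob >= 0.99:
        return f"auto-action: remove {flag.post_id}"
    review_queue.append(flag)
    return f"queued {flag.post_id} for human review"

print(handle_flag(Flag("p1", "ml_filter", 0.999)))   # clear-cut: handled automatically
print(handle_flag(Flag("p2", "user_report", 0.60)))  # ambiguous: goes to a human
```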

Facebook employs around 15,000 of these moderators worldwide, and has been criticized in the past for not supporting these workers enough, employing them under conditions that can lead to trauma. Their job is to sort through flagged posts and decide whether or not they violate the company’s various policies.

In the past, moderators would review posts more or less chronologically, processing them in the order in which they were reported. Now Facebook says it wants to make sure the most important posts are seen first and is using machine learning to help. Going forward, an amalgamation of various machine learning algorithms will be used to sort this queue, prioritizing posts based on three criteria: virality, severity, and likelihood of breaking the rules.

The old Facebook moderation system, combining proactive moderation through ML filters and responsive reporting from Facebook users.
Image: Facebook

The new moderation workflow, which now uses machine learning to sort the queue of posts for review by human moderators.
Image: Facebook

The exact weighting of these criteria is unclear, but Facebook says the goal is to deal with the most damaging posts first. So the more viral a post is (the more it is shared and seen), the faster it will be dealt with. The same is true of a post’s severity: Facebook says it ranks posts involving real-world harm as the most important, which could mean content involving terrorism, child exploitation, or self-harm. Posts like spam, which are annoying but not traumatic, are ranked as least important for review.
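In other words, the review queue behaves like a priority queue keyed on a combined score. Below is a minimal sketch of that idea in Python, assuming a simple weighted sum of the three criteria; the real weighting and the scoring models behind each criterion are not public:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    neg_score: float  # heapq is a min-heap, so negate to pop the highest score first
    post_id: str = field(compare=False)

def priority(virality: float, severity: float, violation_prob: float,
             weights: tuple = (1.0, 1.0, 1.0)) -> float:
    # Hypothetical combination: Facebook has not disclosed how the criteria are weighted.
    w_v, w_s, w_p = weights
    return w_v * virality + w_s * severity + w_p * violation_prob

queue: list = []

def enqueue(post_id: str, virality: float, severity: float, violation_prob: float) -> None:
    heapq.heappush(queue, QueuedPost(-priority(virality, severity, violation_prob), post_id))

def next_for_review() -> str:
    return heapq.heappop(queue).post_id

# A severe, viral post jumps ahead of spam that was reported earlier.
enqueue("spam_post", virality=0.2, severity=0.1, violation_prob=0.9)
enqueue("self_harm_post", virality=0.7, severity=1.0, violation_prob=0.8)
print(next_for_review())  # -> "self_harm_post"
```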

“All content violations will still be subject to substantial human review, but we will use this system to better prioritize [that process],” Ryan Barnes, a product manager with Facebook’s community integrity team, told reporters at a press briefing.

Facebook has shared some details in the past on how its machine learning filters analyze posts. These systems include a model called “WPIE,” which stands for “whole post integrity embeddings” and takes what Facebook calls a “holistic” approach to assessing content.

This means the algorithms judge the various elements of a given post together, trying to determine what the image, caption, poster, and so on reveal in combination. If someone says they are selling a “whole lot” of “special treats” alongside a photo of what looks like baked goods, are they talking about Rice Krispies squares or edibles? The use of certain words in the caption (such as “potent”) might tip the judgment one way or the other.
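The general shape of such a holistic check can be sketched as follows: each signal is turned into a feature vector, and a single classifier judges the concatenation rather than each part alone. Everything here, from the toy embeddings to the linear classifier, is a hypothetical stand-in, not Facebook’s actual WPIE model:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Toy deterministic "embedding": a seeded random vector, not a real encoder.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def holistic_score(caption: str, image_label: str, poster_history: str) -> float:
    # Judge the parts together: concatenate all signals, then apply one
    # (toy) linear classifier instead of scoring each modality separately.
    features = np.concatenate([embed(caption), embed(image_label), embed(poster_history)])
    weights = embed("stand-in-for-learned-weights", dim=features.size)
    return float(1.0 / (1.0 + np.exp(-(features @ weights))))  # violation probability

# Each signal is ambiguous in isolation; scored jointly, a word like
# "potent" in the caption can tip the combined judgment.
print(holistic_score("selling a whole lot of special treats, very potent",
                     "baked goods", "two prior policy strikes"))
```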

Facebook uses various machine learning algorithms to sort content, including the “holistic” rating tool known as WPIE.
Image: Facebook

Facebook’s use of AI to moderate its platforms has come under intense scrutiny in the past, with critics noting that artificial intelligence lacks a human’s ability to judge the context of many online communications. Especially with topics like misinformation, bullying, and harassment, it can be nearly impossible for a computer to know what it’s looking at.

Facebook’s Chris Palow, a software engineer on the company’s interaction integrity team, agreed that AI has its limits, but told reporters that the technology can still play a role in removing unwanted content. “The system is all about marrying AI and human reviewers to make fewer total mistakes,” Palow said. “AI will never be perfect.”

When asked what percentage of posts the company’s machine learning systems classify incorrectly, Palow did not give a straightforward answer, but noted that Facebook only lets automated systems operate without human supervision when they are as accurate as human reviewers. “The bar for automated action is very high,” he said. Still, Facebook is steadily adding more AI to the moderation mix.
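One way to picture that bar, assuming precision is measured on a labeled validation set and compared against a human-reviewer baseline; all names and figures below are invented for illustration:

```python
HUMAN_PRECISION_BASELINE = 0.98  # hypothetical measured human-reviewer precision

def route(post_id: str, violation_prob: float, model_precision: float,
          action_threshold: float = 0.99) -> str:
    """Only auto-action a post when the model both clears the human baseline
    overall and is highly confident about this specific post."""
    if model_precision >= HUMAN_PRECISION_BASELINE and violation_prob >= action_threshold:
        return f"auto-remove {post_id} (no human in the loop)"
    return f"queue {post_id} for human review"

print(route("post_123", violation_prob=0.999, model_precision=0.995))  # auto-removed
print(route("post_456", violation_prob=0.80, model_precision=0.995))   # human review
```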

Correction: An earlier version of this article incorrectly named Chris Palow as Chris Parlow. We regret the error.
