Facebook will change its algorithm to demote "borderline content" that almost violates policies – TechCrunch




Facebook will modify its News Feed algorithm to demote content that comes close to violating its rules against misinformation, hate speech, violence, bullying and clickbait, so that it is seen by fewer people even if it is highly engaging. The change could massively reduce the reach of inflammatory political groups, fake news peddlers and more of the worst content on Facebook. It lets the company hide what it does not want on the network without having to take a hard, defensible position that the content actually breaks the rules.

In a 5,000-word letter published today, Mark Zuckerberg explained that there is a "basic incentive problem": "When left unchecked, people disproportionately engage with more sensationalist and provocative content. Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average – even when they tell us afterwards that they don't like the content."

Without intervention, engagement with borderline content looks like the graph above, increasing as the content gets closer to the policy line. So Facebook is intervening, artificially suppressing the News Feed distribution of this kind of content so that engagement instead looks like the graph below.
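To make the intervention concrete, here is a minimal, purely illustrative sketch in Python of how a ranking pipeline could apply such a penalty. The names (borderline_prob, demotion_multiplier, rank_score) and the linear penalty are assumptions for illustration only, not Facebook's actual implementation.

```python
# Minimal sketch (not Facebook's actual system) of demoting borderline content:
# a classifier scores how close a post is to the policy line, and posts near
# the line get their ranking score multiplied by a shrinking factor instead of
# rising along with engagement.

def demotion_multiplier(borderline_prob: float, strength: float = 0.8) -> float:
    """Return a factor in (0, 1] that shrinks as content nears the policy line.

    borderline_prob: hypothetical classifier output in [0, 1], where values
    near 1.0 mean the post almost (but does not) violate a policy.
    strength: how aggressively borderline content is suppressed.
    """
    return 1.0 - strength * borderline_prob

def rank_score(engagement_score: float, borderline_prob: float) -> float:
    """Combine predicted engagement with the demotion penalty."""
    return engagement_score * demotion_multiplier(borderline_prob)

# Example: a highly engaging post sitting right at the policy line ends up
# ranked below a moderately engaging post that is nowhere near it.
print(rank_score(engagement_score=0.9, borderline_prob=0.95))  # ~0.22
print(rank_score(engagement_score=0.6, borderline_prob=0.05))  # ~0.58
```

The point of the shape, as the graphs suggest, is that distribution falls rather than rises as content approaches the line, removing the incentive to make posts as provocative as the rules allow.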

[Update: While Zuckerberg refers to the change in the past tense in one instance, Facebook tells me that the demotion of borderline content is only in effect in limited cases. The company will continue to repurpose the AI technology it uses to proactively take down content that violates its policies so that it can also find and demote content that approaches the limits of those policies.]

Facebook will apply the penalty to borderline content not only in the News Feed but across all of its content, including Groups and Pages themselves, so that it does not radicalize people by recommending they join communities simply because those communities drive strong engagement by hugging the policy line. "Divisive groups and pages can still fuel polarization," Zuckerberg notes.

However, users who deliberately want to see borderline content will be given the option to opt in. "For those who want to make these decisions themselves, we think they should have that choice since this content does not violate our standards," Zuckerberg writes. For example, Facebook could create flexible standards for content types such as nudity, where cultural norms vary; some countries prohibit women from exposing much skin in photographs, while others allow nudity on network television. It may be a while before these opt-ins are available, though, since Zuckerberg says Facebook must first train its AI to reliably detect content that crosses the line or deliberately approaches it.

Facebook had previously changed the algorithm to demote clickbait. Starting in 2014, links that people clicked on but quickly abandoned without returning to Like the post on Facebook were downgraded. In 2016 it analyzed headlines for common clickbait phrases, and this year it banned clickbait rings for inauthentic behavior. Now it is giving the demotion treatment to other kinds of sensational content. That could mean posts featuring violence that stops short of showing physical injury, lewd images with barely covered genitals, or posts suggesting people should commit violence for a cause without directly telling them to.
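The bounce signal described above can be sketched roughly as follows; the event fields, the 10-second threshold and the clickbait_score heuristic are assumptions for illustration, not Facebook's actual code.

```python
# Illustrative sketch (hypothetical) of the behavioral signal described above:
# links that get clicked but are quickly abandoned without any follow-up
# engagement look like clickbait and can be flagged for down-ranking.

from dataclasses import dataclass

@dataclass
class ClickEvent:
    dwell_seconds: float   # time spent on the linked page after clicking
    liked_after: bool      # did the user come back and Like the post?

def clickbait_score(clicks: list[ClickEvent], short_dwell: float = 10.0) -> float:
    """Fraction of clicks that bounced quickly with no Like afterwards."""
    if not clicks:
        return 0.0
    bounces = sum(1 for c in clicks
                  if c.dwell_seconds < short_dwell and not c.liked_after)
    return bounces / len(clicks)

# A link where most readers bail out within seconds scores high and would be
# a candidate for demotion.
sample = [ClickEvent(3.2, False), ClickEvent(4.0, False), ClickEvent(45.0, True)]
print(clickbait_score(sample))  # ~0.67
```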

Facebook is likely to face criticism, especially from fringe political groups that rely on borderline content to build their bases and spread their messages. But with polarization and sensationalism plaguing and tearing at society, Facebook has settled on a policy under which free expression is defended, but users have no right to have that expression amplified.

Below is Zuckerberg's full written statement on borderline content:

One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.

[ Graph showing line with growing engagement leading up to the policy line, then blocked ]

Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average – even when they tell us afterwards that they don't like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below, where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.

[ Graph showing line declining engagement leading up to the policy line, then blocked ]

The process for adjusting this curve is similar to what I described above for proactively identifying harmful content, but is now focused on identifying borderline content instead. We train AI systems to detect borderline content so we can distribute that content less.

The category we're most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse – even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality. (I wrote about these approaches in more detail in my note on [Preparing for Elections].)

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like those with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same is true for posts that don't come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and the pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that it won't address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.

I believe these efforts on the underlying incentives in our systems are some of the most important work we're doing across the company. We've made significant progress in the last year, but we still have a lot of work ahead.

By fixing this incentive problem in our services, we believe it will create a virtuous cycle: by reducing sensationalism of all forms, we will create a healthier, less polarized discourse where more people feel safe participating.
