YouTube under fire for recommending kids videos with inappropriate comments – TechCrunch

More than a year after a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform's recommendation algorithms to redirect a search for "bikini haul" videos of adult women towards clips of scantily clad minors engaged in body-contorting gymnastics, taking ice baths or sucking ice lollies as part of a "challenge."

A YouTube creator named Matt Watson flagged the issue in a critical Reddit post, saying he had found dozens of videos of children where YouTube users were trading inappropriate comments and timestamps below the fold, and calling out the company for failing to prevent what he describes as a "soft-core pedophilia ring" from operating in plain sight on its platform.

He also posted a YouTube video demonstrating how the platform's recommendation algorithm pushes users into what he calls a pedophilia "wormhole," accusing the company of facilitating and monetizing the sexual exploitation of children.

We were able to easily replicate the YouTube algorithm behavior Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video titled "sweet sixteen pool party."

Clicking on that video, YouTube's sidebar served up multiple videos of prepubescent girls in its "up next" section, where the algorithm queues related content to encourage users to keep clicking.

The videos it recommended to us in this sidebar included thumbnails of girls striking gymnastics poses, showing off their "morning routines," or licking ice lollies and popsicles.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of minors.

We also found many examples of timestamps and inappropriate comments on videos of children that YouTube's algorithm recommended to us.

Some comments from other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

In November 2017, several major advertisers froze their spending on YouTube's platform after a BBC and Times investigation found similarly obscene comments on videos of children.

Earlier that same month, YouTube was also criticized over low-quality content targeting children as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of children and that videos found to have inappropriate comments about the children in them would have comments disabled altogether.

Some of the videos of young girls that YouTube recommended already had comments disabled, which suggests its AI had previously identified large numbers of inappropriate comments being shared (per its policy of disabling comments on clips containing children when those comments are deemed "inappropriate") – yet the videos themselves were still being suggested for viewing in a test search that began with the phrase "bikini haul."

Watson also said he found ads being displayed on some of the videos of children containing inappropriate comments, and claimed to have found links to child pornography being shared in YouTube comments.

We were unable to verify these results in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted by investigative journalism.

The company sent us the following statement in response to our questions:

Any content – including comments – that endangers minors is abhorrent and we have clear policies prohibiting it on YouTube. We enforce these policies aggressively, reporting such content to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. Strict policies govern where we allow ads to appear, and we enforce them vigorously. When we find content that violates our policies, we immediately stop serving ads or remove it altogether.

A YouTube spokesperson also told us the company was reviewing its policies in light of what Watson had highlighted, adding that it was in the process of reviewing the specific videos and comments featured in his video, and that some content had since been taken down as a result of the post.

However, the spokesperson pointed out that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Of course, the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesperson added that YouTube works with the National Center for Missing and Exploited Children to report users who make inappropriate comments about children to law enforcement.

In a broader discussion of the issue, the spokesperson told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front, he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is about 400 hours per minute, he added.

There is still a massive asymmetry between content moderation and the volume of uploads on user-generated content platforms, with AI ill-suited to plug the gap given its persistently weak understanding of context, even as platforms' human moderation teams remain desperately under-resourced and outmatched by the scale of the task.

Another key point YouTube glossed over is the obvious tension between advertising-based business models that monetize content based on viewer engagement (such as its own) and content safety concerns, which require close scrutiny of both the substance of the content and the context in which it is consumed.

This is certainly not the first time YouTube's recommendation algorithms have been called out for negative consequences. In recent years, the platform has been accused of automating radicalization by pushing viewers toward extremist and even terrorist content – which led YouTube to announce another policy change in 2017 regarding how it handles content created by known extremists.

The broader societal impact of algorithmic suggestions that fuel conspiracy theories and/or promote bogus, anti-factual health or scientific content has also been raised repeatedly, including with regard to YouTube.

And just last month, YouTube announced that it would reduce recommendations of "borderline content" and content that "could misinform users in harmful ways," citing examples such as videos promoting a fake miracle cure for a serious illness, claiming the earth is flat, or making "blatantly false claims" about historical events such as the Sept. 11 terrorist attack in New York.

"While this change affects less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community," it wrote at the time. "As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users."

YouTube said the change to algorithmic recommendations for conspiracy videos would be gradual and would initially only affect recommendations for a small set of videos in the United States.

It also noted that implementing the change to its recommendation engine would involve both machine learning technology and human evaluators and experts helping to train the AI systems.

"Over time, as our systems become more accurate, we will apply this change to more countries. It's an extra step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the YouTube recommendations experience, "adds the text.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption.

Political pressure may be one motivating force, with momentum building for regulation of online platforms – including calls for internet companies to face clear legal liabilities and even a legal duty of care towards users for the content they distribute and monetize.

In the UK, for example, regulators have made legislating on internet and social media safety a policy priority – with the government due to publish a white paper this winter setting out its plans for regulating platforms.
