Twitch, the popular game-streaming site Amazon acquired in 2014, has been inundated in recent months with "hate raids," which flood channels' chat streams with vulgar and hateful speech. Racist slurs and bigoted references have been winning that fight for some time, but a leaked interface update suggests that Twitch may finally be taking meaningful action to rein in its toxic chat streams.
Streaming-industry reporter Zach Bussey shared a series of screenshots on Sunday, including an interface mockup apparently captured on Twitch's German-language site, which point to a new kind of user verification system coming to the chat service. As illustrated and described, this system would let Twitch users choose to verify their email address or phone number. (A form of email verification already exists, but currently Twitch users can use the same address to verify multiple accounts in bulk.)
The incentive to complete this process would come from individual channel moderators, who could restrict chat to users who have verified either (or both) of those credentials.
The leaked interface also indicates that a long-requested Twitch moderation feature is finally coming online: the ability to silence accounts based on how long they have existed. If your channel is assaulted by hundreds of newly created accounts, all run by an automated bot system designed to flood its chat, this teased system would block them with a rule such as "accounts must be more than one week old" (or even longer, if a host wishes).
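To make the combined effect of these rules concrete, here is a minimal Python sketch of how a channel-level gate like this might behave. The `Chatter` fields, policy names, and one-week threshold are illustrative assumptions, not Twitch's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Chatter:
    # Hypothetical account fields mirroring the leaked verification options.
    account_created: datetime
    email_verified: bool
    phone_verified: bool

# Channel-level policy a moderator might configure.
MIN_ACCOUNT_AGE = timedelta(weeks=1)
REQUIRE_VERIFIED_EMAIL = True
REQUIRE_VERIFIED_PHONE = False

def may_chat(user: Chatter, now: datetime | None = None) -> bool:
    """Return True if this account satisfies the channel's chat rules."""
    now = now or datetime.now(timezone.utc)
    if REQUIRE_VERIFIED_EMAIL and not user.email_verified:
        return False
    if REQUIRE_VERIFIED_PHONE and not user.phone_verified:
        return False
    # Silence accounts younger than the configured minimum age.
    return now - user.account_created >= MIN_ACCOUNT_AGE

# A two-day-old account is silenced even with a verified email.
rookie = Chatter(datetime.now(timezone.utc) - timedelta(days=2), True, False)
assert not may_chat(rookie)
```

The point of such a rule is economic: it forces a raider to create and age accounts a week in advance, rather than spinning up hundreds of disposable ones minutes before a raid.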
One month after #ADayOffTwitch
Without such systems in place, Twitch users have had to seek out unofficial moderation tools to push back against an increasingly aggressive network of hate raiders, some of whom organize on external platforms such as Discord. In a report published in late August, Washington Post reporter Nathan Grayson described much of the hate-raiding ecosystem. And a late-August report from The Mary Sue directly quotes some of the more hateful language and tactics used by hate raiders ahead of the September 1 #ADayOffTwitch, an effort led by affected streamers to draw wider attention to the platform's problems.
However, since community moderation tools rely on public information rather than Twitch's full control over the new-user pipeline, they are only so effective. Hate raids are typically generated with a mix of automated bot systems and Twitch's free, forgiving account-creation interface. (The latter still lacks any form of CAPTCHA authentication, making it a prime target for bot farming.) While Twitch includes built-in tools to block or report messages that trip a dictionary full of vulgar and hateful terms, many of the biggest hate raiders have built their own tools for combing through and defeating those dictionaries.
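For illustration, here is a toy version of the kind of dictionary filter described above. The word list and function names are invented stand-ins, not Twitch's actual system, but the example shows why exact-match filtering is trivial to evade:

```python
# A stand-in word list; a real system would carry thousands of entries.
BANNED = {"badword"}

def is_blocked(message: str) -> bool:
    """Naive dictionary check: block only exact token matches."""
    return any(token in BANNED for token in message.lower().split())

# Exact matches are caught...
assert is_blocked("badword here")
# ...but a single swapped character slips straight through.
assert not is_blocked("bädword here")
```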
These tools let hate raiders evade basic moderation by constructing words with non-Latin characters; they can generate thousands of facsimiles of notorious insults by mixing and matching characters that appear quite close to the original word. The hatred and bigotry are amplified by context, which can turn arguably innocent words into targeted insults depending on the marginalized group being addressed. Twitch has since made extensive updates to its dictionary-based moderation systems, which, among other things, look for streams of non-Latin characters. But these, too, have proved insufficient for some affected Twitch hosts.
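One common counter-technique, and plausibly part of what Twitch's updated systems do, is to normalize Unicode text and remap visually confusable characters to their Latin lookalikes before checking the word list. The sketch below is an assumption; the tiny confusables table is illustrative, not Twitch's actual implementation:

```python
import unicodedata

# Illustrative subset of a confusables table (real tables are much larger).
CONFUSABLES = str.maketrans({
    "а": "a",  # Cyrillic a
    "е": "e",  # Cyrillic e
    "о": "o",  # Cyrillic o
    "0": "o",  # digit zero
    "1": "l",  # digit one
})

def normalize(message: str) -> str:
    """Fold lookalike characters down to plain Latin before matching."""
    # NFKD splits accented characters into base letter + combining marks...
    decomposed = unicodedata.normalize("NFKD", message.lower())
    # ...which we drop, then remap the remaining lookalike characters.
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.translate(CONFUSABLES)

def is_blocked(message: str, banned: set[str]) -> bool:
    return any(token in banned for token in normalize(message).split())

# The homoglyph variant that beat the naive filter is now caught.
assert is_blocked("bädw0rd", {"badword"})
```

Even this approach is a moving target: raiders can mix in characters outside the table faster than the table grows, which is consistent with hosts reporting that the updated systems remain insufficient.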
Earlier this month, Twitch sued two users it identified as repeat hate raiders. Yet while lawsuits may reach the creators behind thousands of accounts, the game of hate-speech whack-a-mole that followed has left affected users scrambling to find tools and systems that can fend off a flood of toxicity. And the channels involved, usually hosted by smaller streamers with dozens or hundreds of viewers who hope to make a living from subscriber support, have few better options to turn to. In the West, neither YouTube Gaming nor Facebook Gaming offers significantly more robust automatic moderation tools, and neither enjoys an audience close to Twitch's massive numbers. The latter is a sticking point for any host hoping to organically grow an audience while flagging their channel with tags such as "LGBTQIA+" or "African American."
When Ars Technica reached out with questions about Bussey's report and any other built-in tools the network might deploy for creators facing hate mobs, a Twitch representative declined to comment.