Alphabet-built Chrome extension is designed to eliminate toxic comments




Alphabet incubator Jigsaw has released an experimental extension for Chrome intended to make browsing the web a little more pleasant by freeing it from toxic comments. The extension, called Tune, lets you choose how much polite or aggressive commentary you see: "Zen mode" turns comments off entirely, while "volume" levels ranging from "calm" to "screaming" let through different amounts of toxicity (attacks, insults, profanity, and so on).
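
One rough way to picture these volume levels is as thresholds on a per-comment toxicity score. The sketch below is purely illustrative, not Tune's actual code; the numeric thresholds and the "loud" middle setting are invented for the example.

```typescript
// Purely illustrative sketch (not Tune's actual code): each "volume" level
// maps to a toxicity-score threshold, and comments scoring above the
// threshold are hidden. Scores follow Perspective's 0..1 convention.

type Volume = "zen" | "calm" | "loud" | "screaming";

// "loud" and the numeric thresholds are invented for illustration.
const THRESHOLDS: Record<Exclude<Volume, "zen">, number> = {
  calm: 0.3,
  loud: 0.7,
  screaming: 1.0, // show everything
};

function shouldHide(toxicityScore: number, volume: Volume): boolean {
  if (volume === "zen") return true; // Zen mode hides all comments
  return toxicityScore > THRESHOLDS[volume];
}

// Example: a comment scored 0.8 is hidden on "calm" but shown on "screaming".
console.log(shouldHide(0.8, "calm"));      // true
console.log(shouldHide(0.8, "screaming")); // false
```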

The open-source extension uses machine learning to estimate how likely a comment is to be perceived as toxic. It relies on Perspective, an API created in 2017 by Jigsaw and Google's Counter Abuse Technology team and used by news organizations, including The New York Times and The Guardian, to experiment with online moderation. Here is an example of how Perspective ranks comments by toxicity:



Image: Perspective
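
For developers who want to experiment with the same kind of scoring, Perspective exposes its model through a public REST endpoint (commentanalyzer.googleapis.com). The following is a minimal sketch of a request, assuming you have a Perspective API key; the interface and function names are my own, not part of the API.

```typescript
// Hedged sketch of a call to the Perspective API's comments:analyze endpoint.
// The request/response shape follows Perspective's public documentation;
// the API key and sample comment are placeholders.

interface AnalyzeResponse {
  attributeScores: {
    TOXICITY: { summaryScore: { value: number } };
  };
}

async function scoreToxicity(text: string, apiKey: string): Promise<number> {
  const url =
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=" +
    apiKey;
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      languages: ["en"],
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = (await response.json()) as AnalyzeResponse;
  // summaryScore.value is the probability (0..1) that readers would
  // perceive the comment as toxic.
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// Example usage (requires a Perspective API key):
// scoreToxicity("You're an idiot.", "YOUR_API_KEY").then(console.log);
```

The returned value is a probability between 0 and 1 that readers would find the comment toxic, which is the kind of score a Tune-style volume threshold would be compared against.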

Jigsaw is careful to note that Tune is an ongoing experiment and that it may flag toxic comments inaccurately. The extension "is not meant to be a solution for direct targets of harassment (for whom seeing direct threats may be vital to their safety), nor is Tune a solution for all toxicity," reads the extension's description. Rather, it is meant to show users how machine learning can be used to improve online discussions.

So far, the extension works with comments on YouTube, Reddit, Facebook, Twitter, and Disqus. Filtered comments appear in white with a dot, which can be clicked to reveal the original comment. Using it on my own Twitter feed set to its lowest, "calm" setting, it flagged a "devastating Beto attack" video, which is actually a parody, as toxic. But can you really blame the AI? Machine learning has a long way to go before it can detect layers of irony, and this may be an example of how hiding the things we do not want to see can do more harm than good, or simply make us miss a funny video.

