Bumble's Private Detector AI automatically detects and blurs lewd images




Bumble is launching a "Private Detector" feature that uses AI to automatically detect lewd images and warn users before they open them. Users can then decide whether to view, block, or report the image to moderators. The feature is part of a safety initiative by Bumble's co-founders, and starting in June it will also roll out to the Badoo, Chappy, and Lumen apps, all of which belong to the same dating group parent company.
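Bumble hasn't published implementation details, but the flow it describes (classify an incoming photo, blur it if flagged, warn the recipient, and let them choose what to do) can be sketched in a few lines. The Python below is a hypothetical illustration only: lewd_score stands in for the real classifier, and the 0.98 threshold is an arbitrary placeholder, not the same thing as the 98 percent accuracy figure the company cites.

from dataclasses import dataclass
from enum import Enum


class RecipientAction(Enum):
    VIEW = "view"
    BLOCK = "block"
    REPORT = "report"


@dataclass
class IncomingPhoto:
    sender_id: str
    image_bytes: bytes


def lewd_score(photo: IncomingPhoto) -> float:
    """Stand-in for the image classifier; a real system would run a
    trained model here. Returns a confidence in [0, 1]."""
    return 0.99  # stubbed for illustration


def deliver(photo: IncomingPhoto, threshold: float = 0.98) -> dict:
    """Gate delivery on the classifier score: flagged photos arrive
    blurred with a warning, and the recipient picks an action."""
    flagged = lewd_score(photo) >= threshold
    return {
        "blurred": flagged,
        "warning": "This image may contain inappropriate content." if flagged else None,
        "actions": [a.value for a in RecipientAction] if flagged
                   else [RecipientAction.VIEW.value],
    }


if __name__ == "__main__":
    print(deliver(IncomingPhoto(sender_id="james_23", image_bytes=b"")))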

As one of the few dating apps that allow sending photos in chat, Bumble has already put measures in place to protect users, blurring all images by default. Recipients must tap and hold a photo to view it, and it then displays with a watermark of the sender's profile image. The idea was that tying photos to the sender's profile would, hopefully, deter unwanted lewd images. As users have found, however, nothing prevents anyone from creating a fake profile; look no further than "James, 23" below.


Image: Bumble
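For illustration, the blur-by-default presentation described above (a blurred preview stamped with the sender's profile picture) could look something like this Pillow sketch. The file paths, blur radius, and 64-pixel watermark size are placeholders, not details of Bumble's actual pipeline.

from PIL import Image, ImageFilter  # pip install Pillow


def blur_with_watermark(photo_path: str, avatar_path: str, out_path: str) -> None:
    """Blur a photo and stamp the sender's profile image in the corner,
    mimicking the blur-by-default preview described above."""
    photo = Image.open(photo_path).convert("RGB")
    blurred = photo.filter(ImageFilter.GaussianBlur(radius=16))

    # Watermark: a small copy of the sender's profile picture, bottom-right.
    avatar = Image.open(avatar_path).convert("RGB").resize((64, 64))
    margin = 8
    blurred.paste(avatar, (blurred.width - 64 - margin, blurred.height - 64 - margin))

    blurred.save(out_path)


# Hypothetical usage; file names are placeholders.
# blur_with_watermark("incoming.jpg", "sender_avatar.jpg", "preview.jpg")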

From now on, lewd photo messages will at least be accompanied by a warning that the AI has detected potentially inappropriate content (with 98 percent accuracy, according to the company).

In addition to this new feature, Bumble CEO and co-founder Whitney Wolfe Herd is working with lawmakers in Texas to pass a bill that would make sharing unsolicited lewd images a crime punishable by a $500 fine. The bill was drafted by Republican state Rep. Morgan Meyer on the premise that, since it is illegal to expose oneself to someone on the street, it should be illegal to do the same online. "Something that is already a crime in the real world must also be a crime online," Meyer told NBC News.
