Instagram deploys a feature to minimize hateful comments




Instagram is deploying a feature that will notify users when their comments might contain harmful content before others see them. (Photo: Chandan Khanna/AFP/Getty Images)

Instagram is deploying a feature that will make users think twice before posting hateful comments, with the goal of minimizing cyberbullying on the social media platform.

The new feature uses artificial intelligence to detect when a comment may be harmful or offensive and warns the user before it goes live. Users will see a message: "Are you sure you want to post this?" They will then have the option to delete or edit the comment before anyone can see it.
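The flow described here can be pictured as a simple pre-submission gate: score the draft comment with a classifier, and only interrupt the user when the score crosses a threshold. The sketch below is purely illustrative and is not Instagram's code; the `toxicity_score` function, the word list, the 0.8 cutoff, and the prompt wording are all assumptions standing in for a real trained model.

```python
# Illustrative sketch of a pre-submission "nudge" gate. Not Instagram's
# implementation: the classifier and threshold below are assumptions.

OFFENSIVE_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this


def toxicity_score(text: str) -> float:
    """Placeholder classifier: returns a rough probability that `text`
    is offensive. A production system would call a trained model here."""
    offensive_words = {"hate", "stupid", "ugly"}  # toy word list for the demo
    words = text.lower().split()
    hits = sum(1 for w in words if w in offensive_words)
    return min(1.0, hits / max(len(words), 1) * 3)


def submit_comment(text: str, confirm) -> bool:
    """Gate a comment before it is published.

    `confirm` is a callback that asks the user "Are you sure you want
    to post this?" and returns True if they choose to post anyway.
    Returns True if the comment ends up being published."""
    if toxicity_score(text) >= OFFENSIVE_THRESHOLD:
        # Warn first; the user keeps the final say, as in Instagram's design.
        return confirm(text)
    return True  # benign comments go straight through


if __name__ == "__main__":
    # Auto-decline the prompt to simulate a user deleting the comment.
    print(submit_comment("great photo!", confirm=lambda t: False))          # True
    print(submit_comment("you are stupid and ugly", confirm=lambda t: False))  # False
```

The key design point, reflected in the article, is that the gate only warns: the `confirm` callback leaves the final posting decision with the user rather than blocking the comment outright.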

Early tests of the feature showed that some users are less likely to post harmful comments once they've had a chance to reconsider their message, Instagram head Adam Mosseri said in a blog post.

Gmail has a similar feature that gives users 30 seconds to cancel an email after pressing Send.

Other social media platforms have tried to police the kind of content allowed on their sites. Twitter has begun flagging hateful or abusive tweets from politicians, and Facebook has banned some white supremacist and other accounts over hateful or offensive posts. But there is no hard-and-fast rule for what these platforms are supposed to restrict.

Monitoring harmful content on social media is a challenge. Justin Patchin, co-director of the Cyberbullying Research Center, says he works with various platforms that are trying to solve this problem.

With huge amounts of content created every second, Instagram is just one of the companies trying to use AI to monitor posts. Facebook and Twitter have both tried similar technology in the past. But AI moderation is imperfect, and algorithms often struggle to interpret slang and nuance across different languages.

Instagram's latest feature differs from the big social platforms' previous attempts to curb cyberbullying because it uses AI to warn users but ultimately lets them decide what to post.

"Transparency here is helpful for those who have wondered why these big social media companies are not doing more technological to combat bullying," Patchin said.

Instagram is the first major platform to try this method of stemming the spread of hateful content in its app. The concept is similar, however, to ReThink, an app created in 2013 by then-13-year-old Trisha Prabhu that also alerts users when their message may be offensive. ReThink has been praised for its innovation, but Patchin said such solutions are most effective when integrated into platforms that already draw significant traffic.

Patchin says the big social media companies are moving in the right direction and getting closer to an effective way of monitoring harmful content and cyberbullying.

"Companies have put a lot of energy into improving these systems, and they are improving every year," he said. "They have the responsibility and the obligation to show the way and to experience at least this type of technology."

Instagram plans to keep strengthening its safety features and will soon introduce a "Restrict" option that lets users filter content from specific accounts without blocking them. Mosseri wrote in the blog post that the company decided to add the feature after users said they worried that blocking accounts that posted offensive comments on their page could lead to retaliation.
