
Instagram to Flag Hateful Comments Before You Send Them


Instagram is rolling out a feature that will notify users when their comments may contain harmful content before others see them. (Chandan Khanna/AFP/Getty Images)

Instagram is rolling out a feature that will urge users to think twice before posting hateful comments, in an effort to minimize cyberbullying on the massive social media platform.

The new feature uses artificial intelligence to screen content and notify users if their post may be harmful or offensive. Users will see a message: "Are you sure you want to post this?" They will then have the option to remove or change the comment before anyone else is able to see it.


Early tests of this feature found that some users are less likely to post harmful comments once they've had a chance to reflect on their post, Instagram chief Adam Mosseri wrote in a blog post.

Gmail has a similar feature that gives users 30 seconds to cancel an email after pressing send.

Other social media platforms have made attempts to monitor the type of content allowed on their platforms.

Twitter has started to flag hateful or offensive tweets from politicians, and on Tuesday announced it will now prohibit hate speech targeting religious groups by using dehumanizing language, a ban it says may extend to other categories like race and gender. Facebook has banned some white supremacists and other accounts for hateful or offensive posts. But there is no hard-and-fast rule for what these platforms are expected to restrict.

Monitoring harmful content on social media is challenging. Justin Patchin, co-director of the Cyberbullying Research Center, says he works with different platforms that are trying to find a solution to this problem.

With massive amounts of content being created every second, Instagram is just one of the companies attempting to use AI to monitor posts. Both Facebook and Twitter have tried to use the technology in the past. But AI monitoring comes with challenges, and the algorithms often have a hard time interpreting slang and the nuances in different languages.


Instagram's latest feature is different from previous attempts by big social platforms to prevent cyberbullying because it uses AI to warn users but ultimately allows them to make the decision on what to post.

"The transparency here is helpful to those who have wondered why these big social media companies aren't doing more technologically to address bullying," Patchin said.

Instagram is the first big platform to try this method of preventing hateful content from circulating in its app. However, the concept is similar to that of an app created by Trisha Prabhu in 2013. The then-13-year-old built a social platform called ReThink, which also alerts users when their message may be offensive. ReThink was praised for its innovation, but Patchin says such solutions need to be incorporated into already widely trafficked platforms to be most effective.

Patchin says these big social companies are moving in the right direction and are getting closer to finding a method for monitoring harmful content and cyberbullying.

"Companies have devoted a lot of energy to refining these systems, and they're getting better every year," he said. "They do have a responsibility and obligation to lead the way and at least experiment with these kinds of technologies."

Instagram has plans to continue beefing up its safety features and will soon introduce a "restrict" feature, which allows users to filter content from specific accounts without blocking them. Instagram's Mosseri wrote in the blog post that the company decided to add that feature after users said they were worried that blocking accounts that were posting offensive comments on their page would lead to retaliation.

Copyright 2019 NPR. To see more, visit NPR.org.
