Instagram takes aim at ‘offensive’ captions with new warning prompts

As part of an ongoing effort to curb online bullying, Instagram today announced that it’s rolling out a new feature that will notify users when they’ve written a caption on a photo or video that “may be considered offensive.”

The alerts will be powered by AI developed specifically for the task, which works by comparing captions against a database of captions that have previously been reported for bullying. Users will need to tap through a warning message before the post can be uploaded.
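
Instagram hasn’t published details of how its system works beyond the description above: a draft caption is compared against previously reported ones, and a match triggers a warning without blocking the post. The sketch below illustrates that flow only; the flagged-caption list, the threshold, and the use of simple string similarity (rather than Instagram’s actual machine-learning model) are all invented for illustration.

```python
# Illustrative sketch only. Instagram's real system is an ML model trained on
# reported captions; this stands in a simple string-similarity check.
from difflib import SequenceMatcher

FLAGGED_CAPTIONS = [          # hypothetical examples of previously reported captions
    "you are such a loser",
    "nobody likes you, just quit",
]

SIMILARITY_THRESHOLD = 0.75   # invented cutoff for the demo


def caption_may_be_offensive(draft: str) -> bool:
    """Return True if the draft closely resembles a previously reported caption."""
    draft = draft.lower().strip()
    return any(
        SequenceMatcher(None, draft, flagged).ratio() >= SIMILARITY_THRESHOLD
        for flagged in FLAGGED_CAPTIONS
    )


if __name__ == "__main__":
    draft = "you are such a loser lol"
    if caption_may_be_offensive(draft):
        # Mirror the product flow: warn the user, but don't block the post.
        print("This caption may be considered offensive. Edit, learn more, or share anyway?")
    else:
        print("Caption posted.")
```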

[Image: the warning message shown when a caption is deemed offensive.]

The notification doesn’t block users from posting potentially offensive captions outright, but Instagram is clearly hoping the design will make users reconsider their words before posting: when a caption is flagged, the author can edit it, learn more about why it was flagged, or share the post anyway.

The social media platform hopes the feature will limit the reach of online bullying and help educate users about what it doesn’t allow and when their activity may be at risk of breaching its rules.

The new feature follows a similar change Instagram introduced in July, when it started notifying users when a proposed comment could be considered offensive. Instagram says those nudges have been effective in encouraging users to reconsider their words.

The flagging feature is rolling out in select countries today and will expand globally next year – though Instagram has not specified which countries will receive the update first.
