Social media site Instagram has launched a set of new features to help protect its users from abuse on its platform.
“We have a responsibility to make sure everyone feels safe when they come to Instagram,” says the Facebook-owned photo-sharing platform.
The company says that it is constantly listening to feedback from experts and users in its community to develop new ways to give people more control over their experience on Instagram and help protect them from abuse.
“We don’t allow hate speech or bullying on Instagram, and we remove it whenever we find it. We also want to protect people from having to experience this abuse in the first place.”
Here are the new ways Instagram is preventing abuse:
Limits: Easily Preventing Unwanted Comments and DMs
To help protect people when they experience or anticipate a rush of abusive comments and DMs, Instagram is introducing Limits: a feature that is easily turned on and will automatically hide comments and DM requests from people who don’t follow you, or who only recently started following you.
“We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases, this is an outpouring of support — like if they go viral after winning an Olympic medal,” Instagram says.
“But sometimes it can also mean an influx of unwanted comments or messages. Now, if you’re going through that — or think you may be about to — you can turn on Limits and avoid it.”
Instagram’s research shows that a lot of the negativity towards public figures comes from people who don’t actually follow them, or who have only recently followed them, and who simply pile on in the moment. The company points to the recent Euro 2020 final, which was followed by a significant, and unacceptable, spike in racist abuse towards players.
However, creators have told the company that they don’t want to switch off comments and messages completely; they still want to hear from their community and build those relationships. Limits allows users to keep hearing from their long-standing followers while limiting contact from people who might only be coming to their account to target and harass them.
Limits is now available to everyone on Instagram globally. Go to your privacy settings to turn it on or off whenever you want. Instagram is also exploring ways to detect when users may be experiencing a spike in comments and DMs, so that when this happens it can prompt them to turn on Limits.
Stronger Warnings to Discourage Harassment
Instagram already shows a warning when someone tries to post a potentially offensive comment.
And if they try to post potentially offensive comments multiple times, the company will show an even stronger warning – reminding them of its Community Guidelines and warning them that it may remove or hide their comment if they proceed.
Now, rather than waiting for the second or third comment, Instagram shows this stronger message the first time, giving an earlier warning to users who are considering going against its Community Guidelines.
Combatting Abuse in DMs and Comments
To help protect people from abuse in their DM requests, Instagram recently announced ‘Hidden Words’, which lets you automatically filter offensive words, phrases and emojis into a Hidden Folder that you never have to open if you don’t want to.
It also filters DM requests that are likely to be spammy or low-quality. The social network launched this feature in a handful of countries earlier this year, and it will be available for everyone globally by the end of August. “We’ll continue to encourage accounts with large followings to use it, with messages both in their DM inbox and at the front of their Stories tray,” Instagram says.
“We’ve expanded the list of potentially offensive words, hashtags and emojis that we automatically filter out of comments and will continue updating it frequently. We recently added a new opt-in option to ‘Hide More Comments’ that may be potentially harmful, even if they may not break our rules.”