If you’re a fan of English football, you might be aware of a four-day boycott of social media being held by Premier League clubs, starting on 30 April, in an effort to combat abuse and discrimination on the networks. With a number of footballers having spoken out about experiencing racist, sexist, and other forms of abuse on Instagram, the platform is now trying to do more to protect users who are regularly targeted by hate speech.
From now on, direct messages (DMs) containing words, or even emojis, deemed offensive will be removed from view. The tool will focus on filtering message requests from people a user does not follow, because this is the most common route for abusive messages. It will not affect a user’s regular DM inbox, which means they will still be able to receive messages from friends as normal.
With the filter in place, any content deemed abusive will be automatically moved into a separate hidden requests folder. Users can choose to open this folder if they wish, but the message text will be covered so that they are not immediately confronted with offensive language, unless they specifically tap to uncover it. Once seen, a user has the same options available as before: accept the message request, delete it, or report it.
Which phrases and emojis are blocked has been decided by Instagram in collaboration with anti-discrimination and anti-bullying groups. However, users can also add their own words to this list through the Hidden Words section of the app’s privacy settings.
The social media site has also said that, from now on, it will disable the accounts of users found to have repeatedly sent abusive private messages. In addition, it will prevent someone you’ve blocked from contacting you from a new account.
The impact of regularly receiving abusive messages on a user’s mental health can be significant, and it has been noted for a while that Instagram in particular has a serious harassment problem. Many celebrities, including Ariana Grande, Khloe Kardashian, and Justin Bieber, have quit the site in the past, citing online trolling and abuse as the reason for their departure.
To curb abuse in the comments section on the photos and videos users post, the company has already rolled out a similar feature that filters out abusive words. As part of these most recent changes to combat harassment, the company is also refining its algorithms for detecting abusive comments. If users choose, through a setting, to disallow ‘offensive’ words in comments made on their content, Instagram will now also hide common misspellings of these words — a way some users have gotten around the existing policing.
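Instagram hasn’t published how its misspelling detection works, but the general idea — normalising away the character substitutions people use to dodge word filters before checking against a hidden-words list — can be sketched roughly like this (the word list, substitution map, and function names here are illustrative assumptions, not Instagram’s actual implementation):

```python
import re

# Hypothetical hidden-words list; in the app this would combine Instagram's
# defaults with any words a user adds under Hidden Words.
HIDDEN_WORDS = {"idiot", "loser"}

# Common filter-dodging substitutions (an assumed mapping for illustration).
SUBSTITUTIONS = str.maketrans(
    {"@": "a", "0": "o", "1": "i", "!": "i", "3": "e", "$": "s", "5": "s"}
)

def normalise(token: str) -> str:
    """Lowercase, undo common character substitutions, collapse repeats."""
    token = token.lower().translate(SUBSTITUTIONS)
    # "looooser" -> "loser"
    return re.sub(r"(.)\1+", r"\1", token)

def should_hide(comment: str) -> bool:
    """Return True if any normalised token matches a hidden word."""
    tokens = re.findall(r"[\w@!$]+", comment)
    return any(normalise(t) in HIDDEN_WORDS for t in tokens)
```

So `should_hide("what an 1D1OT")` would return `True`, because `"1d1ot"` normalises to `"idiot"`. A real system would be far more sophisticated (machine-learned classifiers rather than simple string rules), but this captures why naive misspellings no longer slip through.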
The tool will be available in the UK, France, Ireland, Germany, Australia, New Zealand and Canada by the end of May, with more countries added subsequently.
It will be interesting to see how users targeted by such abuse find these new changes. Protecting mental health on social media is an ongoing balancing act between free speech and abuse prevention, but with clubs and other organisations boycotting the platforms over their failure to protect users, this might be a turning point for a friendlier world online.
Will you be using this feature? How successful do you think it will be? Let us know in the comments section, and, if you have any questions or other technology queries tweet @techtroublesho1.