British MPs urge government to fine social media companies that fail to tackle online abuse

(Image by Erik Lucatero from Pixabay)

The UK Government’s planned Online Safety Bill has recently come under fire from a group of MPs who say that in its current form it is not robust enough to tackle illegal content, nor does it protect freedom of speech. In other words, it’s the worst of both worlds.

This week, MPs on the Petitions Committee are also calling on the government to take the Online Safety Bill further by introducing financial penalties for social media companies that don’t do enough to tackle the online abuse experienced by their users.

The Committee also suggests that abusive users should face legal penalties where appropriate.

The issue of online abuse is a matter of real public concern, as evidenced by the number of online petitions calling for action on it. One such petition, calling for identity verification to be made compulsory when opening a social media account, received almost 700,000 signatures in six months, making it the most popular petition of 2021.

I have previously explained why mandatory identification for social media accounts is a terrible idea, but it is clear that online abuse is a widespread problem and that more needs to be done to address it.

The Petitions Committee collected testimony in late 2021 from a range of witnesses, including civil society and campaign groups, experts in legal, regulatory and technological responses to online harm, and the social media companies Meta, TikTok and Twitter.

In addition to suggesting fines for social media companies that don’t do enough to protect users from online abuse, the Committee also recommends that users be given the ability to filter out interactions from other users who do not have an account linked to a verified ID.

The Committee began its investigation in light of an electronic petition created by Bobby Norris in 2019 which called for online homophobia to be made a specific criminal offence. Norris then created a second petition later that year calling on the government to ensure that online trolls are held accountable for their actions via their IP address.

Commenting on the findings of the report, the Chair of the Petitions Committee, MP Catherine McKinnell, said:

Online abuse is a silent threat, and this report outlines our recommendations to help tackle the enormous harm it causes and ensure perpetrators face appropriate consequences for their actions.

We spoke to students across the country who told us they felt online abuse was just a part of living online. This is incredibly alarming and shows how important it is that we tackle this problem.

The issue of banned users returning to social media platforms and continuing to send abuse was raised in Bobby Norris’ petition, which prompted our investigation. We’ve heard that social media companies need to put a higher priority on preventing this type of recurrence, and the government should ensure this is part of companies’ new online safety obligations.

Even when the abuse does not reach a criminal threshold, it can still have a significant impact on those who experience it, not only on their health, but also on their ability to express themselves freely online. Social media platforms should take proactive steps to create safer online spaces for everyone.


The Committee’s report makes a number of recommendations, including:

  • As part of its new mandate as online safety regulator, Ofcom should regularly report on the impact of online abuse, illegal hate speech and violence against women and girls content on the biggest social media platforms. This should include disaggregated estimates of the likelihood of a user encountering or being the target of abusive content based on characteristics such as race, disability, sexuality and gender, as well as differentiating between child and adult users.

  • The Online Safety Bill should include a legal obligation for the government to consult civil society organisations representing children and the users most affected by online abuse on the continued effectiveness of the legislation in tackling online abuse, and on how it could be refined to better achieve this goal.

  • The Online Safety Bill should include abuse based on characteristics protected by the Equality Act and hate crime legislation as priority harmful content in primary legislation.

  • It should also list hate crime offences and offences of violence against women and girls as specific relevant offences under the bill’s illegal content safety duties, and specify the particular offences covered by these headings, as the bill already does for terrorism and child sexual exploitation and abuse offences.

  • Platforms should be required to separately consider the different risks faced by particular groups, including women, users from ethnic minorities, users with disabilities and LGBT+ users, and this requirement should be made explicit in the risk assessment obligations set out in the Online Safety Bill.

  • The government should ensure that police and other law enforcement agencies have adequate resources to effectively investigate and prosecute communications offences, hate crimes and offences of violence against women and girls committed online. Police officers should also receive adequate training to identify when these offences have been committed and to support victims of these offences when they come forward.

  • Social media platforms must have robust methods to track users who post content that violates the platform’s terms of service, and must effectively enforce their own sanctions against such users. Social media platforms should also be required to demonstrate to Ofcom that they can identify previously banned users seeking to create new accounts and, where a platform’s rules prohibit such users from returning to the platform, that the platform applies these rules adequately. Ofcom should have the power to impose fines or take other enforcement action if a platform is unable to demonstrate this.

  • The government should expect the biggest social media platforms to offer users the ability to filter content based on the user’s verification status and to block content from users who have chosen not to verify their account. User verification need not take the form of an identity document, and Ofcom should conduct research to establish possible methods of account verification that provide a robust means of reducing users’ exposure to harmful content while being as inclusive and accessible as possible.

My take

Online abuse affects the experiences of so many people on social media – especially women, ethnic minority users, users with disabilities and LGBT+ users. The effects of being abused by strangers online are not confined to the internet; they ultimately lead to negative consequences for people offline as well. And for too long, social media companies have been left to “self-regulate,” which simply means abuse continues unabated. While I don’t think mandatory identity verification is a good idea, I do think regulators and governments can do more to drive change within social media companies. The companies in question are reluctant to do anything that might limit engagement and/or growth, so intervention is necessary. Many of these recommendations make good sense – we now await the government’s response.


