Social media giants should reimburse the British public for money lost to fraudulent advertising seen on their websites, the chairman of Parliament’s Digital, Culture, Media and Sport Committee has said.
Julian Knight was questioning representatives from Facebook, Twitter, YouTube and TikTok about online safety during an evidence session in Parliament this week.
The session was held in preparation for the Online Safety Bill, which is making its way through Parliament and will subject tech platforms to fines, restrictions and possible criminal liability if they fail to protect their users from harmful and illegal content.
Online fraud is widespread and increased during the pandemic, with consumers estimated to have lost more than £2.3bn to cyber scams between April 2020 and April 2021. This figure is based on reported crimes, so the true total is probably much higher. Last year Ofcom, the UK’s communications regulator, also found that scams and data hacking are UK adults’ top concerns about using the internet.
One form of online fraud misleads consumers through fake advertisements, which carry celebrity endorsements, promote products for sale on fake websites, or push schemes offering early access to pension pots or cryptocurrencies. The identity of Martin Lewis, founder of Moneysavingexpert.com, has often been used without his consent to promote scams, and he has lobbied the government over the issue.
[See also: What the Online Safety Bill means for social media]
In parliament, Knight called for better regulation and oversight by social media giants on who can advertise on their platforms and asked why they have to wait for new legislation “to do the right thing and prevent people from being robbed blind by a collection of fraudsters”.
He added that people had not only lost thousands of pounds to the scams, but in some cases their livelihoods and lives, referring to people who had died by suicide.
Only financial services companies authorised by the regulator, the Financial Conduct Authority (FCA), should be able to advertise on social media sites, he said, adding that this should have been enforced years ago.
“For many years you took money from these scammers,” he said. “The only thing you could [have done] to stop this was to prevent anyone who was not authorised by the FCA from advertising with you. Personally, I think it’s a disgrace and it’s been going on for too long. You should repay the defrauded money to the British public.”
Richard Earley, head of UK public policy at Facebook, said he “did not accept” that his company was doing nothing to stop the scams. Facebook, Microsoft and Twitter announced in late 2021 that they would only allow ads for financial services from FCA-authorised companies.
He said the platform already has technology in place to identify and remove fraudulent ads and has its own strict advertising compliance policies.
“You’re absolutely right that using the internet to try to commit fraud is a really serious problem,” he said. “We are not waiting for legislation to act. One of the big challenges we face is that fraudulent ads and scams are designed to be difficult to distinguish from genuine advertising.”
FCA chairman Charles Randell has also spoken out on the issue, notably challenging Google: in 2020 he said a framework was needed to prevent social media platforms and search engines from promoting scams, adding that the regulator currently pays for warning ads to counter the impact of fraudulent ones.
“It is frankly absurd that the FCA is paying hundreds of thousands of pounds to Google to warn consumers against investment advertising from which Google is already making millions in revenue,” he said.
[See also: Cyber-flashing is sexual harassment and must be made illegal, demand MPs]
Illegal content and child abuse
The four tech giants present at the evidence session were also asked about how they deal with illegal content, such as child sexual abuse and exploitation.
All four platforms said they have detection and removal tools in place, with Earley adding that Facebook has made its detection algorithm “open source” – freely available for other platforms to use.
Iain Bundred, head of public policy at YouTube, said the platform was working hard to remove this “odious material”, while Niamh McDade, deputy head of UK policy at Twitter, stressed the importance of cross-platform collaboration in tackling the problem. “We don’t want bad actors getting kicked off one platform and ending up on another,” she said. Elizabeth Kanter, UK government relations director at TikTok, described the tools in place to prevent under-15s from interacting with strangers, such as restrictions on direct messaging.
Online anonymity and hate speech
McDade was asked about the danger of anonymous Twitter accounts, especially those that post racial slurs. John Nicolson, SNP MP and committee member, said the platform’s identity verification process, which requires a date of birth and a contact number or email address to create an account, was “deeply flawed”.
She replied that there was “no place for racism on Twitter” and that tackling online abuse was a “priority”, but added that the platform also wants to protect anonymity.
“We want to protect the ability to use a pseudonym on the platform,” she said. “Using a pseudonym doesn’t always equate to abuse, just like using a real name doesn’t always indicate someone isn’t abusive.”
[See also: What’s illegal offline must be illegal online, says Damian Collins]