Tech giants are not obligated to remove harmful content

The UK’s Online Safety Bill, which aims to regulate the internet, has been revised to remove a controversial but critical measure.


LONDON – Social media platforms such as Facebook, TikTok and Twitter will no longer have to remove “legal but harmful” content under proposed changes to UK online safety legislation.

British lawmakers announced Monday that the Online Safety Bill, which aims to regulate the internet, will be revised to remove the controversial but critical measure.

The government said the amendment would help protect free speech and give people more control over what they see online.

However, critics have described the move as a “major weakening” of the bill, which risks undermining the accountability of tech companies.

Previous proposals would have required tech giants to prevent people from seeing legal but harmful content online, such as posts promoting self-harm or suicide and abusive material.

Under what the government calls a consumer-friendly “triple shield”, responsibility for choosing content will shift to internet users; tech companies will instead have to provide tools that let users filter out harmful content they do not want to see.

Most importantly, companies must still protect children and remove content that is illegal or prohibited by their terms of service.

‘Adult empowerment,’ ‘protecting freedom of speech’

UK Culture Secretary Michelle Donelan said the new plans would “ensure that no tech firm or future government can use the laws as a license to censor legitimate views”.

“Today’s announcement focuses the Online Safety Bill on its original goals: protecting children and combating online criminal activity, while protecting free speech, holding tech firms accountable to their users, and empowering adults to make more informed choices about the platforms they use,” the government said in a statement.

The opposition Labour Party described the amendment as a “major weakening” of the bill that could fuel disinformation and conspiracy theories.

Replacing harm prevention with an emphasis on free speech defeats the purpose of this bill.

Lucy Powell

shadow culture secretary, Labour Party

“Replacing harm prevention with an emphasis on free speech defeats the purpose of this bill and will embolden abusers, COVID deniers and fraudsters, who will be encouraged to thrive online,” said Shadow Culture Secretary Lucy Powell.

Meanwhile, the suicide prevention charity Samaritans said increased user controls should not replace tech company accountability.

Julie Bentley, chief executive of Samaritans, said: “Increasing people’s control is no substitute for holding sites to account, and it’s like the government snatching defeat from the jaws of victory.”

The devil is in the detail

Monday’s announcement is the latest iteration of the UK’s sweeping Online Safety Bill, which also includes identity verification tools and new criminal offenses to tackle fraud and revenge porn.

It follows a months-long campaign by free speech advocates and online advocacy groups. Meanwhile, Elon Musk’s acquisition of Twitter has put online content moderation back in the spotlight.

The proposals will return to the British Parliament next week and are expected to become law before next summer.

However, commentators say the bill needs to be further refined to ensure loopholes are closed before then.

“The devil will be in the detail. There is a risk that Ofcom’s policing of social media terms and ‘consistency’ requirements could encourage overzealous removals,” said Matthew Lesh, head of public policy at the free market think tank the Institute of Economic Affairs.

Communications and media regulator Ofcom will be responsible for most of the enforcement of the new law and will be able to fine companies up to 10% of their worldwide revenue for non-compliance.

“There are other issues that the government has not addressed,” Lesh said. “Requirements to remove content that firms can ‘reasonably infer’ is illegal set an extremely low threshold and risk pre-emptive automated censorship.”
