Age checks, trolls and deepfakes: what’s in the online safety bill?


The online safety bill returns to the House of Commons on Tuesday, with the government promising to introduce a major change: criminal penalties for tech executives whose platforms fail to protect children from harm online.

It is the latest change to legislation that has sparked debate over a range of issues, from free speech to dealing with trolls and age verification for pornography sites. Here is a brief overview of the bill as it stands.

How does the bill work?

A cornerstone of the bill is the duty of care it would place on tech companies to protect users from harmful content. The legislation will apply to platforms with user-generated content, including social media services such as Twitter, TikTok and Facebook, and search engines such as Google. Although many of these services are based outside the UK, they fall within the scope of the bill if they are available to UK users.

All tech firms covered by the bill would have to protect all users from illegal content. The types of content that platforms must remove include child sexual abuse material, revenge pornography, illegal drug or arms sales, and terrorism.

Tech platforms will also have a duty to ensure children’s online safety. This will include preventing children from accessing harmful content and enforcing age restrictions on social media platforms (the minimum age is usually 13). Platforms will have to explain in their terms of service how they enforce these age restrictions and what technology they use to police them.

In relation to both of these duties, tech firms will need to conduct risk assessments detailing the dangers their services may pose in terms of illegal content and children’s safety. They will then have to explain how they will mitigate those threats, for example through human moderators or artificial intelligence tools, in a process overseen by the communications regulator Ofcom. The bill is expected to come into effect by the end of the year.

What are the penalties for companies under the law?

Ofcom will have a range of regulatory powers under the bill. At the top end, it could impose fines of up to £18m, or 10% of global turnover – a hefty figure for a company like Meta, which had revenues of just under $118bn in 2021. In the most extreme cases, rogue sites could be blocked by ordering payment providers, advertisers and internet service providers to stop working with them. Ofcom will also have the power to issue notices requiring companies and platforms to improve their compliance with the bill.

Can CEOs be jailed under the law?

Even before the government agreed to back the rebel amendment on Monday, tech executives already faced up to two years in prison under the law if they obstructed an Ofcom investigation or data request.

Now, they also face up to two years in prison if they persistently ignore enforcement notices telling them they have breached their duty of care. In the face of objections from tech companies to criminal liability, the government stresses that the new offence will not criminalise executives who “act in good faith to carry out their duties proportionately”.

Nevertheless, it will concentrate the minds of social media executives. The new offence will target senior managers who “ignore mandatory requirements”.

Are there other criminal offences?

The bill will introduce a range of criminal offences for England and Wales. These include encouraging people to self-harm, sharing pornographic “deepfake” images, taking and sharing “degrading” images, cyber-flashing (sending unwanted sexual images) and sending or posting messages that threaten serious harm.

How does it deal with pornography and age verification?

If a platform publishes pornography, it must have “robust” processes to verify that users are of legal age. How this is done will depend on the platform – there are a number of tools that can be used to verify a user’s age – but it will be checked by Ofcom. The government has said any age verification method used by pornography sites would have to protect users’ data, reflecting privacy campaigners’ concerns that requiring users to log in to porn websites could make it easier to collect and leak information about a person’s browsing habits.

Will it protect adults from online trolls and abuse?

Under a previous iteration, the bill placed a duty of care on the big platforms to deal with content that is harmful but not illegal. This alarmed free speech advocates on the Conservative backbenches and elsewhere, so it was removed. Instead, tech firms will be required to remove certain types of “legal but harmful” content if it is banned under their terms of service, under a provision that seeks to ensure platforms pay more than lip service to their content rules. Adults will also have the ability to screen out certain types of harmful content if they choose. This includes posts that are abusive or hateful on the basis of race, ethnicity, religion, disability, gender reassignment or sexual orientation.
