The UK’s problematic Online Safety Act is now law


Jeremy Wright was the first of five UK ministers charged with pushing through the British government’s landmark legislation on regulating the Internet, the Online Safety Bill. The current UK government likes to brand its initiatives as “world-beating,” but for a brief period in 2019 that might have been right. Back then, three prime ministers ago, the bill—or at least the white paper that would form its basis—outlined an approach that recognized that social media platforms were already de facto arbiters of what was acceptable speech on large parts of the Internet, but that this was a responsibility they didn’t necessarily want and weren’t always capable of discharging. Tech companies were pilloried for harmful content they missed and, by free speech advocates, for the posts they took down. “There was a sort of emerging realization that self-regulation wasn’t going to be viable for very much longer,” Wright says. “And therefore, governments needed to be involved.”

The bill set out to define a way to handle “legal but harmful” content—material that wasn’t explicitly against the law but which, individually or in aggregate, posed a risk, such as health care disinformation, posts encouraging suicide or eating disorders, or political disinformation with the potential to undermine democracy or create panic. The bill had its critics—notably, those who worried it gave Big Tech too much power. But it was widely praised as a thoughtful attempt to deal with a problem that was growing and evolving faster than politics and society were able to adapt. Of his 17 years in Parliament, Wright says, “I’m not sure I’ve seen anything by way of potential legislation that’s had as broadly based a political consensus behind it.”

Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent this week. It is no longer world-beating—the European Union’s competing Digital Services Act came into force in August. And the Online Safety Act enters into law as a broader, more controversial piece of legislation than the one that Wright championed. The act’s more than 200 clauses cover a wide spectrum of illegal content that platforms will be required to address and give platforms a “duty of care” over what their users—particularly children—see online. Some of the more nuanced principles around the harms caused by legal but harmful content have been watered down, and the act adds a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material; tech companies and privacy campaigners say this scanning is an unwarranted attack on encryption.

Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.) Platforms will have to prevent younger users from seeing age-inappropriate content, such as pornography, cyberbullying, and harassment; release risk assessments on potential dangers to children on their services; and give parents easy pathways to report concerns. Sending threats of violence, including rape, online will now be illegal, as will assisting or encouraging self-harm online or transmitting deepfake pornography, and companies will need to act quickly to remove such content from their platforms, along with scam adverts.

In a statement, UK Technology Secretary Michelle Donelan said: “The Bill protects free speech, empowers adults and will ensure that platforms remove illegal content. At the heart of this Bill, however, is the protection of children. I would like to thank the campaigners, parliamentarians, survivors of abuse and charities that have worked tirelessly, not only to get this Act over the finishing line, but to ensure that it will make the UK the safest place to be online in the world.”

Enforcement of the act will be left to the UK’s telecommunications regulator, Ofcom, which said in June that it would begin consultations with industry after royal assent was granted. It’s unlikely that enforcement will begin immediately, but the law will apply to any platform with a significant number of users in the UK. Companies that fail to comply with the new rules face fines of up to £18 million ($21.9 million) or 10 percent of their annual revenue, whichever is larger.

Some of the controversy around the act is less about what is in it and more about what isn’t. The long passage of the legislation means that its development straddled the Covid-19 pandemic, giving legislators a live view of the social impact of mis- and disinformation. The spread of anti-vaccination and anti-lockdown messages became an impediment to public health initiatives. After the worst of the pandemic was over, those same falsehoods fed into other conspiracy theories that continue to disrupt society. The original white paper that was the bill’s foundation included proposals for compelling platforms to tackle this kind of content—which individually might not be illegal but which en masse creates dangers. That’s not in the final legislation, although the act does create a new offense of “false communications,” criminalizing deliberately causing harm by communicating something the sender knows to be untrue.
