The bill aims to make the country ‘the safest place in the world to be online’ but has been mired in delays and criticism that it’s grown too large and unwieldy to please anyone.
At some point this year, the UK’s long-delayed Online Safety Bill is finally expected to become law. In the government’s words, the legislation is an attempt to make the UK “the safest place in the world to be online” by introducing a range of obligations for how large tech firms should design, operate, and moderate their platforms.
As any self-respecting Verge reader knows, content moderation is never simple. It’s difficult for platforms, difficult for regulators, and difficult for lawmakers crafting the rules in the first place. But even by the standards of internet legislation, the Online Safety Bill has had a rocky passage. It’s been developed over years during a particularly turbulent era in British politics, changing dramatically from year to year. And as an example of just how controversial the bill has become, some of the world’s biggest online organizations, from WhatsApp to Wikipedia, are preemptively refusing to comply with its potential requirements.
So if you’ve tuned out the Online Safety Bill over the past few years — and let’s be honest, a lot of us have — it’s time to brush up. Here’s where the bill came from, how it’s changed, and why lawmakers might be about to finally put it on the books.
So let’s start from the beginning. What is the Online Safety Bill?
The UK government’s elevator pitch is that the bill is fundamentally an attempt to make the internet safer, particularly for children. It attempts to crack down on illegal content like child sexual abuse material (CSAM) and to minimize the possibility that kids might encounter harmful and age-inappropriate content, including online harassment as well as content that glorifies suicide, self-harm, and eating disorders.
But it’s difficult to TL;DR the Online Safety Bill at this point, precisely because it’s become so big and sprawling. On top of these broad strokes, the bill has a host of other rules. It requires online platforms to let people filter out objectionable content. It introduces age verification for porn sites. It criminalizes fraudulent ads. It requires sites to consistently enforce their terms of service. And companies that don’t comply could be fined up to £18 million (around $22.5 million) or 10 percent of global revenue (whichever is greater), have their services blocked in the UK, and even see their executives jailed.
In short, the Online Safety Bill has become a catchall for UK internet regulation, mutating every time a new prime minister or digital minister has taken up the cause.
How many prime ministers are we talking about here?
So far? Four.
Wait, how long has this bill been in the works for?
The Online Safety Bill started with a document called the “Online Harms White Paper,” which was unveiled way back in April 2019 by then-digital minister Jeremy Wright. The death of Molly Russell by suicide in 2017 brought into sharp relief the dangers of children being able to access content relating to self-harm and suicide online, and other events like the Cambridge Analytica scandal had created the political impetus to do something to regulate big online platforms.
The idea was to introduce a so-called “duty of care” for big platforms like Facebook — similar to how British law asks employers to look after the safety of their employees. This meant companies would have to perform risk assessments and proactively address potential harms rather than play whack-a-mole with problems as they crop up. As Carnegie UK associate Maeve Walsh puts it, “Interventions could take place in the way accounts are created, the incentives given to content creators, in the way content is spread as well as in the tools made available to users before we got to content take down.”
The white paper laid out fines and the potential to block websites that don’t comply. At that point, it was among the broadest and potentially strictest online regulation to have been proposed anywhere in the world.
What was the response like at the time?
Obviously, there was a healthy amount of skepticism (Wired’s take was simply titled “All that’s wrong with the UK’s crusade against online harms”), but there were hints of cautious optimism as well. Mozilla, for example, said the overall approach had “promising potential,” although it warned about several issues that would need to be addressed to avoid infringing on people’s rights.
If the British government was on to such a winner, why hasn’t it passed this bill four years later?
Have you paid attention to British politics at all in the past four years? The original white paper was introduced four prime ministers and five digital ministers ago, and it seems to have been forced into the back seat by more urgent matters like leaving the European Union or handling the covid-19 pandemic.
But as it’s passed through all these hands, the bill has ballooned in size — picking up new provisions and sometimes dropping them when they’re too controversial. In 2021, when the first draft of the bill was presented to Parliament, it was “just” 145 pages long, but by this year, it had almost doubled to 262 pages.
Where did all those extra pages come from?
Given the bill’s broad ambitions for making online life safer in general, many new elements were added by the time it returned to Parliament in March 2022. In no particular order, these included:
- Age checks for porn sites
- Measures to clamp down on “anonymous trolls” by requiring that services give the option for users to verify their identity
- Criminalizing cyberflashing (aka the sending of unsolicited nudes via social media or dating apps)
- Cracking down on scam ads
Over time, the bill’s definition of “safety” has started to look pretty vague. A provision in the May 2021 draft forbade companies “from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation,” echoing now-familiar fears that conservative voices are unfairly “censored” online. Bloomberg called this an “anti-censorship” clause at the time, and it remains in the 2023 version of the bill.
And last November, ministers were promising to add even more offenses to the bill, including downblousing and the creation of nonconsensual deepfake pornography.
Hold up. Why does this pornography age check sound so familiar?
The Conservative Party has been trying to make it happen since well before the Online Safety Bill. Age verification was a planned part of the Digital Economy Bill in 2016 and then was supposed to happen in 2019 before being delayed and abandoned in favor of rolling the requirements into the Online Safety Bill.
The problem is that it’s very difficult to come up with an age verification system that can’t be circumvented in minutes and that doesn’t risk linking someone’s most intimate web browsing to their real-life identity (a plan to let users buy a “porn pass” from a local shop notwithstanding).
And it’s not clear how the Online Safety Bill will overcome this challenge. An explainer by The Guardian notes that Ofcom will issue codes of practice on how to determine users’ ages, with possible approaches including having age verification companies check official IDs or bank statements.
Regardless of the difficulties, the government is pushing ahead with the age verification requirements, which is more than can be said for its proposed rules around “legal but harmful” content.
And what exactly were these “legal but harmful” rules?
Well, they were one of the most controversial additions to the entire bill — so much so that they’ve been (at least partially) walked back.
Originally, the government said it should officially designate certain content as harmful to adults but not necessarily illegal — things like bullying or content relating to eating disorders. (It’s the less catchy cousin of “lawful but awful.”) Companies wouldn’t necessarily have to remove this content, but they’d have to do risk assessments about the harm it might pose and set out clearly in their terms of service how they plan to tackle it.
But critics were wary of letting the state define what counts as “harmful,” the fear being that ministers would have the power to censor what people could say online. At a certain point, if the government is formally pushing companies to police legal speech, it’s debatable how “legal” that speech still is.
This criticism had an effect. The “legal but harmful” provisions for adults were removed from the bill in late 2022 — and so was a “harmful communications” offense that covered sending messages that caused “serious distress,” something critics feared could similarly criminalize offensive but legal speech.
Instead, the government introduced a “triple shield” covering content meant for adults. The first “shield” rule says platforms must remove illegal content like fraud or death threats. The second says anything that breaches a website’s terms of service should be moderated. And the third says adult users should be offered filters to control the content they see.
The thinking here is that most websites already restrict “harmful communications” and “legal but harmful” content, so if they’re told to apply their terms of service consistently, the problem (theoretically) takes care of itself. Conversely, platforms are actively prohibited from restricting content that doesn’t breach the terms of service or break the law. Meanwhile, the filters are supposed to let adults decide whether to block objectionable content like racism, antisemitism, or misogyny. The bill also tells sites to let people block unverified users — aka those pesky “anonymous trolls.”
None of this impacts the rules aimed specifically at children — in those cases, platforms will still have a duty to mitigate the impact of legal but harmful content.
I’m glad that the government addressed those problems, leaving a completely uncontroversial bill in its wake.
Wait, sorry. We’re just getting to the part where the UK might lose encrypted messaging apps.
Excuse me?
Remember WhatsApp? After the Online Safety Bill was introduced, it took issue with a section that asks online tech companies to use “accredited technology” to identify child sexual abuse content “whether communicated publicly or privately.” Since personal WhatsApp messages are end-to-end encrypted, not even the company itself can see their contents. WhatsApp says that any requirement to identify CSAM in those messages would inevitably compromise that encryption.
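To see why those two demands collide, it helps to remember what end-to-end encryption actually guarantees: messages are encrypted on the sender’s device with keys that only the sender and recipient hold, so the service in the middle only ever relays ciphertext. Here’s a deliberately minimal Python sketch of that property using the PyNaCl library; WhatsApp really uses the far more elaborate Signal protocol, so treat this as an illustration of the core idea rather than of how WhatsApp works.

```python
# Minimal illustration of end-to-end encryption (not the Signal protocol):
# only the intended recipient's private key can decrypt, so a relay server
# never sees anything but ciphertext.
# Requires: pip install pynacl
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()  # sender's keypair, generated on her device
bob = PrivateKey.generate()    # recipient's keypair, generated on his device

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 6pm")

# The server relaying `ciphertext` sees only random-looking bytes.
# Only Bob, holding his private key, can recover the plaintext.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at 6pm'
```

Any system that could flag CSAM inside those messages would need access to the plaintext somewhere along the way, which is exactly what this design is meant to rule out.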
WhatsApp is owned by Meta, which is persona non grata among regulators these days, but it’s not the only encrypted messaging service whose operators are concerned. WhatsApp head Will Cathcart wrote an open letter that was co-signed by the heads of six other messaging apps, including Signal. “If implemented as written, [this bill] could empower Ofcom to try to force the proactive scanning of private messages on end-to-end encrypted communication services - nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users,” says the letter. “In short, the bill poses an unprecedented threat to the privacy, safety and security of every UK citizen and the people with whom they communicate.”
“We are proud to stand with others pushing back on a law that threatens UK citizens’ right to safety and privacy,” WhatsApp posted on April 18, 2023, linking to the full letter: https://t.co/75sZt6AcXC
The consensus among legal and cybersecurity experts is that the only way to monitor for CSAM while maintaining encryption is some kind of client-side scanning, where content is checked on the user’s device before it’s ever encrypted and sent. Apple announced in 2021 that it would take this approach for image uploads to iCloud but ditched the plan the following year amid widespread criticism from privacy advocates.
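For a rough picture of what client-side scanning means in practice, here’s a short Python sketch: hash an image on the user’s device and compare it against a database of known-abuse hashes before the message is encrypted. Real proposals (like Apple’s 2021 NeuralHash design) rely on purpose-built perceptual hashes and blinded matching; the imagehash library, the placeholder hash database, and the threshold below are purely illustrative assumptions.

```python
# Illustrative sketch of on-device ("client-side") scanning: the check runs
# on the plaintext image *before* encryption, which is why critics say it
# sits outside the guarantees of end-to-end encryption.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Placeholder set of known-image hashes (real systems use vetted,
# purpose-built hash databases, not ad hoc entries like this one).
KNOWN_HASHES = {imagehash.hex_to_hash("8f373714acfcf4d0")}
MATCH_THRESHOLD = 5  # max Hamming distance to count as a match (illustrative)

def should_flag(image_path: str) -> bool:
    """Return True if the image perceptually matches a known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

# A messaging client would call should_flag() on an attachment before
# encrypting and sending it, then block or report the message on a match.
```

The important part is where the check happens: on the device, against unencrypted content. That is the hook privacy groups worry could later be pointed at other kinds of material.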
Organizations such as the Internet Society say that such scanning risks creating vulnerabilities for criminals and other attackers to exploit and that it could eventually lead to the monitoring of other kinds of speech. The government disagrees and says the bill “does not represent a ban on end-to-end encryption, nor will it require services to weaken encryption.” But without an existing model for how such monitoring can coexist with end-to-end encryption, it’s hard to see how the law could satisfy critics.
The UK government already has the power to demand that services remove encryption thanks to a 2016 piece of legislation called the Investigatory Powers Act. But The Guardian notes that WhatsApp has never received a request to do so. At least one commentator thinks the same could happen with the Online Safety Bill, effectively giving Ofcom a radical new power that it may never choose to wield.
But that hasn’t exactly satisfied WhatsApp, which has suggested it would rather leave the UK than comply with the bill.
Okay, so messaging apps aren’t a fan. What do other companies and campaigners have to say about the bill?
Privacy activists have also been fiercely critical of what they see as an attack on end-to-end encryption. The Electronic Frontier Foundation, Big Brother Watch, and Article 19 published an analysis earlier this year that said the only way to identify and remove child sexual exploitation and abuse material would be to monitor all private communications, undermining users’ privacy rights and freedom of expression. Similar objections were raised in another open letter last year signed by 70 organizations, cybersecurity experts, and elected officials. The Electronic Frontier Foundation has called the bill “a blueprint for repression around the world.”
Tech giants like Google and Meta have also raised numerous concerns with the bill. Google says there are practical challenges to distinguishing between illegal and legal content at scale and that this could lead to the over-removal of legal content. Meta suggests that focusing on having users verify their identities risks excluding anyone who doesn’t wish to share their identity from participating in online conversations.
Even beyond that, there are more fundamental concerns about the bill. Matthew Lesh, head of public policy at the Institute of Economic Affairs, notes that there’s simply a massive disparity between what is acceptable for children to encounter online and what’s acceptable for adults under the bill. So platforms either take on the privacy and data protection risks of asking all users to verify their age, or they default to moderating everything to a children’s standard for everyone.
That could put even a relatively safe and educational service like Wikipedia under pressure to ask for the ages of its users, which the Wikimedia Foundation’s Rebecca MacKinnon says would “violate [its] commitment to collect minimal data about readers and contributors.”
“The Wikimedia Foundation will not be verifying the age of UK readers or contributors,” MacKinnon wrote.
Okay, that’s a lot of criticism. So who’s in favor of this bill?
One group that’s been broadly supportive of the bill is children’s charities. The National Society for the Prevention of Cruelty to Children (NSPCC), for example, has called the Online Safety Bill “an urgent and necessary child protection measure” to tackle grooming and child sexual abuse online. It calls the legislation “workable and well-designed” and likes that it aims to “tackle the drivers of online harms rather than seek to remove individual pieces of content.” Barnardo’s, another children’s charity, has been supportive of the bill’s introduction of age verification for pornography sites.
Ian Russell, the father of the late Molly Russell, has called the Online Safety Bill “a really important piece of legislation,” though he’s pushed for it to go further when it comes to criminal sanctions for executives whose products are found to have endangered children’s well-being.
“I don’t think that without effective regulation the tech industry is going to put its house in order, to prevent tragedies like Molly’s from happening again,” Russell said. This sentiment appears to be shared by increasing numbers of lawmakers internationally, such as those in California who passed the Age-Appropriate Design Code Act in August last year.
Where’s the bill at these days?
As of this writing, the bill is working its way through the UK’s upper chamber, the House of Lords, after which it’ll be passed back to the House of Commons to consider any amendments made there. The government hopes to pass it at some point this summer.
Even after the bill passes, however, there will still be big decisions to make about how it works in practice. Ofcom will need to decide which services pose a high enough risk to be covered by the bill’s strictest rules and develop codes of practice for platforms to follow, including on thorny issues like how to introduce age verification for pornography sites. Only after the regulator completes this consultation process will companies know when and how to fully comply with the bill, and Ofcom has said it expects this to take months.
The Online Safety Bill has had a difficult journey through Parliament, and it’s likely to be months before we know how its most controversial aspects are going to work (or not) in practice.