End of an Era: The Online Safety Bill and Tech Self-Regulation

24 Mar 2022

What is the Online Safety Bill and how will it change the future of tech regulation?

In this blog, Samuel Piña takes a dive into how the Government’s solution to the growing issue of online regulation is shaping future policy in the sector.

Exposure to harmful material online presents an ever-increasing challenge to policymakers. As our technologies become more powerful, and the possibilities of how we send and receive information begin to drift into the fantastical, as exhibited by the capabilities of the Metaverse, the Online Safety Bill presents the Government’s solution to the growing issue of online regulation. Tech companies have long policed their own conduct, and the Bill now marks an end to that self-regulation.

In April 2019, the Online Harms White Paper was published, setting out the principle that internet companies should have a duty of care to tackle harmful online content. Unveiled in Parliament recently as a set of “new world-leading online safety laws”, the newly minted Bill aims to limit people’s exposure to illegal content while safeguarding freedom of speech.

Why now?

A variety of domestic and international events have precipitated increasingly urgent calls for the Bill, most recently the spread of Russian disinformation on the internet. Misleading content is also thought to have undermined public health through the propagation of Covid-19 conspiracy theories – difficult to mitigate without the right legal tools at critical moments in a health crisis. Harmful online content was also at the centre of the tragic case of Molly Russell, who took her own life after viewing content on Instagram linked to self-harm and suicide.

Such cases have fuelled calls to extend the powers of the Bill and bring to a close the “damaging era of tech self-regulation”, which failed to prevent the creation and dissemination of harmful content in the digital domain.

Digitally disruptive – what the Bill boils down to

The most forceful part of the Bill makes executives of tech companies criminally liable for failures to comply with certain elements of the new statute. ‘Category 1 services’, such as Facebook and Twitter, are defined as “high risk and high reach” because of their capacity to spread information to large audiences, and they fall squarely within the scope of the Online Safety Bill. Such services will be subject to a new code of practice – to be outlined by Ofcom.

The Bill makes it the duty of these services, among others, to mitigate or remove content shared via their platforms that promotes online harms, including hate crime, encouraging or assisting suicide, revenge porn, harassment and stalking, and threats of violence. These harms are listed under Schedule 4 of the Online Safety Bill.

Services must also prevent fraud by banning adverts that target vulnerable people online. To identify the relevant harms, Ofcom is to create a ‘register of risks and risk profiles’ and later publish it under Part 7, Chapter 3 of the Bill.

Also notable is the criminalisation of ‘cyberflashing’. The Bill makes this an offence attracting up to two years’ imprisonment. This is part of a wider push by the Government to criminalise, and bring to justice, acts that typically target women and girls, with more resources said to be allocated to the Crown Prosecution Service for this purpose. The aim is to ensure that criminal law keeps pace with the new methods of sexual harassment that technology makes possible, taking a perpetrator-focused approach rather than placing increased responsibility on the platforms.

Bigger guns in the Ofcom arsenal

The Bill has also breathed new purpose into Ofcom, which will serve as the regulator for the proposed changes. Ofcom will be able to demand data from tech companies that facilitate the distribution of online content and, under the new provisions, is permitted to enter company premises to access data and equipment if information requests are not fulfilled in a timely manner.

The Bill also states that ‘Category 1’ companies – those that host popular social media platforms – will have to carry out risk assessments of what harmful content may arise on their sites. These platforms will have a responsibility to remove or limit certain types of content – known under the Bill as ‘legal but harmful’ content – and to make this clear in their terms and conditions. Ofcom may fine companies that fail to meet their responsibilities up to 10 per cent of their global annual turnover, or even block their sites.

While executives will hold the power and responsibility to remove harmful content, consumers will hold a right of appeal, allowing individuals to contest the verdict issued by the social media platform if they think their post has been undeservedly removed. The process seems clear cut, but social media platforms risk being flooded with endless removals and appeals. In the age of social media bots and spam, this could result in a system that fails to identify and tackle the true dangers among the false flags before the damage is done.

The future of the Bill

The Bill retains a provisional feel, and the details of the timeframe for these information requests, and of how the regime will affect other jurisdictions, will evolve through the primary and secondary legislative stages. Both the technology companies targeted by the Bill and the law firms gearing up to represent them against Ofcom will have to wait a little longer to find out whether the new legislation will lead to a stark increase in litigation.

In addition, the categories of ‘legal but harmful’ content remain to be set out in secondary legislation once the Online Safety Bill is enacted.