The UK Online Safety Act (the 'Act'), heralded as a progressive step toward internet regulation, has become law amid significant concerns. The Act received Royal Assent on 26 October 2023, just after the EU Digital Services Act. The EU also has further legislation with regard to child sexual abuse material ('CSAM') in the pipeline.

The Act's final version has sparked concern about its potential impact on user privacy and online security. The Act covers a wide range of illegal content that platforms are mandated to address. It imposes a duty of care on platforms concerning the content accessed by users, particularly children. Additionally, a divisive requirement has been included that obliges messaging platforms to scan users' messages for illegal material. This provision has drawn objections from tech companies and privacy advocates, who view it as an unwarranted assault on encryption. On top of understanding the Act itself, businesses also have to understand what the regulator, Ofcom, expects of them. Ofcom has so far issued over 1,700 pages of guidance in its 'Consultation: Protecting People from Illegal Harms Online' (here). This is the first of four major consultations that Ofcom will publish over the coming 18 months as it establishes new guidance and regulations for businesses to follow.

Various companies, ranging from major tech corporations to smaller platforms and messaging apps, are already required to adhere to a comprehensive set of new regulations. For example, the regulations mandate age-verification processes for users. Notably, Wikipedia, the eighth most-visited website in the UK, has declared that it cannot comply with the Act because doing so would violate the Wikimedia Foundation's principles on collecting data about its users. Additionally, platforms must ensure that younger users are shielded from age-inappropriate content, encompassing material such as pornography, cyberbullying, and harassment. How to achieve this has been much debated. Platforms are also obligated to provide risk assessments concerning potential threats to children on their services. Furthermore, platforms are expected to establish user-friendly channels through which parents can report concerns effectively. Companies are mandated to promptly remove inappropriate content from their platforms, along with fraudulent advertisements. These regulations aim to create a safer online environment, particularly for vulnerable users, but compliance is seen by some as overly onerous.

Under the Act, larger platforms are obligated to monitor potentially harmful, although not explicitly illegal, content. They must implement their standards consistently, a move criticised by free-speech advocates, who argue that this grants private companies excessive control over acceptable online discourse. In contrast, some see this approach as a way for Big Tech to evade accountability for spreading falsehoods. The debate on this Act, including its impact on free speech, seems certain to last for some time.

One of the most contentious aspects of the Act is s. 122, which seemingly compels companies to scan users' messages for illegal content. This requirement poses a significant challenge to platforms that use end-to-end encryption, such as WhatsApp and Signal. Critics argue that implementing this clause might necessitate breaking encryption, leading to potential privacy breaches and compromised data security. The only way to comply would be to put so-called 'client-side scanning' software on users' devices, which would examine messages before they are sent, undermining the encryption. Despite objections raised by experts and tech companies, this section remains part of the law, although the government has suggested that it will not use the power while the technology needed to exercise it is not yet available. Nonetheless, it raises serious concerns among privacy advocates and champions of free speech.

An example of this concern can be seen in the statement of Matthew Hodgson, CEO of encrypted messaging company Element, who said:

"[The Act] gives Ofcom, as a regulator, the ability to obligate people like us to go and put third-party content monitoring [on our products] that unilaterally scans everything going through the apps. That's undermining the encryption and providing a mechanism where bad actors of any kind could compromise the scanning system in order to steal the data flying around the place."

Various other providers of services that rely on end-to-end encryption, including Signal and Meta (owner of WhatsApp), have threatened to withdraw their services from the UK. Now that the Act is law, it remains to be seen how it will actually be enforced, and whether messaging apps do opt to withdraw from the UK market. The UK government has claimed that it will not force platforms to adopt non-existent technology to scan users' messages, but it would technically have the power to do so under the Act.

The Act also imposes several new obligations on online platforms, including age verification for users, preventing minors from accessing inappropriate content, and promptly removing illegal posts. Failure to comply with these obligations could result in hefty fines, with penalties of up to £18 million or 10% of the platform owner's group's annual revenue, whichever is higher.
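
By way of illustration, the cap is the greater of the two figures, so for larger groups the revenue-based limb will dominate. The sketch below is a minimal, purely illustrative calculation; the revenue figure is hypothetical and the function is not part of the Act or Ofcom's guidance.

```python
def max_penalty_gbp(group_annual_revenue_gbp: float) -> float:
    """Upper bound on a fine under the Act: the greater of GBP 18m
    or 10% of the group's annual revenue (illustrative only)."""
    FIXED_CAP = 18_000_000
    REVENUE_SHARE = 0.10
    return max(FIXED_CAP, REVENUE_SHARE * group_annual_revenue_gbp)

# Hypothetical example: a group with GBP 2bn annual revenue faces a
# maximum fine of GBP 200m, not GBP 18m.
print(f"{max_penalty_gbp(2_000_000_000):,.0f}")  # 200,000,000
```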

The Act's enforcement has been entrusted to the UK's telecommunications regulator, Ofcom. While the government claims that the legislation aims to protect free speech and empower adults, sceptics fear that it may inadvertently pave the way for increased surveillance powers. What is clear from the voluminous Ofcom guidance is that compliance will require a significant, focused effort and will be a cost to business. The ongoing debate over the government's stance on encryption, and over how online platforms strike a balance between user safety and privacy rights, remains contentious. The outcome of this legislation holds significant implications for the future landscape of online communication and individual privacy in the United Kingdom.

Ofcom guidance following the Online Safety Act

Companies are required to take stringent measures to protect users from illegal content online, as outlined by the online safety regulator, Ofcom. Under the Online Safety Act, Ofcom has released draft Codes of Practice for social media, gaming, pornography, search, and sharing sites. The focus is on safeguarding children from harmful content, including child sexual abuse, grooming, and pro-suicide material. Ofcom's role is to compel firms to address the causes of online harm and make their services safer, without making decisions about individual pieces of content. While there is not yet any Ofcom guidance specific to schools or other educational establishments, such establishments will need to pay attention to the guidance that does exist to ensure that they comply with the Online Safety Act.

To combat child sexual abuse and grooming, larger and higher-risk services must implement default measures, such as preventing children from appearing in suggested friend lists and restricting their visibility in connection lists. Hash matching technology should be used to detect and remove child sexual abuse material, and automated tools should identify URLs hosting illegal content. Additionally, services must provide crisis prevention information in response to suicide-related search requests.
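
Ofcom does not prescribe a particular implementation, but hash matching generally works by comparing a fingerprint of uploaded content against lists of known material maintained by bodies such as the Internet Watch Foundation. The sketch below is a deliberately simplified illustration of that principle using an exact cryptographic hash; production systems use perceptual hashing (for example Microsoft's PhotoDNA) so that altered copies still match, and every name and value here is hypothetical.

```python
import hashlib

# Hypothetical block list of hashes of known illegal images. In practice
# such lists are maintained by specialist bodies and use perceptual hashes
# rather than SHA-256; the value below is a placeholder.
KNOWN_ILLEGAL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the uploaded image's hash appears on the block list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

def handle_upload(image_bytes: bytes) -> str:
    # A matched upload would be blocked and reported; anything else proceeds
    # to the service's normal moderation flow.
    return "block_and_report" if matches_known_material(image_bytes) else "allow"
```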

The draft codes also propose measures to combat fraud and terrorism. Services should deploy keyword detection, remove posts linked to stolen credentials, and verify accounts to reduce users' exposure to fake accounts. All services must block accounts run by proscribed terrorist organisations. To mitigate the risk of illegal harm, services should have an accountable person, well-resourced content moderation teams, easy reporting and blocking mechanisms, and safety tests for algorithms. Clearly, a considerable investment in compliance is needed.

Ofcom is still consulting on the draft documents, engaging with industry experts, and collecting feedback to finalise the regulations, so monitoring of new guidance is needed. Services must conduct risk assessments. The implementation of the new online safety law will occur in phases, with additional protections for children from harmful content planned for consultation in spring 2024. Companies failing to comply may face enforcement actions and fines.

Ofcom's proposed measures

Ofcom has provided an overview of the proposals presented in its 'Illegal Harms' consultation and specified the services to which they apply. These proposals apply to user-to-user ('U2U') services, and to search services. Ofcom goes into more detail about U2U services, and these appear to be the primary focus of concern.

Measures proposed for U2U Services:

  • The proposed measures for U2U services are presented in a table in Ofcom's 'at a glance' summary (here – starting at page 2). Each row signifies a different measure, organised based on the discussion in various chapters of the consultation and corresponding draft Codes.
  • The suitability of certain measures for a particular service depends on its size and risk level. Services are categorised into two groups: Large services, defined as those with an average user base exceeding 7 million per month in the UK (c. 10% of the population), and Smaller services, which are all other services, including services provided by small and micro businesses.
  • Within each size category, services are further classified into:
    • 'Low risk': Denotes services assessed as low risk for all types of illegal harm in their risk assessment.
    • 'Specific risk': Refers to services assessed as medium or high risk for a particular kind of harm, for which specific measures are proposed. Different harm-specific measures are suggested based on the identified risk.
    • 'Multi risk': Represents services facing significant risks for multiple illegal harms. Additional measures are proposed for these services, targeting illegal harms more broadly, without focusing on specific risks. A service qualifies as multi-risk if it is assessed as medium or high risk for at least two different kinds of harm from the 15 priority illegal harms outlined in the Risk Assessment Guidance (a simplified sketch of this classification logic appears after this list).
  • Measures may be applicable to the same service from both specific risk and multi-risk columns, contingent upon the identified medium or high risk for certain types of harm. If a service is medium or high risk for all kinds of harm, all measures from the specific risk and multi-risk columns may be relevant.

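To illustrate how these categories combine, the sketch below encodes the classification described above in simplified form. The 7 million monthly UK user threshold and the 'at least two medium- or high-risk harms' test for multi-risk are taken from the consultation summary; the function name, data shapes and example figures are purely illustrative and not part of Ofcom's guidance.

```python
from typing import Dict, List

LARGE_SERVICE_THRESHOLD = 7_000_000  # average monthly UK users (c. 10% of the population)

def classify_service(avg_monthly_uk_users: int,
                     harm_risk_levels: Dict[str, str]) -> Dict[str, object]:
    """Classify a U2U service along the size and risk axes described above.

    harm_risk_levels maps each of the 15 priority illegal harms to its
    assessed level from the risk assessment: 'low', 'medium' or 'high'.
    """
    size = "large" if avg_monthly_uk_users > LARGE_SERVICE_THRESHOLD else "smaller"

    # Harms assessed as medium or high attract the harm-specific measures.
    specific_risks: List[str] = [harm for harm, level in harm_risk_levels.items()
                                 if level in ("medium", "high")]

    return {
        "size": size,
        "low_risk": not specific_risks,          # low risk for all kinds of harm
        "specific_risks": specific_risks,        # harm-specific measures apply per harm
        "multi_risk": len(specific_risks) >= 2,  # broader cross-cutting measures also apply
    }

# Hypothetical example: a smaller service assessed as medium risk for grooming
# and high risk for fraud would be 'smaller' and multi-risk, and would also
# attract the harm-specific measures for both grooming and fraud.
print(classify_service(250_000, {"grooming": "medium", "fraud": "high"}))
```
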
There are additional measures proposed for search services in a second table in the summary (here – starting at page 9).

It is worth noting that the criteria for qualifying as a Large Service mean that there will not actually be many Large Services in the UK. The following services are likely to be included:

  • Alphabet (Google Search, Google Maps, Google Earth, Gmail, YouTube)
  • Meta (WhatsApp, Facebook, Instagram)
  • X (formerly known as Twitter)
  • Amazon
  • eBay
  • Wikimedia (Wikipedia)
  • The NHS
  • The BBC (BBC News, BBC Weather, BBC iPlayer)
  • Microsoft
  • Apple

Clearly, online safety regulation is still evolving, but non-compliance already poses risks to businesses, so it makes sense to start addressing compliance now.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.