The UK government has published its initial consultation response on the Online Harms White Paper (see our previous post here). The proposed regulatory framework would impose a ‘duty of care’ on online services in respect of harmful content. The initial response reports on the findings of the public consultation and indicates how the legislation will be taken forward.

Online harms

The response paper confirms that the ‘duty of care’ will “only apply to companies that provide services which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing.” The government has indicated that further guidance will be provided to help companies decide whether their services fall within the scope of the new legislation.

An important clarification in the government’s response paper is that the regulatory framework will set different expectations for companies in relation to unlawful content and content which, while not unlawful, has the potential to cause harm.

Online services will need to ensure that unlawful content is removed expeditiously and that “the risk of it appearing is minimised by effective systems.” The response provides no further detail on this point, and it is unclear how this will work alongside the safe harbour provisions of the E-Commerce Directive. That said, the white paper published in April 2019 committed to increasing the responsibility of online services in a way that is compatible with those safe harbour provisions.

While the government has not defined content that is lawful but considered harmful, the consultation response gives examples: online bullying, intimidation in public life, and self-harm and suicide imagery. The regulatory framework will not require online services to remove specific pieces of such content, nor will the regulator investigate or adjudicate on individual complaints. Instead, the focus will be on online services’ wider systems and processes for dealing with these online harms.

Online services will be required to set out clearly in their terms and conditions what content is acceptable on their sites. The government also expects companies to have effective and accessible reporting mechanisms in place, allowing users to report harmful content or raise wider concerns that the company has breached its ‘duty of care’.

The government’s proposals aim to improve transparency about what content is acceptable on different online platforms. The response paper notes concerns about some online services’ existing content removal processes, which have in some cases been criticised as “opaque”. The consultation response also refers to enabling users to challenge companies when content is removed from their services.

The government has indicated that a differentiated framework will be established for online harms that materialise over private communications. As expected, some consultation responses expressed concerns that online services may have to proactively moderate private messages, which could affect users’ privacy.

The regulator

The response paper indicates that Ofcom is to be appointed as the independent regulator responsible for overseeing online services’ systems and processes. The government’s reasoning is that this allows Ofcom to build on its existing expertise and avoids fragmentation of the regulatory landscape. Ofcom already takes a risk-based approach to investigations, similar to that envisaged for the online harms regulator. Ofcom would also remain subject to the Public Sector Equality Duty in its work on online harms, and would have to consider how its approach and decisions affect people with ‘protected characteristics’.

The regulator will have a range of enforcement powers against online companies which fail to fulfil their ‘duty of care’, including warnings and notices, fines, business disruption measures, internet service provider blocking and senior management liability. It is not yet clear in what circumstances these powers will be exercised. Respondents to the consultation highlighted concerns that excessive enforcement could harm freedom of expression by encouraging online services to over-block user-generated content.

The government has also committed to giving the regulator the power to require annual transparency reports from online services. These reports would, for example, outline the prevalence of harmful content on a service and set out the measures being taken to address it.

Codes of practice

The regulator will issue codes of practice outlining the processes that online services need to adopt to demonstrate they have fulfilled their ‘duty of care’. The government has indicated there will not be a separate code of practice for each category of harmful content, recognising that this would impose an unreasonable regulatory burden.

The response paper indicates that a higher level of protection should be afforded to children than to the typical adult user. While online services will be able to decide what lawful content is acceptable on their services, they will also be expected to take reasonable steps to protect children from harmful content. The proposals refer to the ICO’s recently published Age Appropriate Design Code, which sets out standards for protecting children’s privacy online.

For online harms that pose a risk to the safety of children (and those that pose a risk to national security), the government is working with law enforcement and other stakeholders to produce interim codes of practice. These codes will be voluntary, but are intended to bridge the gap and incentivise companies to take early action before the regulator becomes operational.

The interim codes of practice, and more detailed proposals on the online harms regulation, are expected to be published in the spring. These should provide further clarity on the government’s direction for the new regulatory framework.