Online Safety Bill

My Lords, this has been a grim but important debate to open the Committee’s proceedings today. As my noble friend Lady Harding of Winscombe and others have set out, some of the issues and materials about which we are talking are abhorrent indeed. I join other noble Lords in thanking my noble friend Lord Harlech for his vigilance and consideration for those who are watching our proceedings today, to allow us to talk about them in the way that we must in order to tackle them, but to ensure that we do so sensitively. I thank noble Lords for the way they have done that.

I pay tribute also to those who work in this dark corner of the internet to tackle these harms. I am pleased to reassure noble Lords that the Bill has been designed in a way that responds to emerging and new technologies that may pose a risk of harm. In our previous debates, we have touched on explicitly naming certain technologies and user groups or making aspects of the legislation more specific. However, one key reason why the Government have been resistant to such specificity is to ensure that the legislation remains flexible and future-proofed.

The Bill has been designed to be technology-neutral in order to capture new services that may arise in this rapidly evolving sector. It confers duties on any service that enables users to interact with each other, as well as search services, meaning that any new internet service that enables user interaction will be caught by it.

Amendment 125, tabled by the noble Baroness, Lady Kidron—whose watchful eye I certainly feel on me even as she takes a rare but well-earned break today—seeks to ensure that machine-generated content, virtual reality content and augmented reality content are regulated content under the Bill. I am happy to confirm to her and to my noble friend Lady Harding who moved the amendment on her behalf that the Bill is designed to regulate providers of user-to-user services, regardless of the specific technologies they use to deliver their service, including virtual reality and augmented reality content. This is because any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt. “Content” is defined very broadly in Clause 207(1) as

“anything communicated by means of an internet service”.

This includes virtual or augmented reality. The Bill’s duties therefore cover all user-generated content present on the service, regardless of the form this content takes, including virtual reality and augmented reality content. To state it plainly: platforms that allow such content—for example, the metaverse—are firmly in scope of the Bill.

The Bill also ensures that machine-generated content on user-to-user services created by automated tools or machine bots will be regulated by the Bill where appropriate. Specifically, Clause 49(4)(b) means that machine-generated content is regulated unless the bot or automated tool producing the content is controlled by the provider of the service. This approach ensures that the Bill covers scenarios such as malicious bots on a social media platform abusing users, or when users share content produced by new tools, such as ChatGPT, while excluding functions such as customer service chatbots which are low risk. Content generated by an artificial intelligence bot and then placed by a user on a regulated service will be regulated by the Bill. Content generated by an AI bot which interacts with user-generated content, such as bots on Twitter, will be regulated by the Bill. A bot that is controlled by the service provider, such as a customer service chatbot, is out of scope; as I have said, that is low risk and regulation would therefore be disproportionate. Search services using AI-powered features will be in scope of the search duties.

The Government recognise the need to act both to unlock the opportunities and to address the potential risks of this technology. Our AI regulation White Paper sets out the principles for the responsible development of AI in the UK. These principles, such as safety and accountability, are at the heart of our approach to ensuring the responsible development and use of artificial intelligence. We are creating a horizon-scanning function and a central risk function which will enable the Government to monitor future risks.

The Bill does not distinguish between the format of content present on a service. Any service that allows its users to encounter content generated, uploaded or shared by other users is in scope unless exempt, regardless of the format of that content. This includes virtual and augmented reality material. Platforms that allow such content, such as the metaverse, are firmly in scope of the Bill and must take the required steps to protect their users from harm. I hope that gives the clarity that my noble friend and others were seeking and reassurance that the intent of Amendment 125 is satisfied.

The Bill will require companies to take proactive steps to tackle all forms of online child sexual abuse, including grooming, live streaming, child sexual abuse material and prohibited images of children. If AI-generated content amounts to a child sexual exploitation or abuse offence in the Bill, it will be subject to the illegal content duties. Regulated providers will need to take steps to remove this content. We will shortly bring forward, and have the opportunity to debate in Committee, a government amendment to address concerns relating to the sending of intimate images. This will cover the non-consensual sharing of manufactured images—more commonly known as deepfakes. The possession and distribution of altered images that appear to be indecent photographs of children is already covered by the indecent images of children offences, which are very serious offences with robust punishment in law.

1.15 pm

The noble Baroness, Lady Finlay of Llandaff, asked about an issue touched on in Amendment 85C. Under their illegal content safety duties, companies must put in place safety measures that mitigate and manage the risks identified in their illegal content risk assessment. As part of this, in-scope services such as Meta will be required to assess the level of risk of their service being used for the commission or facilitation of a priority offence. They will then be required to mitigate any such risks. This will ensure that providers implement safety by design measures to mitigate a broad spectrum of factors that enable illegal activity on their platforms. This includes when these platforms facilitate new kinds of user-to-user interactions that may result in offences manifesting themselves in new ways online.

Schedules 5, 6 and 7, which list the priority offences, are not static lists and can be updated. To maintain flexibility and to keep those lists responsive to emerging harms and legislative changes, the Secretary of State has the ability to designate additional offences as priority offences via statutory instrument, subject to parliamentary scrutiny. It should be noted that Schedule 7 already contains several sexual offences, including extreme pornography, so-called revenge pornography and sexual exploitation, while Schedule 6 is focused solely on child sexual abuse and exploitation offences. Fraud and financial offences are also listed in Schedule 7. In this way, these offences are already captured, meaning that all in-scope services must take proactive measures to tackle these types of content. These schedules have been designed to focus on the most serious and prevalent offences, where companies can take effective and meaningful action. They are, therefore, primarily focused on offences that can be committed online, so that platforms are able to take effective steps proactively to identify and tackle such offences. If we were to add offences to these lists that could not be effectively tackled, it would risk spreading companies' resources too thinly and diluting their efforts to tackle the offences we have listed in the Bill.

The Bill establishes a differentiated approach to ensure that it is proportionate to the risk of harm that different services pose. Category 1 services are subject to additional duties, such as transparency, accountability and free speech duties, as well as duties such as protections for journalistic and democratic content. These duties reflect the influence of the major platforms over our online democratic discourse. The designation of category 1 services is based on how easily, quickly and widely user-generated content is disseminated. This reflects how those category 1 services have the greatest influence over public discourse because of their high reach. Requiring all companies to comply with the full range of category 1 duties would impose a disproportionate regulatory burden on smaller companies, which do not exert the same amount of influence over public discourse. This would divert their resources away from the vital task of tackling illegal content and protecting children.

The noble Baroness, Lady Finlay, also asked about virtual training grounds. Instruction or training for terrorism is illegal under existing terrorism legislation, and terrorism is listed as a priority offence in this Bill. Schedule 5 to the Bill lists the terrorism offences that constitute priority offences. These are drawn from existing terrorism legislation, including the Terrorism Act 2000, the Anti-terrorism, Crime and Security Act 2001 and the Terrorism Act 2006. Section 6 of the 2006 Act covers instruction or training for terrorism and Section 2 of that Act covers dissemination of terrorist publications. Companies in scope of the Online Safety Bill will be required to take proactive steps to prevent users encountering content that amounts to an offence under terrorism legislation.

Amendments 195, 239, 263, 241, 301 and 286 seek to ensure that the Bill is future-proofed to keep pace with emerging technologies, as well as ensuring that Ofcom is able to monitor and identify new threats. The broad scope of the Bill means that it will capture all services that enable user interaction as well as search services, enabling its framework to continue to apply to new services that have not yet been invented. In addition, the Government fully agree that Ofcom must assess future risks and monitor the emergence of new technologies. That is why the Bill already gives Ofcom broad horizon-scanning and robust information-gathering powers, and why it requires Ofcom to carry out extensive risk assessments. These will ensure that it can effectively supervise and regulate new and emerging user-to-user services.

Ofcom is already conducting extensive horizon scanning and I am pleased to confirm that it is planning a range of research into emerging technologies in relation to online harms. The Bill also requires Ofcom to review and update its sectoral risk assessments, risk profiles and codes of practice to ensure that those reflect the risks and harms of new and emerging technology. The amendments before us would therefore duplicate existing duties and powers for Ofcom. In addition, as noble Lords will be aware, the Bill already has built-in review mechanisms to ensure that it works effectively.

My right honourable friends the Prime Minister and the Secretary of State for Science, Innovation and Technology are clear that artificial intelligence is the defining technology of our time, with the potential to bring positive changes, but also that the success of this technology is founded on having the right guardrails in place, so that the public can have the confidence that artificial intelligence is being used in a safe and responsible way. The UK’s approach to AI regulation will need to keep pace with the fast-moving advances in this technology. That is why His Majesty’s Government have deliberately adopted an agile response to unlock opportunities, while mitigating the risks of the technology, as outlined in our AI White Paper. We are engaging extensively with international partners on these issues, which have such profound consequences for all humankind.

Clause 159 requires the Secretary of State to undertake a review into the operation of the regulatory framework between two and five years after the provisions come into effect. This review will consider any new emerging trends or technologies, such as AI, which could have the potential to compromise the efficacy of the Bill in achieving its objectives. I am happy to assure the noble Viscount, Lord Colville of Culross, and the right reverend Prelate the Bishop of Chelmsford that the review will cover all content and activity being regulated by the Bill, including legal content that is harmful to children and content covered by user-empowerment tools. The Secretary of State must consult Ofcom when she carries out this review.

About this proceeding contribution

Reference: 830 cc1010-3
Session: 2022-23
Chamber / Committee: House of Lords chamber