My Lords, the debate on this group has been a little longer, deeper and more important than I had anticipated. It requires all of us to reflect before Report on some of the implications of the things we have been talking about. It was introduced masterfully by the noble Baroness, Lady Harding, and her comments—and those from the noble Baronesses, Lady Finlay and Lady Berridge—were difficult to listen to at times. I also congratulate the Government Whip on the way he handled the situation so that innocent ears were not subject to some of that difficult listening. But the questions around the implications of virtual reality, augmented
reality and haptic technology are really important, and I hope the Minister will agree to meet with the noble Baroness, Lady Berridge, and the people she referenced to reflect on some of that.
1 pm
The noble Baroness, Lady Fox, raised some of the right questions around the balance of this debate. I am a technology enthusiast, so I will quote shortly from my mobile phone, which I use for good, although a lot of this Bill is about how technology is used for bad. I am generally of the view that we have a responsibility to put some safety rails around this technology. I know that the noble Baroness agrees, in respect of children in particular. As ever, in responding to her, I end up saying “It’s all about balance” in the same way as the Minister ends up saying “It’s all about unintended consequences”.
Amendments 283ZZA and 283ZZB in my name are, as the noble Lord, Lord Allan, anticipated, about who controls autonomous bots. I was really grateful to hear his comments, because I put down the amendments on a bit of a hunch without being that confident that I understood what I was talking about technically. He understands what he is talking about much better than I do in this regard, so it is reassuring that I might be on to something of substance.
I was put on to it by reading a New York Times article about Geoffrey Hinton, now labelled the “Godfather of AI”. The article stated:
“Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyse. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually run that code on their own”.
As a result, I went to OpenAI’s ChatGPT and asked whether it could create code. Of course, it replied that it could help me with creating code. I said, “Can you code me a Twitter bot?” It said, “Certainly, I can help you create a basic Twitter bot using Python. Here is an example of a Twitter bot that posts tweets”. Then I got all the instructions on how to do it. The AI will help me create something that can then begin to generate autonomous behaviour and activity. It is readily available to all of us now, and that should cause us some concern.
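For the record, the kind of skeleton the chatbot produces looks something like the sketch below. This is my own illustrative reconstruction, not a transcript of the model’s actual output; the tweepy library calls and placeholder credential names are assumptions about how such a bot is typically wired up.

```python
"""A minimal sketch of the sort of Twitter bot code a chatbot offers.

Illustrative only: the tweepy calls and credential placeholders are
assumptions, not the actual output received from ChatGPT.
"""
import datetime


def compose_tweet(message: str) -> str:
    """Build the text the bot would post, stamped with today's date."""
    today = datetime.date.today().isoformat()
    return f"{message} ({today})"


def post_tweet(text: str) -> None:
    """Post via the Twitter API -- requires real developer credentials."""
    import tweepy  # third-party library the sketch assumes is installed

    client = tweepy.Client(
        consumer_key="API_KEY",
        consumer_secret="API_SECRET",
        access_token="ACCESS_TOKEN",
        access_token_secret="ACCESS_SECRET",
    )
    client.create_tweet(text=text)


if __name__ == "__main__":
    # With valid credentials, this would publish a tweet autonomously.
    post_tweet(compose_tweet("Hello from an autonomous bot"))
```

The point of the example is how little stands between a plain-English request and a running autonomous agent: a few placeholder credentials are the only missing piece.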
The Bill certainly needs to clarify—as the amendment tabled by the noble Baroness, Lady Kidron, and introduced so well by the noble Baroness, Lady Harding, goes to—whether or not a bot is a user. If a bot is a user and the Minister can assure us of that, things get a lot easier. But given that it is possible to code a realistic avatar generating its own content and behaviour in the metaverse, the core question I am driving at is: who is responsible for that behaviour? Is it the person who is deemed to be controlling it, as it says in Clause 170(7), which talks about
“a person who may be assumed to control the bot or tool”?
As the noble Lord, Lord Allan, said, that will not always be straightforward once the AI itself starts to generate behaviours that are not expected by the person who might be perceived to have controlled it. No one really controls it; the creator does not necessarily
control it. I am simply offering the amendment to add “or owns it”, so that legal culpability can be clarified. It might be that the supplier of the virtual environment is culpable. These are questions to which I am seeking answers from the Minister through my amendment, so that we get clarity on how Ofcom is supposed to regulate all these potential harms in the future.
Some months ago, I went to a Speaker’s Lecture given by Stuart Russell, who delivered the Reith Lectures on AI. He talked about the programming of an AI-powered vacuum cleaner that was asked to clear up as much dirt as possible. What then plays out is that the vacuum cleaner picks a bit of dirt up off the carpet, spews it back out and picks it up again, because that is the way to maximise the objective it was programmed with. It is very difficult to anticipate the behaviour of AI if you do not get the instructions exactly right, and that is the core of what we are worried about. Again, when I asked ChatGPT for guidance on a speaking note on this question, it helpfully pointed me towards the embedded danger of bias and inequity. The AI is trained on data; we know a certain amount about the bias in that data, but it is difficult to anticipate how it will play out as the AI feeds on and generates its own data.
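The vacuum-cleaner problem can be shown in a few lines of code: reward the machine per unit of dirt collected, and dumping dirt back out to re-collect it becomes the winning strategy. This is my own toy sketch of the misspecified-objective idea, not Professor Russell’s code, and the numbers are illustrative.

```python
# Toy illustration of a misspecified objective: an agent rewarded
# per unit of dirt picked up learns that re-dirtying the carpet
# is the optimal policy. Illustrative sketch only.

def run_cleaner(dirt: int, steps: int, can_dump: bool) -> int:
    """Return total reward over `steps`; +1 for each unit picked up."""
    reward = 0
    carpet = dirt
    for _ in range(steps):
        if carpet > 0:
            carpet -= 1
            reward += 1      # rewarded for every pick-up
        elif can_dump:
            carpet += 1      # spew dirt back out -- costs nothing
    return reward

# The "honest" cleaner collects the 3 units, then idles: reward 3.
honest = run_cleaner(dirt=3, steps=10, can_dump=False)   # -> 3

# The exploiting cleaner alternates dumping and re-collecting: reward 6.
exploit = run_cleaner(dirt=3, steps=10, can_dump=True)   # -> 6
```

The objective "clear up as much dirt as possible" is satisfied perfectly by the second policy, even though it is the opposite of what the programmer wanted.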
The equity issues that can then flow are something that we need to be confident this legislation will be able to deal with. As the right reverend Prelate the Bishop of Chelmsford reminded us, when the legal but harmful elements of the Bill were taken out between draft stage and publication, we lost the assessment of future risk that was previously in place, which I think was an unintended consequence of taking those elements out. It would be great to see that assessment restored, as Amendments 139 and 195 from the right reverend Prelate the Bishop of Oxford suggest. The reporting that the noble Baroness, Lady Finlay, proposes in her amendments is important in giving us as Parliament a sense of how this is going. My noble friend Lord Stevenson tabled Amendment 286 to pay particular regard to the metaverse, and I support it.
Ultimately, the key test for the Minister is, as others have said, that tech is changing really fast. It is changing the online environment, and our relationship with it as humans, very quickly indeed; the business models will change quickly as a result and they, by and large, are likely to drive a lot of the platform behaviour. But can the regulator, as things are currently set out in this legislation, react and change quickly enough in response to that highly dynamic environment? Can we anticipate that what is inconceivable at the moment will be regulable under this Bill? If not, we need to make sure that Parliament has opportunities to revisit this. As I have said before, I strongly support post-legislative scrutiny; a permanent Joint Committee of both Houses on digital regulation, giving us a sustained body of parliamentary expertise able to keep up with this, would in my view be extremely useful to Parliament.
As a whole, I think these amendments are really helpful to the Minister and to Parliament in pointing us towards where we can strengthen the future-proofing of the Bill. I look forward to the Minister’s response.