My Lords, it is a great pleasure to follow two such wonderfully informed and informative speeches, and I thank the noble Lords for tabling this amendment. So that the Committee can understand my position, I should say that I wrote part of a master’s thesis on artificial intelligence 20 years ago. That is of course an age in terms of these things, but I had cause to engage with the issues of medicine and artificial intelligence just last year, when I was asked to take part in a debate on the subject. One of the things I found was that much of the language has not changed. Twenty years ago, AI was said to be almost there; now, while we have a great deal of big data, how much closer we are to actual artificial intelligence is another question. The noble Lord, Lord Clement-Jones, referred to this year’s exam results fiasco, which was very much a cautionary tale about the use of such systems.
I would position myself as a cautious sceptic of AI in general. The noble Lord, Lord Freyberg, was absolutely right in deducing that the reason I put my name down to speak on this amendment is that I am particularly concerned about the social issues associated with such systems. In the last couple of years we have seen growing awareness of the ways in which they can discriminate against populations that are already discriminated against, whether by gender or socioeconomic status, or against people from BAME and other backgrounds.
I want to draw noble Lords’ attention to an excellent article in the Journal of Information Policy from 2018, one of the early articles on this issue, entitled “How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications”. It argues that software and systems engineers, scholars, researchers and those making decisions for business, education, social services and other enterprises must ask themselves a series of questions; I would, of course, add regulators to that list. When is it appropriate to collect and use sensitive information? When does it cause harm and when does it prevent harm? My own addition to the list is: when are you not collecting information that you need? The article states:
“Answering these questions requires understanding the practical and moral sides of a process involving people, data, and computation conceived as a sociotechnical system.”
What we are looking at here is growing complexity. As the noble Lord, Lord Clement-Jones, said, the risk of a black-box AI system is that some very bad things can happen inside that black box. Furthermore, I come back to the points I made on the earlier group of amendments: we have a real issue of confidence in medicine and in these systems, and a need for transparency and independent oversight. Simply relying on what are often for-profit companies to make decisions and reassure us that things will be fine has not worked out well in the past. As we enter this age of increasing complexity, it is crucial that we do not operate on that basis in the future.
I commend the noble Lords for tabling Amendment 83 and the consequential amendments. I look forward to the Minister’s response to my comments and theirs.
In particular, I ask the Minister: if regulation of this crucial area of medicine is not to appear in the Bill, and if this amendment or something like it is not adopted, how will the Government ensure oversight?