It is a pleasure to serve under your chairship this afternoon, Dame Maria, and I congratulate the hon. Member for Birkenhead (Mick Whitley), both on securing this very important debate and on his excellent speech.
Artificial intelligence is an enabling technology. It is driving the digital age, but it is based on data points that are gathered by computer systems and processed in order to make decisions. It still requires a huge amount of human intervention in determining what data will be drawn on and therefore what decisions should be made. Consequently, there has to be a level of human responsibility as well.
We can see already from the development of AI that it is not just a question of computer systems learning from existing patterns of behaviour; they are also effectively thinking for themselves. The development of AI in chess is a good example of that. Not only are AI systems learning to make the moves that a human would make, always selecting the perfect combination and therefore being much more successful; when given the command to win the game, they have also developed ways of playing that are unique, that the human mind has not thought of or popularised, and that are yet more efficient at winning. That is very interesting for those interested in chess. Perhaps not everyone is, but it shows the power of AI to make autonomous decisions based on the data and information it is given. Humans invented the game of chess, but AI can learn to play it in ways not thought of by humans.
The application of AI in the defence space is even more scary, as the hon. Member for Birkenhead touched on. AI-enabled weapons systems can be aggressive, make decisions quickly and behave in unpredictable ways. The human strategist is not able to keep pace with them, and we would require AI-driven defence systems to protect ourselves from them. It would be alarming to live in a world where aggressive technology driven by AI could be combated only by AI, with no human intervention in the process. It is scary to think of a security situation, like the Cuban missile crisis in the 1960s, in which the strategies were pursued solely by AI. We will therefore have to think as we do in other areas of warfare, where we have bans on certain types of chemical weapons: certain systems are considered so potentially devastating that they will not be used, and there are moratoriums on their use and deployment. When thinking about AI in the defence space, we may well have to consider what safeguards to build into it as well. We also need to think about the responsibility of companies that develop AI systems purely for their commercial interests. What responsibility lies on them for the systems they have created?
The hon. Gentleman was right to say that this is like an industrial revolution. With industrial revolutions comes great change. People’s ways of living and working
can be disrupted, and they are replaced by something new. We cannot yet say with certainty what that something new could be. There are concerns, which I will come to in a moment, about the regulation of AI. There could be amazing opportunities, too. One can imagine working or classroom environments where children could visit historical events. I asked someone who works in education development how long it could take before children studying the second world war could put on a headset, sit in a virtual House of Commons and watch Winston Churchill deliver one of his famous speeches, as if they were actually sitting there. We are talking about that sort of technology being possible within the next decade.
The applications for learning are immense. Astronauts training for missions to the International Space Station do so in metaverse-style, AI-driven virtual spaces. At the same time as we think about the good things that the technology can do, we should also consider the fact that very bad spaces could be created. In our debates on the Online Safety Bill, we have been concerned about abusive online behaviour. What if such abusive behaviour took place in a video chatroom, a virtual space that looked just as real as this room? Who would be responsible for that?
It is incumbent on the companies that develop these new technologies and systems to take responsibility for the output of those systems. The onus should be on the companies to demonstrate that what they are developing is safe. That is why my right hon. Friend the Chancellor of the Exchequer was right to set out in the Budget statement last year that the Government would fund a new AI sandbox. We have seen AI sandboxes developed in the EU. In Washington state in the United States, AI sandboxes are used to research new facial recognition technologies, a particularly sensitive area. The onus should be on the developer. The role of the regulator should be to say, “There are certain guidelines you work within, and certain things we might consider unsafe or unethical. You develop your technologies and new systems and put them through a sandbox trial. You make it easy for the regulator to ask about the data you are drawing on, the decisions the system you have put in place is making, the outcomes it is creating and whether they are safe.”
We have already seen that behaviour learned from data can create unfair biases in systems. There was a case in which Amazon used AI to sift CVs for recruitment. The AI learned that those hired for the roles were largely men, and therefore discarded the CVs of women applicants because it assumed they would not be qualified. We should be concerned about biases built into data systems being exacerbated by AI.
Some people talk about AI as if it were a future technology—something that is coming—but it exists today. Every one of us experiences or interacts with AI in some way. For a lot of people, the most obvious way is through apps. The business model of social media apps is driven by recommendation, which is an AI-driven process. The system—Facebook, TikTok, Instagram or whatever it is—profiles the user based on their data and recommends content to keep them engaged, and it is AI driving those recommendation tools.
We have to be concerned about whether those systems create unfair practices and behaviours in the workplace. That is why the hon. Member for Birkenhead is right to raise this issue. If a gig economy worker—a taxi driver or a delivery courier—is paid only when they are in receipt of jobs on the app, does the app create a false incentive for them to be available for work all the time? Do they have to commit to being available to the app for most of the day because, if they do not, it drives the work to people who have high recommendation scores because they are always available? Do people who cannot make themselves available all the time, and who do not get paid for waiting time when they use such apps, find that the amount they can earn is much less? If apps become the principal way in which a lot of work is allocated, AI systems, which are built to be efficient and to make it easy for people to access the labour market, could create biases that favour some workers over others. People with other jobs or family commitments, in particular, might not be able to make themselves available.
We should consider not just the way the technology works but the rights that citizens and workers have if their job is based on using those apps. The employer—the app developer—should treat the people who work for them as employees, rather than as merely freelance agency workers who happen to be available at a particular time of day. There is a working relationship there that should be honoured and respected.
The basic principle that we should apply when we think about the future of AI and its enormous potential to create growth and new jobs, and build fantastic new businesses, is that the rights that people enjoy today—their rights as citizens and employees—should be translated into the future world of technology. A worker should not lose their working rights simply because their relationship with their employer or their customer is through an app, and because that experience is shaped by the collection and processing of data. Ultimately, someone is doing that processing, and someone has created that system in order to make money from it. The people doing that need to be responsible for the technology they have created.