It is a pleasure to see you in the Chair, Dame Maria. This has been a thoughtful and engaging debate on an important subject, and the contributions have raised very important issues.
I particularly thank my hon. Friend the Member for Birkenhead (Mick Whitley) for introducing this debate. I thought his opening remarks about me were uncharacteristically generous, so I had a suspicion that it did not all come from him—if he wants to blame the computer, that’s fine! As he did, I refer to my entry in
the Register of Members’ Financial Interests. My hon. Friend has a long history in the workplace and has seen how automation has changed work—particularly the kind done at Vauxhall Motors in Ellesmere Port—dramatically over many years. What we are talking about today is an extension of that, probably at a greater pace and with greater consequences for jobs than we have seen in the past.
My hon. Friend the Member for Birkenhead said there will be winners and losers in this; that is very important. We must be cognisant of sectors affected by AI where there will probably be more losers than winners, including manufacturing, transport and public administration. My hon. Friend hit the nail on the head when he said that we must have a rights-based and people-focused approach to this incredibly complicated subject. He was right to refer to the TUC paper about the issue. We cannot go far wrong if we hold to the principles and recommendations set out there.
The hon. Member for Folkestone and Hythe (Damian Collins) made an excellent contribution, showing a great deal of knowledge in this area. He is absolutely right to say that there has to be a level of human responsibility in the decision-making process. His references to AI in defence systems were quite worrying and sounded like something from the “Terminator” films. It sounds like dramatic science fiction, but it is a real, live issue that we need to address now. He is right to say that we should ensure that developers can clearly demonstrate the data on which they are basing their decisions, that the gig economy is a big part of the issue, and that the intervention of apps in the traditional employment relationship should not be used as a proxy to water down employment rights.
The hon. Member for Watford (Dean Russell) also gave a very considered speech. He summed it up when he said that this is both amazing and terrifying. We have heard of some wonderful things that can be done, but also some extremely worrying ones. He gave examples of deception, as well as of the wonderful art that can be created through AI, and encapsulated why it is so important that we have this debate today. Although the debate is about the potential impacts of AI, it is clear that change is happening now, and at a dramatic pace that we need to keep up with; the issue has been affecting workers for some time now.
When we survey the Government’s publications on the impact of AI on the labour market, it is readily apparent that they are a little behind the curve when it comes to how technologies are affecting the way work is conducted and supervised. Both the 2021 report, “The Potential Impact of Artificial Intelligence on UK Employment and the Demand for Skills”, and the White Paper published last month failed to address AI’s role in the workplace. The focus in both publications was the bigger picture, and I do not think either addressed in detail the concerns we have discussed today.
That is not to downplay the wider structural economic change that AI could bring. It has the potential to have an impact on demand for labour and the skills needed, and on the geographical distribution of work. This will be a central challenge for any Government over the next few decades. As we have heard, the analysis already
points in that direction, with the 2021 Government report estimating that 7% of jobs could be affected in just five years and 18% in 10 years, with up to 30% of jobs over 20 years facing the possibility of automation. That is millions of people who may be displaced in the labour market if we do not get this right.
I will focus my comments on the impact on individual workers, because behind the rhetoric of making the UK an AI superpower, there are statements about having a pro-innovation, light-touch and coherent regulatory framework, with a desire not to legislate too early or to place undue burdens on business. That shows that the Government are, unfortunately, content to leave workers’ protections at the back of the queue. It is telling that in last month’s White Paper—a document spanning 91 pages—workplaces are mentioned just three times, and none of those references are about the potential negative consequences that we have touched on today. As we are debating this issue now, and as the Minister is engaged on the topic, we have the opportunity to get ahead of the curve, but I am afraid that the pace of change in the workplace has completely outstripped the pace of Government intervention in recent years.
It has been four years since we saw the Government’s good work plan, which contained many proposals that might help mitigate elements of AI’s use in the workplace. The Minister will not be surprised to hear me mention the employment Bill, which has been promised on many occasions and could have been an opportunity to consider some of these issues. We need an overarching, transformative legislative programme to deal with these matters, and the many other issues around low pay and chronic insecurity in the UK labour market—and we need a Labour Government to provide that.
In the absence of direction from Government, a quiet revolution caused by AI is already under way in the workplace. Workers across a broad range of sectors have been affected by management techniques derived from the use of artificial intelligence. The role of the manager is being diluted. Individual discretion, be it the manager’s or the worker’s, has in some instances been replaced by unaccountable algorithms. As we have heard, such practices carry risks.
Reports both in the media and by researchers have found that workplaces across a range of sectors are becoming increasingly monitored and automated, and that automated decision making is becoming normalised. A report on algorithmic systems by the Institute for the Future of Work noted that this is ultimately redefining work in much narrower terms—those that can be quantified by an algorithm—with less room for the use of human judgment. Crucially, the institute found that workers were rarely involved in, or even consulted about, these types of data-driven technologies. The changes have completely altered those people’s experience of work, with greater surveillance, greater intensification and use in disciplinary procedures. Members may be aware that there is now greater use of different varieties of surveillance, including GPS, cameras, eye-tracking software, heat sensors and body-worn devices, so the activities of workers can be monitored to an extent that was hitherto unimaginable.
Of course, surveillance is not new, but the way it is now conducted reduces trust, and makes workers feel more insecure and as if they cannot dispute the evidence that the technology produces. Most at risk of that monitoring, as the Institute for Public Policy Research has said, are those in jobs with lower worker autonomy, those with lower skills, and those without trade union representation. The latter is an area where the risk increases substantially, which tells us everything that we need to know about the importance of becoming a member of a trade union. The news today that the GMB is making progress in obtaining recognition at Amazon is to be welcomed in that respect.
Increased surveillance and monitoring is not only problematic in itself; it can lead to an intensification of work. Workers in one study testified that they are expected to be conducting work that the system can measure for 95% of the working day. Time spent talking to colleagues, using the bathroom or even taking a couple of minutes to make a cup of tea will not be registered as working, and will be logged for a manager to potentially take action against the individual. That pressure cannot be conducive to a healthy workplace in the long run. It feels almost like automated bullying, with workers’ every move being monitored.
Many businesses now rely on AI-powered systems for fully automated or semi-automated decision making about task allocation, work scheduling, pay, progression and disciplinary proceedings. That presents many dangers, some of which we have talked about. Due to the complexities of the technology, AI systems can sometimes be treated as a trusted black box by those who use them. The people using them assume that the outcome that emerges from the AI system is free of bias and discrimination, and constitutes evidence for the basis of their decisions, but how does someone contest a decision if they cannot question an algorithm?
As we have heard, there is potential for algorithmic bias. AI technology can operate only on the basis of the information put into it. Sometimes human value judgments form the basis of what is fed into the AI, and of how the AI analyses it. As the hon. Member for Folkestone and Hythe mentioned, there are some famous examples, such as at Amazon, where AI was found to be systematically disregarding women’s job applications because of the way the algorithm worked. There is little transparency and a lack of checks and balances regarding how the technology can be used, so without transparency at the forefront there is a palpable risk of AI-sanctioned discrimination running riot.
I would like the Minister to commit to looking at how the technology works in the workplace at the moment, and to making an assessment of what it is being used for and its potential to discriminate against people with protected characteristics. The Data Protection and Digital Information (No. 2) Bill will create new rights where wholly automated decision making is involved, but the question is: how will someone know when a fully automated decision has been taken if they are not told about it? Is there not a risk that many employers will slot into the terms and conditions of employment a general consent to automated decision making, which will remove the need for the person to be notified altogether?
A successful AI strategy for this country should not be built on the back of the poor treatment of workers, and it is the Government’s role to create a legal and regulatory environment that shields workers from the most pernicious elements of these new technologies.
That cannot be fixed by introducing single policies that tinker at the edges; it requires a long overdue wholesale update to our country’s employment laws. As the Minister will know, our new deal for working people will set out a suite of policies that address that. Among other things, it will help to mitigate the worst effects of AI, and will introduce measures that include a right to switch off, which will guard against some of the egregious examples of AI being used to intensify people’s work.
As the organised representation of the workforce, trade unions should be central to the introduction of any new technologies into the workplace. Not only will that enable employers and their representatives to find agreeable solutions to the challenges raised by modern working practices, but it will encourage more transparency from employers as to how management surveillance and disciplinary procedures operate. Transparency has been picked up a few times and it is key to getting this right.
Artificial intelligence’s impact is already being felt up and down the country, but the Government have not been quick enough to act, and its worst excesses are already out there. The need for transparency and trust with technology is clear, and we need to make sure that that has some legislative backing. It is time for a Labour Government to clear that up, stand up for working people and bolster our labour market so that new technologies that are already with us can be used to make work better for everyone.