My Lords, it is a pleasure to follow the noble Lord, Lord Browne of Ladyton, in supporting his Amendment 32, which he introduced so persuasively and expertly. A few years ago, I chaired the House of Lords Select Committee on AI, which considered the economic, ethical and social implications of advances in artificial intelligence. In our report published in April 2018, entitled AI in the UK: Ready, Willing and Able?, we addressed the issue of military use of AI and stated:
“Perhaps the most emotive and high-stakes area of AI development today is its use for military purposes”,
recommending that this area merited a “full inquiry” on its own. As the noble Lord, Lord Browne of Ladyton, made plain, regrettably, it seems not yet to have attracted such an inquiry or even any serious examination. I am therefore extremely grateful to the noble Lord for creating the opportunity to follow up on some of the issues we raised in connection with the deployment of AI and some of the challenges we outlined. It is also a privilege to be a co-signatory with the noble and gallant Lord, Lord Houghton, who has also thought so carefully about issues involving the human interface with technology.
The broad context, as the noble Lord, Lord Browne, has said, is the unknowns and uncertainties in policy, legal and regulatory terms that new technology in military use can generate. His concerns about the complications and personal liabilities to which it exposes deployed forces are widely shared by those who understand the capabilities of new technology. That is all the more so in a multilateral context where other countries may be using technologies that we would not deploy ourselves, or whose use could create potential vulnerabilities for our troops.
Looking back to our report, one of the things that concerned us more than anything else was the grey area surrounding the definition of lethal autonomous weapon systems—LAWS. As the noble Lord, Lord Browne, set out, when the committee explored the issue, we discovered that the UK’s then definition, which included the phrase
“An autonomous system is capable of understanding higher-level intent and direction”,
was clearly out of step with the definitions used by most other Governments and imposed a much higher threshold on what might be considered autonomous. This allowed the Government to say:
“the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.”
Our committee concluded that, in practice,
“this lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry.”
This was particularly in light of the fact that, at the UN Convention on Certain Conventional Weapons group of governmental experts in 2017, the UK opposed the proposed international ban on the development and use of autonomous weapons. We therefore recommended that the UK’s definition of autonomous weapons should be realigned to be the same as, or similar to, that used by the rest of the world. The Government, in their response to the report of the committee in June 2018, replied:
“The Ministry of Defence has no plans to change the definition of an autonomous system.”
They did say, however:
“The UK will continue to actively participate in future GGE meetings, trying to reach agreement at the earliest possible stage.”
Later, thanks to the Liaison Committee, we were able on two occasions last year to follow up on progress in this area. On the first occasion, the Liaison Committee’s letter of last January asked:
“What discussions have the Government had with international partners about the definition of an autonomous weapons system, and what representations have they received about the issues presented with their current definition?”
The Government replied:
“There is no international agreement on the definition or characteristics of autonomous weapons systems. Her Majesty’s Government has received some representations on this subject from Parliamentarians”.
They went on to say:
“The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align. As such, the UK does not intend to change its definition.”
So, no change there until December 2020, when the Prime Minister announced the creation of the Autonomy Development Centre to
“accelerate the research, development, testing, integration and deployment of world-leading AI,”
and the development of autonomous systems.
In our follow-up report, AI in the UK: No Room for Complacency, which was published in the same month, we concluded:
“We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.”
The response to this last month was a complete about-turn by the Government, who said:
“We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.”
They went on to say:
“the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies. Most recently, the UK accepted NATO’s latest definitions of ‘autonomous’ and ‘autonomy’, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee we have provided a table setting out UK and some international definitions of key terms.”
6.30 pm
The NATO definition sets a much lower bar for what is considered autonomous, defining it as a
“system that decides and acts to accomplish desired goals, within defined parameters, based on acquired knowledge and … an optimal but potentially unpredictable course of action.”
The Government went on to say:
“The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.”
I apologise for taking noble Lords at length through this exchange of recommendation and response but, if nothing else, it demonstrates the terrier-like quality of Lords Select Committees in getting positive responses from government. This latest response is extremely welcome. In the context of the amendment from the noble Lord, Lord Browne, and the issues that we have raised, we need to ask a number of further questions. What are the consequences of the MoD’s thinking? What is the defence AI strategy designed to achieve? Does it include the kind of inquiry that our Select Committee was asking for? Now that we subscribe to the common NATO definition of LAWS, will it deal specifically with the liability and international and domestic legal and ethical framework issues which are central to this amendment? If not, a review of the type envisaged by this amendment is essential.
The final report of the US National Security Commission on Artificial Intelligence, referred to by the noble Lord, Lord Browne, has taken a comprehensive approach to the issues involved. He has quoted three very important conclusions and asked whether the Government agree in respect of our own autonomous weapons. Three further crucial recommendations were made by the commission:
“The United States must work closely with its allies to develop standards of practice regarding how states should responsibly develop, test, and employ AI-enabled and autonomous weapon systems”,
and the
“United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies.”
Finally, of particular importance in this context,
“countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems and encourage safety and compliance with IHL when discussing their development, deployment, and use”.
Will the defence AI strategy or indeed the integrated review undertake as wide an inquiry, and would it come to the same or similar conclusions?
The MoD seems to have moved some way towards getting to grips with the implications of autonomous weapons in the last three years but, if it has not yet considered the issues set out in the amendment, it clearly should, as soon as possible, update the legal frameworks for warfare in the light of this new technology, or our service personnel will be at considerable legal risk. I hope it will move further in response to today’s short debate.