The Ministry of Defence (MoD) has unveiled its Defence artificial intelligence strategy outlining how the UK will work closely with the private sector to prioritise research, development and experimentation in artificial intelligence (AI) to “revolutionise our Armed Forces capabilities.”
Published on 15 June 2022 during London Tech Week’s AI Summit, the strategy aims to make the MoD the “most effective, efficient, trusted and influential Defence organisation for our size” when it comes to AI.
The strategy’s four main objectives are: to transform the MoD into an AI-ready organisation; to adopt and exploit AI at pace and scale for defence advantage; to strengthen the UK’s defence and security AI ecosystem; and to shape global AI developments to promote security, stability and democratic values.
A policy document on the ambitious, safe and responsible use of AI, developed in partnership with the government’s Centre for Data Ethics and Innovation (CDEI) and published alongside the strategy, sets out five principles to promote the ethical development and deployment of AI by the military.
The five principles are human-centricity, responsibility, understanding, bias and harm mitigation, and reliability.
The MoD previously published a data strategy for defence on 27 September 2021, which set out how the organisation will ensure data is treated as a “strategic asset, second only to people”, as well as how it will enable that to happen at pace and scale.
“We intend to exploit AI fully to revolutionise all aspects of MoD business, from enhanced precision-guided munitions and multi-domain Command and Control to machine speed intelligence analysis, logistics and resource management,” said Laurence Lee, second permanent secretary of the MoD, in a blog published ahead of the AI Summit, adding that the UK government intends to work closely with the private sector to secure investment and spur innovation.
“For MoD to retain our technological edge over potential adversaries, we must partner with industry and increase the pace at which AI solutions can be adopted and deployed throughout defence.
“To make these partnerships a reality, MoD will establish a new Defence and National Security AI network, clearly communicating our requirements, intent, and expectations and enabling engagement at all levels. We will establish an industry engagement team in the Defence AI Centre [DAIC] to enable better defence understanding and response to the AI sector. It will also promote the best and brightest talent and exchange of expertise between defence and industry.”
According to the strategy, overall strategic coherence will be managed by the Defence AI and Autonomy Unit (DAU) and the DAIC, which will set policy frameworks and act as the focal point for AI research and development (R&D).
It added that the MoD will also create a head of AI profession role that sits within the DAIC and has responsibilities for developing a skills framework, as well as recruitment and retention offers.
The DAIC will also lead on delivering an engagement and interchange function to “encourage seamless interchange between MoD, academia and the tech sector.”
It added that, through secondments and placements, the MoD will bring in “talented AI leaders from the private sector with a remit to conduct high-risk innovation and drive cultural change; create opportunities for external experts to support policy-making; and develop schemes for Ministry of Defence leaders to gain tech sector experience”.
UK defence secretary Ben Wallace, writing in the foreword of the strategy, claimed that AI technologies were essential to defence modernisation, and further outlined various concepts the MoD will be exploring through its R&D efforts and engagement with industry.
“Imagine a soldier on the front line, trained in highly developed synthetic environments, guided by portable command and control devices analysing and recommending different courses of action, fed by databases capturing and processing the latest information from hundreds of small drones capturing thousands of hours of footage,” he said.
“Imagine autonomous resupply systems and combat vehicles, delivering supplies and effects more efficiently without putting our people in danger. Imagine the latest directed energy weapons using lightning-fast target detection algorithms to protect our ships, and the digital backbone which supports all this using AI to identify and defend against cyber threats.”
Wallace added that he also recognised the “profound issues” raised by a military organisation’s use of AI: “We take these very seriously – but think for a moment about the number of AI-enabled devices you have at home and ask yourself whether we shouldn’t make use of the same technology to defend ourselves and our values.
“We must be ambitious in our pursuit of strategic and operational advantage through AI, while upholding the standards, values and norms of the society we serve, and demonstrating trustworthiness.”
Lethal autonomous weapons systems
Regarding the use of Lethal Autonomous Weapons Systems (LAWS), the strategy claimed the UK was “deeply committed to multilateralism” and will therefore continue to engage with the UN Convention on Certain Conventional Weapons (CCW).
“The CCW’s discussions will remain central to our efforts to shape international norms and standards, as will our support to wider government in forums such as the Global Partnership for Artificial Intelligence and the Council of Europe,” it said.
“Our immediate challenge, working closely with allies and partners, is to ensure ethical issues, related questions of trust, and the associated apparatus of policies, process and doctrine do not impede our legitimate, responsible and ethical development of AI, as well as our efforts at collaboration and interoperability.”
This was the only explicit mention of LAWS in the entire 72-page strategy document.
During a Lords debate in November 2021, MoD minister Annabel Goldie refused to rule out the use of LAWS, but said the UK would not deploy such systems without human oversight.
Asked about the government’s stance on CCW discussions at the time, Goldie added there was no consensus on regulation of LAWS: “The UK and our partners are unconvinced by the calls for a further binding instrument. International humanitarian law provides a robust principle-based framework for the regulation of weapons deployment and use.”
Responding, the Liberal Democrats’ digital spokesperson Timothy Clement-Jones said this stance put the UK “at odds with nearly 70 countries and thousands of scientists in its unwillingness to rule out lethal autonomous weapons”.
The Campaign to Stop Killer Robots, a global civil society coalition of more than 180 organisations, has been calling for legally binding instruments to prohibit or restrict LAWS since its launch in 2013, and argues that the use of force should remain fully under human control.
“Killer robots change the relationship between people and technology by handing over life and death decision-making to machines. They challenge human control over the use of force, and where they target people, they dehumanise us – reducing us to data points,” it said on its website.
“But technologies are designed and created by people. We have a responsibility to establish boundaries between what is acceptable and what is unacceptable. We have the capacity to do this, to protect our humanity and ensure that the society we live in, that we continue to build, is one in which human life is valued – not quantified.”
Nato AI strategy
In October 2021, the North Atlantic Treaty Organisation (Nato) published its own AI strategy, which outlined how the military alliance, of which the UK is a founding member, will approach the development and deployment of AI technologies.
Speaking during the AI Summit on 16 June 2022 about the organisation’s data-driven transformation, Nato’s head of data and AI policy Nikos Loutas said the four main objectives of the strategy were: to promote the responsible use of AI; to accelerate and mainstream its use; to protect and monitor the use of AI, as well as Nato’s ability to innovate; and to identify and safeguard against the use of malicious AI by both state and non-state actors.
“What we also see is that artificial intelligence and data are also going to provide the baseline for a number of other emerging technologies within the alliance, including autonomy, quantum computing, biotech, you name it – so there’s also an element of building the foundations that others are going to work on,” said Loutas.
He added that Nato has already identified a range of use cases at different levels of maturity, and is actively working with “industry, allies and partner nations” to develop those further.
“Some are purely experimentation, some are about capability development, everything is there, but what is important is that all those use cases address the specific needs and specific operational priorities of the alliance,” he said.