A new policy paper from the Department for Digital, Culture, Media and Sport (DCMS) has highlighted the lack of clarity, overlaps, inconsistency and gaps in UK regulation of artificial intelligence (AI) technologies.
To address these shortcomings, the authors of the Establishing a pro-innovation approach to regulating AI: policy statement recommend that the UK develop a clear framework for regulating AI. Notably, the UK’s approach will differ from what the European Union (EU) has proposed.
“The EU has grounded its approach in the product safety regulation of the Single Market and, as such, has set out a relatively fixed definition in its legislative proposals,” wrote the authors of the paper. “While such an approach can support efforts to harmonise rules across multiple countries, we do not believe this approach is right for the UK. We do not think that it captures the full application of AI and its regulatory implications. Our concern is that this lack of granularity could hinder innovation.”
Digital minister Damian Collins said: “We want to make sure the UK has the right rules to empower businesses and protect people, as AI and the use of data keeps changing the ways we live and work. It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”
Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the UK government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings. According to the DCMS, this better reflects the growing use of AI across sectors, and creates proportionate, adaptable regulation to support the rapid adoption of AI and help boost UK productivity and growth.
The proposals presented in the policy paper are described as “a pro-innovation framework for regulating AI”, which, according to the DCMS, aims to address issues where there is clear evidence of real risk or missed opportunities. “We will ask that regulators focus on high-risk concerns rather than hypothetical or low risks associated with AI,” wrote the report’s authors. “We want to encourage innovation and avoid placing unnecessary barriers in its way.”
The DCMS said the policy paper proposes establishing a clear framework that sets out how the government will respond to the opportunities of AI, as well as to new and accelerated risks. The preferred approach is based on defining a set of core characteristics of AI to inform the scope of the AI regulatory framework. This definition can then be adapted by regulators according to their specific domains or sectors.
Wendy Hall, acting chair of the AI Council, said: “We welcome these important early steps to establish a clear and coherent approach to regulating AI. This is critical to driving responsible innovation and supporting our AI ecosystem to thrive. The AI Council looks forward to working with government on the next steps to develop the white paper.”
The 10-week call for evidence will run until 26 September. The DCMS has asked organisations and individuals working across AI to provide feedback on the proposals.