What is the smart way to regulate Artificial Intelligence?
1 Nov 2023
Michael Rose (DRD Partnership) and Ashley Williams (Mishcon de Reya) take a look at what the impact of the upcoming AI Safety Summit might be on UK regulation.
You would be forgiven for not knowing who to believe about the future of AI. We are treated to almost daily predictions veering between doom-laden dystopia and problem-solving nirvana. How, then, do we regulate a technology whose implications we do not yet fully understand? The Prime Minister, alongside world leaders, tech-industry moguls and expert academics, will be hoping an answer begins to emerge at this week’s AI Safety Summit at Bletchley Park.
In September, hundreds of figures from the AI world declared that ‘mitigating the risk of extinction from AI should be a global priority’, suggesting the need for a planet-wide regulator. The Prime Minister has put considerable political weight behind this week’s summit. He will be hoping the gathering starts to mitigate AI risks through, in the Government’s own words, ‘internationally coordinated action’.
The problem, as some see it, lies in regulation currently being pursued at the national level. Put simply, there is a risk of divergent rules for a technology that does not respect borders. Take the EU and the UK: two markets that are deeply intertwined, particularly in tech, yet the former is adopting a ‘strong regulation’ approach while the latter is more ‘wait and see’.
The UK is fortunate to benefit from an impressive level of homegrown talent and world-class AI companies. The reality, however, is that very few of these companies will have a purely UK domestic focus. Where approaches to regulation clearly diverge on a global scale, it is useful to ask where the high-water mark will sit; for AI regulation, that is likely to be the EU’s pending AI Act. As with the GDPR, the EU’s AI Act will have ‘extra-territorial effect’, meaning that even companies based outside the EU will have to comply with the Act if they put their AI solutions into service in the EU. A lighter-touch approach by the UK (or other countries) could become a moot point if companies must in any case comply with the high-water mark set by Brussels.
The EU is within ‘touching distance’ of passing some of the most substantive AI laws in the world. By contrast, the UK has charged existing regulators with setting AI guidance covering their respective sectors. The UK, as the smaller market, risks losing out, with cross-border divergence confusing businesses about which rules they need to follow.
There are benefits to the UK’s approach. Avoiding the EU’s rules-based approach to legislation in favour of a principles-based approach to regulation creates more flexibility to handle advances in the technology as our knowledge and understanding develop. But this comes at the cost of certainty: both providers and users need to understand the rules of the game in order to make confident decisions about the development and use of AI. Charging each existing regulator with setting AI guidance for its own sector compounds the problem, as it raises the risk of conflicting guidance for those operating across multiple sectors. A well-resourced central support function, aligning guidance across regulators and acting as a single point of contact for companies, would go a long way towards avoiding these risks.
Policymakers in London and Brussels cannot yet know which regulatory approach will prove most effective, and this divergence makes it difficult for companies to know who to talk to about AI rules. It is therefore crucial for companies to develop targeted, tailored maps of who to engage with on AI regulation. With a topic moving this fast, those maps will be subject to constant change as the chorus of voices grows, adapts and develops.
The UK has a real opportunity to influence the approach to regulating AI on a global scale. The AI Safety Summit, and its elite, international guestlist, demonstrates the role the UK believes it can play in aligning global leaders on AI regulation.
The big question this week is whether the world can agree on the rules of AI. An affirmative answer would certainly help smooth concerns about regulatory divergence. Don’t expect a codified charter of global AI regulation as delegates file out of Bletchley Park, but they can make a start. This is the UK’s chance to become a ‘key player’ in global AI regulation, showcasing the country’s convening power and leading the way on global regulatory alignment.