An eternal wait for business: can the UN keep pace with machine learning?
24 Sep 2024
Claire Davidson and Michael Rose examine the UN’s efforts to create a global regulatory framework for artificial intelligence, and ask whether it can ever keep up with the rapid pace of change.
At this month’s UN General Assembly (UNGA) high-level week, artificial intelligence (AI) has been high on the agenda. Preceding it was a new UN report calling for greater global oversight of AI, a goal long discussed but with limited progress. To keep momentum going, yesterday the UNGA unanimously adopted the ‘Pact for the Future’ – a lengthy text with two annexed documents: the Global Digital Compact, which deals with regulating AI, and the Declaration on Future Generations, which pushes national and international decision-making to focus on the wellbeing of future generations.
With potentially profound and permanent ramifications for all aspects of day-to-day life, the UN report’s conclusion that “to place its governance in the hands of a few developers, or the countries that host them, will create a deeply unfair situation” is unsurprising. The language the UN uses, and the problem it seeks to address, echo those of climate change. Indeed, the Pact for the Future reasserts the Paris Agreement commitments on climate change.
Particular focus falls on the anticipated inequality between wealthy countries, which generally lead the way in creating and deploying AI technology, and less well-off countries excluded from its benefits. For example, only two weeks ago the US, UK and EU signed up to an agreed set of cross-border AI standards. Rich countries exclusively writing the rules is precisely what the UN wants to avoid.
Both documents are alive to the risk of history repeating itself. As with climate change, what incentive do less well-off nations have to sign up to potentially restrictive or exclusionary rules? In short, a global AI treaty akin to the Kyoto Protocol or the Paris Agreement is a long way off.
So, if you’re a business, especially a global one, looking to understand the rules of the game, where do you turn? National governments are moving at notably varying speeds when it comes to establishing regulation. The EU has dashed to the front of the pack with an AI Act that classifies, codifies and establishes limits on the use of AI technology beyond anything other jurisdictions have enacted. In a world first, it seeks to define ‘risk’ in AI systems.
In America, the cradle of AI tech, the rules vary depending on whether you’re dealing with the Securities and Exchange Commission, the Federal Trade Commission or any of the myriad other federal regulators and agencies. They also depend on where you are located and operating: states such as Colorado, California and Washington have introduced AI-specific regulation, while most others have not. The UK has a similarly patchwork approach, with the new Labour government omitting a pledge for AI-specific legislation from July’s King’s Speech.
Reputation, rather than governments, may therefore be a company’s main driver of AI rules. According to a Public First poll, 49% of Brits listed unemployment as their biggest fear regarding the rise of AI, whilst more than a quarter described themselves as ‘worried’ about it. Clearly, much of the public fears the influence AI companies may have over their lives. It is therefore the responsibility of companies themselves to demonstrate and communicate the guardrails they are putting in place to ensure responsible use and user protections.
It is more effective to put a strategy for communicating these guardrails in place ahead of deployment than to scramble to respond to a real or perceived shortcoming. Signalling that content is AI-generated or AI-influenced is an obvious example of responsible use. If users all over the world are to have confidence in the benefits of AI, it is the responsibility of companies to make the considerable effort to build that confidence. Governments are not rushing to do it for them.
The UN Secretary-General stated this week that “the adoption of the Pact for the Future, the Global Digital Compact and the Declaration on Future Generations opens pathways to new possibilities and opportunities”. Yet history suggests that waiting for the United Nations or national governments to reach a global accord risks opening a major gulf between AI technology’s capability and society’s confidence in those deploying it for commercial or other benefit. Handled incorrectly, this gulf risks becoming entrenched.