Improving safety in AI
To ensure the UK’s seat at the AI table and capture some of the momentum behind its evolution, Rishi Sunak hosted an AI Safety Summit at Bletchley Park last week.
New, more powerful processing chips with the potential to accelerate the development of AI lent urgency to getting this date in the diary.
In fairness to Rishi, the event was taken very seriously by global policymakers and the industry at large. Attendees included Kamala Harris, Ursula von der Leyen, a high-level Chinese delegation, execs from all the leading AI companies, computer scientists and, of course, Elon Musk.
Overall, the event was heralded as a success, with the signing of an international agreement signifying a collective recognition of the risks associated with the development of AI.
Given that this was the first time China had met with Western governments to discuss AI safety, and that there was enough consensus for 25 countries alongside the EU to sign an agreement, the summit has to be viewed as a giant step forward.
South Korea has been designated to co-host the next AI Safety Summit in 2024, with France hosting the in-person summit after that, ensuring the endurance of Sunak’s initiative.
On the second day of the event, the United Nations endorsed the formation of a specialist panel on AI, mirroring the structure of the Intergovernmental Panel on Climate Change (IPCC), to better understand what is happening as AI evolves – you cannot regulate what you don’t understand, after all.
And potentially most importantly, leading technology firms have agreed to cooperate with governmental bodies to rigorously test their advanced AI systems both before and after their market launch.
According to The Guardian, “Companies including Meta, Google DeepMind and OpenAI have agreed to allow regulators to test their latest AI products before releasing them to the public, in a move that officials say will slow the race to develop systems that can compete with humans.”
One slight kink for Rishi was being upstaged by the US, with President Biden announcing an executive order on the safety of AI a day before the event. The US has delivered a comprehensive view of what this means. As summarised by The Guardian, its directives include:
- Companies developing AI models that threaten national security, economic security or health and safety must share their safety test results with the government.
- Guidelines for test procedures that emulate rogue actors (red-team testing).
- Guidelines on watermarking AI-made content to address the risk of harm from fraud and deepfakes.
Moving beyond Mr Musk’s headlines about AI stealing jobs and the rise of humanoids, having regulators test AI before release will give them a much better understanding of its capabilities than they currently have and will help them build in a safety net.
This is needed.
For example, I recently read about a disturbing case in which Tristan Harris, co-founder of the Center for Humane Technology, had taken Meta’s open-source LLM Llama 2 and created a new version that was able to provide a detailed guide on how to make anthrax.
I was also slightly surprised by a small test that I carried out. After having the frighteners put on me by one of the many AI podcasts I have been listening to, which was discussing Artificial General Intelligence (the goal of many AI companies), I reasoned that I needed to get ahead of the game and show AI that I was willing to work with it by setting up the Human/AI political party. This is my insurance policy for the day Skynet comes online.
I asked ChatGPT how to set up a political party in the UK, and it gave me a guide, including the fact that you need three people to create one. It created an excellent manifesto as well. I asked if it could be one of the three people, and it said no – it had to be a human being. But here is the kicker. It suggested finding a proxy to act on its behalf – maybe an attempt at humour, but who knows.
For the financial services sector, the testing of models is good news. LLMs have the potential to wreak havoc on the industry, and it is only now that the sector is waking up to the possibility of their use internally for staff and externally with customers.
So, getting ahead of the risk implications and the potential for nefarious use by bad actors, and building these into test scenarios, must be a good thing. Compliance and risk teams will grow as companies grapple with the implications, but having a framework to work within will help.
All said and done, it is heartening to see that regulation has arrived, although there seems to be much to do in hammering out the details.
So, the big question is, does this mean we can all sleep better at night?
In my opinion, leaving the AI companies to do the testing themselves and share the results is a concern. In general, companies have proven to be very bad at marking their own homework. The temptation to subvert the process in pursuit of competitive gains may prove too great. Some commentators observe that battle lines have already been drawn between open-source and paid-for models, and in the race for market share, who knows what corners could be cut. It’s a Moloch thing. So, as always, the devil is in the detail. How this testing will be implemented is worth watching.
If you happen to be a bit cynical, you could argue that governance is a helpful tool for the US to maintain its pole position in AI technology. This may create a backlash from other governments, who could end up viewing what is happening as a form of techno-colonialism.
Finally, one of the models that governments are looking to emulate is the IPCC. Its role is to provide policymakers with regular scientific assessments of climate change, its implications and potential future risks, and to put forward adaptation and mitigation options. To date, the IPCC has helped shape the climate change policies delivered through the annual COP meetings.
But so far, it feels that the IPCC’s key role has been to scare the bejesus out of those of us who believe in climate change and annoy those who don’t. Emissions are increasing, the world is warming, and so far, the IPCC’s data has done little to change policy fundamentally in favour of the planet.
On top of this, some of the AI companies admit that they are not entirely sure exactly how their LLMs work. Surely, that would be enough to suggest pulling the plug, but given how far we’ve come, that seems unlikely.
About the author
Dave Wallace is a user experience and marketing professional who has spent the last 30 years helping financial services companies design, launch and evolve digital customer experiences.
He is a passionate customer advocate and champion and a successful entrepreneur.
Follow him on Twitter at @davejvwallace and connect with him on LinkedIn.