AI Safety Summit

The first Global AI Safety Summit took place on 1-2 November 2023 at Bletchley Park, with five key objectives, including developing a shared understanding of the risks posed by frontier AI, agreeing areas for potential collaboration on AI safety research, and showcasing how the safe development of AI can enable it to be used for good globally.

The techUK perspective

techUK CEO Julian David attended the AI Safety Summit. He has provided his perspectives and reflections on the event in a one-on-one conversation with our Head of Data Analytics, AI, and Digital ID, Katherine Holden:

Discussions

A responsible approach to seizing the opportunities of AI was the thread that ran through the Summit, with world leaders focusing on how to enable humanity to capture the seismic opportunities of artificial intelligence by first seeking to understand and mitigate the potential risks of powerful emerging frontier AI technologies.

This approach was also clear in the Summit's outcomes, and Prime Minister Rishi Sunak will be pleased to have secured broad international agreement, with the US, China and the EU all sharing a stage, as well as commitments from the leading developers of frontier AI technologies.
Outcomes

So what were the key outcomes of the inaugural Global AI Safety Summit?

  • The Bletchley Declaration: signed by 28 countries, including the USA, China and the European Union, the Bletchley Declaration recognises that, if the opportunities of AI are to be seized, there must be an international effort to research, understand and mitigate the risks posed by frontier AI technologies.

  • More AI Safety Summits: the Bletchley Declaration confirmed additional meetings in 2024, with South Korea to host a mini virtual summit in six months, while France will host the next full in-person AI Safety Summit twelve months from now.

  • AI Safety Institute: the UK announced that it will put its Frontier Models Taskforce on a permanent footing in the form of a new AI Safety Institute, creating a UK-based but internationally facing resource with the purpose of evaluating frontier systems, advancing research on AI safety, and sharing information between a global network of governments, private companies and civil society organisations. The Institute was announced alongside endorsements from the US, Singaporean, German, Canadian and Japanese governments, as well as from major frontier AI labs.

  • Senior government representatives from leading AI nations, and major AI organisations, agreed a plan for safety testing of frontier AI models: the plan involves testing models both pre- and post-deployment, with a role for governments in testing, particularly for critical national security, safety and societal harms.

  • The UK unites with global partners to accelerate development in the world's poorest countries using AI: the UK and partners will fund safe and responsible AI projects for development around the world, beginning in Africa, through an £80 million collaboration.

  • Investment in the ‘AI Research Resource’ for the AI Safety Institute: the investment in the AI Research Resource has been tripled to £300 million, up from the £100 million announced in March 2023, in a bid to further boost UK AI capabilities. The investment will connect Isambard-AI (based at Bristol University) to a newly announced Cambridge supercomputer called ‘Dawn’. Connecting these two supercomputers will give researchers access to resources with more than 30 times the capacity of the UK’s current largest public AI computing tools.

Overall, both Rishi Sunak and Michelle Donelan, the Science and Technology Secretary, who led the first day of the Summit, will be happy with what the UK has achieved: securing consensus on a process for approaching frontier AI technologies and establishing a new track in global AI discussions.

However, as always, the proof will be in how these initial agreements and forums develop in the years ahead, and whether they can lead to tangible progress between countries that often hold differing views.

Resources for members

Over the coming weeks we will continue to update this page with further events, articles and other relevant content to help inform and build on the conversations that took place at the Summit. If you’d like to contribute, please get in touch with [email protected]

1 - Members' thoughts and use case studies

 

2 - A guide to the AI Safety Summit

Katherine Holden


Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC) Katherine led AMRC’s policy work on patient data, consent and opt-out.    

Katherine has a BSc degree in Biology from the University of Nottingham.

Email:
[email protected]
Phone:
020 7331 2019
