AI Safety

Innovation in AI, particularly Generative AI, is developing rapidly, with daily news stories highlighting the opportunities and risks. Ensuring AI safety is now a priority for government, civil society, and industry alike.

techUK's AI Safety programme focuses on supporting members to navigate the AI safety landscape as international agreements and initiatives are operationalised. A key programme priority is understanding the ambitions and progress of the UK's AI Safety Institute (AISI) and the wider AI safety discourse, particularly the role of standards. The programme facilitates industry engagement and collaboration with UK stakeholders, including the AISI, and ensures the voice of UK industry on AI safety is heard in this increasingly global conversation, such as at the G7 and future AI Safety Summits.

 

Resources for members

We will continue to update this page with further articles, use cases and other relevant content to help inform and build on the AI Safety, AI Inclusion and AI Innovation discourse. If you'd like to contribute, please get in touch with [email protected].

Members' use cases

These use cases are categorised by the themes of the AI Seoul Summit, namely AI safety, innovation and inclusivity.

 

Events 

Looking Ahead to the AI Seoul Summit

Ahead of the AI Seoul Summit, on 14 May 2024 techUK welcomed the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, to an industry event that brought together different voices to discuss their expectations and hopes for the upcoming summit.

With over 50 senior representatives from businesses across the techUK membership, the event served as a platform for direct engagement with the UK Government, setting the stage for the forthcoming global summit taking place in Seoul on 21-22 May.

You can review the programme for the AI Seoul Summit 2024, co-hosted by the Republic of Korea and the United Kingdom on 21-22 May, here.

Reflecting on Bletchley

The first global AI Safety Summit, held in the UK in 2023, facilitated consensus on approaching frontier AI technologies and established a new track in global AI discussions. The Summit took place on 1-2 November at Bletchley Park with five key objectives, including developing a shared understanding of the risks posed by frontier AI, agreeing areas for potential collaboration on AI safety research, and showcasing how the safe development of AI can enable it to be used for good globally. It emphasised collaboration between governments and industry to understand and mitigate the risks of emerging AI while seizing its opportunities.

The techUK perspective

techUK CEO Julian David attended the AI Safety Summit and shared his perspectives and reflections on the event in a one-on-one conversation with our Head of Data Analytics, AI and Digital ID, Katherine Holden.

Discussions

A responsible approach to seizing the opportunities of AI was the thread that ran through the Summit, with world leaders focusing on how to enable humanity to realise the seismic opportunities of artificial intelligence by first seeking to understand and mitigate the potential risks of powerful emerging frontier AI technologies.

This approach was also clear in the Summit's outcomes, and Prime Minister Rishi Sunak will be pleased to have secured broad international agreement, with the US, China and the EU all sharing a stage together, as well as commitments from the leading developers of frontier AI technologies.
 

Members' thoughts and use case studies from Bletchley

 

Outcomes

So what were the key outcomes of the inaugural global AI Safety Summit?

  • The Bletchley Declaration: signed by 28 countries, including the USA, China and the European Union, the Bletchley Declaration recognises that, if the opportunities of AI are to be seized, there must be an international effort to research, understand and mitigate the risks posed by frontier AI technologies.

  • More AI Safety Summits: the Bletchley Declaration confirmed additional meetings in 2024, with South Korea hosting a mini virtual summit in six months and France hosting the next full in-person AI Safety Summit 12 months from now.

  • AI Safety Institute: the UK announced that it will put its Frontier AI Taskforce on a permanent footing in the form of a new AI Safety Institute, creating a UK-based but internationally facing resource with the purpose of evaluating frontier systems, advancing research on AI safety and sharing information across a global network of governments, private companies and civil society organisations. The Institute was announced alongside endorsements from the US, Singaporean, German, Canadian and Japanese governments, as well as from major frontier AI labs.

  • Senior government representatives from leading AI nations and major AI organisations agreed to a plan for safety testing of frontier AI models: the plan involves testing models both pre- and post-deployment, and a role for governments in testing, particularly for critical national security, safety and societal harms.

  • The UK unites with global partners to accelerate development in the world's poorest countries using AI: the UK and partners will fund safe and responsible AI projects for development around the world, beginning in Africa, through an £80 million collaboration.

  • Investment in the 'AI Research Resource' for the AI Safety Institute: the investment in the AI Research Resource has been tripled to £300 million, up from £100 million (announced in March 2023), in a bid to further boost UK AI capabilities. The investment will connect Isambard-AI (based at the University of Bristol) to a newly announced Cambridge supercomputer called 'Dawn'. Connecting these two supercomputers will give researchers access to resources with more than 30 times the capacity of the UK's current largest public AI computing tools.

Overall, both Rishi Sunak and Michelle Donelan, the Science and Technology Secretary who led the first day of the summit, will be happy with what the UK has achieved: securing consensus on a process for approaching frontier AI technologies and establishing a new track in global AI discussions.

However, as always, the proof will be in how these initial agreements and forums develop in the years ahead, and whether they can lead to tangible progress between countries that often have differing views.

 

Get our tech and innovation insights straight to your inbox

Sign up to get the latest updates and opportunities from our Technology and Innovation and AI programmes.

 


Katherine Holden

Associate Director, Data Analytics, AI and Digital ID, techUK

Katherine joined techUK in May 2018 and currently leads the Data Analytics, AI and Digital ID programme. 

Prior to techUK, Katherine worked as a Policy Advisor at the Government Digital Service (GDS) supporting the digital transformation of UK Government.

Whilst working at the Association of Medical Research Charities (AMRC), Katherine led AMRC's policy work on patient data, consent and opt-out.

Katherine has a BSc degree in Biology from the University of Nottingham.

Email:
[email protected]
Phone:
020 7331 2019


Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work centred on the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times and Finance Digest, covering topics such as CDR, AI ethics and tech governance and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University where she joint-majored in International Development and Philosophy, minoring in communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email:
[email protected]
Website:
tessbuckley.me
LinkedIn:
https://www.linkedin.com/in/tesssbuckley/


Sue Daley

Director, Technology and Innovation

Sue leads techUK's Technology and Innovation work.

This includes work programmes on cloud, data protection, data analytics, AI, digital ethics, Digital Identity and the Internet of Things, as well as emerging and transformative technologies and innovation policy. She has been recognised as one of the most influential people in UK tech by Computer Weekly's UKtech50 Longlist, and in 2021 was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame. A key influencer in driving forward the data agenda in the UK, Sue is co-chair of the UK government's National Data Strategy Forum. As well as being recognised in the UK's Big Data 100 and the Global Top 100 Data Visionaries for 2020, Sue has been shortlisted for the Milton Keynes Women Leaders Awards and was a judge for the Loebner Prize in AI. In addition to being a regular industry speaker on issues including AI ethics, data protection and cyber security, Sue was recently a judge for the UK Tech 50 and is a regular judge of the annual UK Cloud Awards.

Prior to joining techUK in January 2015, Sue was responsible for Symantec's Government Relations in the UK and Ireland. She has spoken at events including the UK-China Internet Forum in Beijing, the UN IGF and European RSA on issues ranging from data usage and privacy to cloud computing and online child safety. Before joining Symantec, Sue was a senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA degree in History and American Studies from Leeds University and a Master's degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.

Email:
[email protected]
Phone:
020 7331 2055
Twitter:
@ChannelSwimSue
