22 May 2024
by Tess Buckley

Key Outcomes of the AI Seoul Summit 

On 21-22 May 2024, six months after the historic Bletchley Summit hosted by the UK, the international community convened virtually and in South Korea for the AI Seoul Summit to build on the momentum and further global cooperation on AI Safety, Innovation and Inclusion. The two-day summit brought together leaders from governments, industry, civil society, and academia to discuss the responsible development and deployment of frontier AI. 

The AI Seoul Summit reaffirmed the international community's commitment to shaping the trajectory of AI development through global cooperation and shared guidelines, setting the stage for continued dialogue and concerted action in the months ahead on the road to the France Summit. This insight outlines the key outcomes of the AI Seoul Summit. 

 

Pre-Seoul Summit: Reports and events to inform Seoul Summit dialogue  

  • Global commitment to AI Safety reaffirmed: On 14 May 2024, techUK welcomed the Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, to an industry event focused on bringing together different voices to discuss their expectations and hopes for the upcoming AI Seoul Summit. With over 50 senior representatives from businesses across the techUK membership, the event served as a platform for direct engagement between industry and the UK Government, setting the stage for Seoul.

  • Scientific Report on Safety of Advanced AI published: The interim International Scientific Report on the Safety of Advanced AI, published on 17 May, is the product of a collaborative effort following the Bletchley Park AI Safety Summit in November 2023. The first international scientific report on advanced AI safety, it presents current and anticipated AI capabilities, the kinds of risks we should expect, and approaches to evaluating and mitigating those risks to better inform public policy. 

  • UK AI Safety Institute published fourth progress report: The progress report, published on 20 May, highlights several significant developments and initiatives: onboarding over 30 technical researchers, appointing Jade Leung as CTO, launching Inspect, an open-sourced AI safety evaluation platform, publishing its first technical blog, supporting the interim International Scientific Report on the Safety of Advanced AI, opening a new office in San Francisco, and partnering with the Canadian AI Safety Institute.

 

Outcomes from Day 1: 21 May 

 

Outcomes from Day 2: 22 May 

  • 27 nations signed up to develop proposals for assessing AI risks over the coming months: In the 'Seoul Ministerial Statement', these countries agreed to develop shared risk thresholds for frontier AI development and deployment, including agreement on when model capabilities could pose 'severe risks' without appropriate mitigations, and to further identify such severe risks, for example helping malicious actors acquire or use chemical or biological weapons, or AI evading human oversight. By aligning their efforts, these nations aim to foster safer and more responsible development and deployment of AI capabilities globally.

  • The UK AI Safety Institute (AISI), partnering with the Alan Turing Institute, UKRI, and other institutes, announced £8.5 million in research funding for 'systemic AI safety': Moving beyond the risks of individual AI models, this funding will focus on understanding and mitigating the systemic risks that AI poses when integrated into larger systems and infrastructures. The AISI will invite grant proposals that directly address systemic AI safety problems or improve understanding in this area, prioritising applications offering actionable approaches to significant systemic AI risks. This initiative aims to broaden AI safety efforts to encompass the complex systems and infrastructures in which AI operates, recognising the potential for wide-ranging societal impacts. 

 

Post-Seoul Summit 

  • The AI Fringe will host a discussion on the AI Seoul Summit, covering topics such as AI safety, innovation, and inclusion. The panel discussions will feature prominent members of the UK AI ecosystem as well as representatives from the organisers of the next official AI Summit in France. These experts will share insights on the outcomes of the Seoul Summit and the responsible development of AI technologies. You can register to join this event here.

 

If you found this summary helpful and want to learn more about techUK's programming in AI: for AI Adoption, please contact [email protected]; for AI Policy, contact [email protected]; for Digital Ethics and AI Safety, contact [email protected].

 

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where she helped build the first dataset on Corporate Digital Responsibility (CDR) and later contributed to the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance, leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace. 

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Her primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media narratives. In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email:
[email protected]
Website:
tessbuckley.me
LinkedIn:
https://www.linkedin.com/in/tesssbuckley/


Dani Dhiman

Policy Manager, Artificial Intelligence and Digital Regulation, techUK

Dani is Policy Manager for Artificial Intelligence & Digital Regulation at techUK, and previously worked on files related to data and privacy. She formerly worked in Vodafone Group's Public Policy & Public Affairs team supporting the organisation’s response to the EU Recovery & Resilience facility, covering the allocation of funds and connectivity policy reforms. Dani has also previously worked as a researcher for Digital Catapult, looking at the AR/VR and creative industry.

Dani has a BA in Human, Social & Political Sciences from the University of Cambridge, focussing on Political Philosophy, the History of Political Thought and Gender studies.

Email:
[email protected]
LinkedIn:
https://www.linkedin.com/in/danidhiman


 
