22 Jan 2024
by Tess Buckley

First Progress Report Towards Ambitions of the AI Safety Institute

The first progress report highlights advances in foundational AI safety research, including the establishment of a specialised research team and an expert advisory board, alongside partnerships with leading organisations (ARC Evals, Advai, The Centre for AI Safety, Collective Intelligence Project, Faculty, Gryphon Scientific, RAND, Redwood Research, and Trail of Bits).

The AISI has set three priority areas for achieving its ambitions: evaluating advanced AI models, conducting foundational AI safety research, and facilitating information exchange. The key commitments from the first progress report are summarised below.

1) Foundational AI Safety research  

  • Presented a work plan and mission to build an AI research team that can evaluate risk at the frontier of AI through technical evaluations by a neutral third party

  • Announced the renaming of the Taskforce to the Frontier AI Taskforce

  • Established an expert advisory board spanning AI research and national security: Yoshua Bengio, Paul Christiano, Matt Collins, Anne Keast-Butler, Alex van Someren, Helen Stokes-Lampard and Matt Clifford  

  • Grew the Taskforce team to include leading experts such as Yarin Gal and David Krueger.

2) Facilitating Information exchange 

  • Formed initial partnerships with leading organisations: ARC Evals, Advai, The Centre for AI Safety, Collective Intelligence Project, Faculty, Gryphon Scientific, RAND, Redwood Research, and Trail of Bits.

You can read more about the second and third progress reports and the ambitions of the institute.

If you would like to learn more, please email [email protected].

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

 

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where she helped develop the first dataset on Corporate Digital Responsibility (CDR) and, later, a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics, using philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.
