18 Dec 2025
by Tess Buckley, Sue Daley

UK AI Security Institute releases inaugural Frontier AI Trends Report

On Thursday 18 December the Department for Science, Innovation and Technology’s (DSIT)’s AI Security Institute (AISI) announced the publication of its inaugural Frontier AI Trends Report. The report is based on a series of wide-ranging evaluations of over 30 state-of-the-art models primarily general-purpose Large Language Models, and where relevant open-source models. As the first public analysis of trends by AISI, this detailed report draws on two years' worth of evaluations of frontier AI systems since November 2023, presenting aggregated results to illustrate high-level trends in AI progress. It is published with the intention to improve public understanding about fast‑moving AI capabilities and strengthen transparency. 

Some of the high-level trends that the report observes in AI progress includes: 

  • Capabilities are advancing rapidly, with models progressing across cyber security, chemistry and biology assistance, and autonomous task‑completion. In several areas, systems are now matching or surpassing human expert performance. (Section 3) 
  • Safeguards have improved, but vulnerabilities remain, and the level of protection varies substantially across developers and misuse categories. (Section 4) 
  • Precursors to autonomous behaviour, including extended task horizons and self‑replication‑related skills, are increasing, though there is no evidence of harmful or spontaneous behaviour. (Section 5) 
  • Societal impacts are emerging, from how people use AI for political information and emotional support to early deployment of agents in high‑stakes sectors like finance. (Section 6)  
  • Open‑source models are catching up, with the capability gap narrowing to around 4–8 months behind frontier closed systems. (Section 7)  

The report is the first of many, with AISI planning to provide iterative versions, which will provide up-to-date and public visibility into frontier AI development. It notes the promise of capabilities surpassing expert baselines, whilst noting the novel risks of such progress in frontier AI.  According to the report, the key moving forward will be to anticipating long-term developments while also ensuring near-term adoption is ‘secure, reliable and aligned with human intent.’

To support this, the report suggests that we will need: 

  • Safeguards that are updated with capabilities  
  • Rigorous and independent evaluations to track emerging impacts  
  • Collaboration across government, industry and academy to develop solutions to open questions 

The following sections provide more information on the key trends observed in the report. 

Capabilities   

The report points to progress in general-purpose AI systems accelerating through the development of AI agents. The report notes that scaffolding techniques remain a key factor in pushing the frontier forward. These AI systems are increasingly completing more complex multi-step tasks on user's behalf. With this capability comes both opportunity (reducing administrative burdens) and risks (lower barriers for malicious actors).  

The report discusses implications of specific capabilities for domains which are key for the UK’s security and innovation, namely, chemistry & biology, and cyber. Progress in these domains include:  

Chemistry and Biology  

  • AI models are showing improvements in knowledge on chemistry and biology, well beyond PhD-level expertise (page 13)  
  • When equipped with tools like search or code execution, scaffolded AI agents are becoming increasingly useful for assisting with – or even automating – elements of biological design (page 14)  
  • Models can now consistently produce detailed and accurate protocols for a range of complex scientific tasks and assist users in troubleshooting these protocols (page 16) 
  • Models can combine vision capabilities with advanced knowledge and reasoning to provide troubleshooting advice beyond just text (page 18) 

Cyber 

  • AI models are improving at cyber tasks across all difficulty levels (page 20) 
  • Enhanced access to tools, via better model scaffolding, consistently improves performance on our cyber evaluations (page 21)  
  • Frontier AI models still struggle to complete realistic, step-by-step cyber challenges that require success at multiple stages (page 22)  

Safeguards 

The report notes that as the capabilities discussed above advance, malicious actors may misuse the AI systems. To mitigate these risks, industry is employing technical interventions (safeguards) to prevent users from eliciting harmful information or action from systems. The AISI actively works with frontier developers to identify and fix vulnerabilities. As discussed in the report, the AISI has: 

  • Found universal jailbreaks for every system they have tested (page 24) 
  • Seen significant progress in the safeguards of certain AI systems, particularly in the biological misuse domain (page 24)  
  • Observed that safeguards improvements have been uneven with certain AI systems and malicious request categories much better defended than others (page 25)  
  • Noticed more capable models do not necessarily have better safeguards (page 27) 
  • Found safeguards won’t prevent all AI misuse, but they may help maintain a crucial gap between some beneficial and malicious uses (page 28) 

Loss of Control 

In Section 3, the report discusses how AI systems also have the potential to pose risks that emerge from the models themselves when they behave in unintended ways. This section focuses on two capabilities within existing models that could lead to loss of control over advanced AI systems:  

  1. Self – replication: Where models create copies of themselves without being explicitly prompted to do so 

  • In controlled environments, AI models are increasingly exhibiting some of the capabilities required to self-replicate across the internet, however they are currently unlikely to succeed in real world conditions (page 30) 
  1. Sandbagging: where strategic underperformance during evaluations can misrepresent a model’s true capabilities  

  • Some models can sandbag in controlled environments when prompted to do so (page 32)  
  • AISI has methods for detecting sandbagging but they may become less effective as models grow more capable (page 32)  
  • AISI has yet to detect any instances of models intentionally sandbagging during testing runs (page 33)  

As AI systems and their capabilities advance the AISI will continue investigating how well models can self-replicated and subvert monitoring techniques. 

Societal Impacts 

Section 6 of the report identifys societal impacts in three areas which are exacerbated by the increasing capability of AI systems These are: 

  1. Political information-seeking and persuasion 

  • The persuasiveness of AI models is increasing with scale (page 35)  
  • Targeted post-training can increase persuasive capabilities further (page 36)  
  • The same factors that make models more persuasive tend to also make them less accurate (page 37) 
  • In real-world settings, AI models may not increase belief in misinformation any more than self-directed internet search (page 38) 
  1. Emotional dependence 

  • A substantial minority of UK citizens have used AI models for emotional support or social interactions (page 39)   
  1. Critical infrastructure  

  • There is an increase in tooling that enables AI agents to perform high-stakes tasks in some critical sectors (page 41) 

Open-source models  

Open-source models are those which have parameters and course code which can be freely modified and distributed and are advancing rapidly. According to the report, in the past two years, the general capability gap between open and closed source models has narrowed. According to external data, the gap is currently between four and eight months (page 43).  

The report acknowledges that while open source is helpful for decentralising control, allowing developers to innovate and deploy systems for different purposes, this centralisation also creates security challenges with malicious actors being able to tamper with safeguards and modify base models with more ease.

Author

Tess Buckley

Tess Buckley

Senior Programme Manager in Digital Ethics and AI Safety, techUK

Sue Daley OBE

Sue Daley OBE

Director, Technology and Innovation

Technology and Innovation programme activities

techUK bring members, industry stakeholders, and UK Government together to champion emerging technologies as an integral part of the UK economy. We help to create an environment where innovation can flourish, helping our members to build relationships, showcase their technology, and grow their business. Visit the programme page here.

 

Upcoming events

Latest news and insights 

Learn more and get involved

 

Sign-up to get the latest updates and opportunities across Technology and Innovation.

 

Here are five reasons to join the Tech and Innovation programme

Download

Join techUK groups

techUK members can get involved in our work by joining our groups, and stay up to date with the latest meetings and opportunities in the programme.

Learn more

Become a techUK member

Our members develop strong networks, build meaningful partnerships and grow their businesses as we all work together to create a thriving environment where industry, government and stakeholders come together to realise the positive outcomes tech can deliver.

Learn more

Meet the team 

Sue Daley OBE

Sue Daley OBE

Director, Technology and Innovation

Laura Foster

Laura Foster

Associate Director - Technology and Innovation, techUK

Kir Nuthi

Kir Nuthi

Head of AI and Data, techUK

Rory Daniels

Rory Daniels

Head of Emerging Technology and Innovation, techUK

Tess Buckley

Tess Buckley

Senior Programme Manager in Digital Ethics and AI Safety, techUK

Usman Ikhlaq

Usman Ikhlaq

Programme Manager - Artificial Intelligence, techUK

Chris Hazell

Chris Hazell

Programme Manager - Cloud, Tech and Innovation, techUK

Elis Thomas

Elis Thomas

Programme Manager, Tech and Innovation, techUK

Ella Shuter

Ella Shuter

Junior Programme Manager, Emerging Technologies, techUK

Harriet Allen

Harriet Allen

Programme Assistant, Technology and Innovation, techUK

Sara Duodu  ​​​​

Sara Duodu ​​​​

Programme Manager ‑ Quantum and Digital Twins, techUK

 

 

Related topics

Authors

Tess Buckley

Tess Buckley

Programme Manager, Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK Tess worked as an AI Ethics Analyst, which revolved around the first dataset on Corporate Digital Responsibility (CDR), and then later the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the dataset on CDR to investors who wanted to further understand the digital risks of their portfolio, she drew narratives and patterns from the data, and collaborate with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest. Covered topics like CDR, AI ethics, and tech governance, leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace. 

Tess holds a MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University where she joint-majored in International Development and Philosophy, minoring in communications. Tess’s primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable, and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email:
[email protected]

Read lessmore

Sue Daley

Sue Daley

Director, Technology and Innovation, techUK