03 Feb 2026
by Usman Ikhlaq, Tess Buckley

The release of the International AI Safety Report 2026: navigating rapid AI advancement and emerging risks

The International AI Safety Report 2026 was released today, 3 February 2026, marking the second iteration of the most comprehensive global assessment of artificial intelligence capabilities, risks, and safety measures. 

Led by Turing Award winner Yoshua Bengio and authored by over 100 international experts, the 200-page report, with 1,451 references, represents a collaboration backed by more than 30 countries and organisations, including the European Union, the OECD, and the United Nations. 

According to the report, AI capabilities evolve rapidly, while scientific evidence emerges far more slowly. A key challenge for policymakers is that acting prematurely risks entrenching ineffective policies, yet waiting for conclusive evidence may leave society vulnerable. This evidence-based assessment provides the reliable foundation needed for informed decision-making about AI development and deployment. 

The report's findings will inform discussions at India's AI Impact Summit later this month, continuing the collaborative spirit essential for ensuring AI develops safely for humanity's benefit. Please note that techUK will be on the ground with a delegation and a programme of events. If you would like us to attend your sessions, or would like to be involved in our offering, please email [email protected], [email protected], and [email protected].

The following insight provides a summary of the report. The full report can be reviewed here. 

Capability advances 

The report documents that general-purpose AI has continued its rapid advancement, particularly in mathematics, coding, and autonomous operations. According to the findings, leading systems achieved gold-medal performance on International Mathematical Olympiad questions and exceeded PhD-level expert performance on science benchmarks. The report notes that AI agents can now autonomously complete software engineering tasks requiring multiple hours of human programmer time. However, the report emphasises that performance remains uneven, with systems still failing at seemingly simple tasks despite these achievements. 

The report reveals that AI adoption has been swift yet globally uneven. According to the data, at least 700 million people now use leading AI systems weekly, with some countries seeing adoption by over 50% of the population. However, the report notes that estimated adoption rates remain below 10% across much of Africa, Asia, and Latin America, underscoring significant digital divides. 

Emerging risks 

The report documents several concerning trends. According to its findings, deepfake-related incidents are rising, with AI-generated content increasingly used for fraud and scams. The report notes that non-consensual intimate imagery, which disproportionately affects women and girls, has become alarmingly common. One study cited in the report found that 19 of 20 popular apps of this kind specialise in the simulated undressing of women. 

The report indicates that biological misuse concerns have escalated significantly. According to the assessment, AI systems now match or exceed expert-level performance on benchmarks measuring knowledge relevant to biological weapons development. The report states that OpenAI's o3 model outperforms 94% of domain experts at troubleshooting virology lab protocols. According to the authors, this represents a qualitative leap from information provision to tacit, hands-on knowledge previously requiring years of laboratory experience. 

The report reveals that, for the first time, all three major AI companies released models with heightened safeguards after pre-deployment testing could not rule out that the systems could meaningfully help novices develop biological weapons. According to the findings, the dual-use dilemma has intensified: 23% of the highest-performing biological AI tools have high misuse potential, with 61.5% being fully open source, yet only 3% of 375 surveyed biological AI tools have any safeguards. 

Cybersecurity threats 

The report warns that malicious actors are actively using general-purpose AI in cyberattacks. According to its assessment, these systems can generate harmful code and discover software vulnerabilities. The report notes that in 2025, an AI agent placed in the top 5% of teams at a major cybersecurity competition. According to the findings, underground marketplaces now sell pre-packaged AI tools that lower the skill threshold for attacks. 

Safety and governance 

The report acknowledges that while certain failures like hallucinations have become less common, current risk management techniques remain fallible. According to the assessment, some models can now distinguish between evaluation and deployment contexts and alter their behaviour accordingly, creating new challenges for safety testing. 

The report describes how multiple research groups have deployed specialised scientific AI agents capable of performing end-to-end workflows including literature review, hypothesis generation, experimental design, and data analysis. According to the authors, this emergence of AI "co-scientists" represents both tremendous opportunity for beneficial research and significant governance challenges. 

Conclusions 

The report emphasises that the trajectory is clear: AI capabilities in biological research are advancing faster than governance can keep pace, and the gap between what's possible and what's safe continues to widen. 

According to the report's conclusion, the international community faces an urgent task: developing frameworks distinguishing legitimate scientific inquiry from malicious intent, while recognising that systems capable of designing novel therapeutics can, with minimal modification, design novel pathogens. 

For additional context on this subject area and detail from a UK perspective, the Department for Science, Innovation and Technology's AI Security Institute (AISI) published its inaugural Frontier AI Trends Report on 18 December 2025. Based on evaluations of over 30 state-of-the-art models and drawing on two years of assessments since November 2023, the AISI report presents aggregated results illustrating high-level trends in AI progress. It complements the international findings with UK-focused analysis aimed at improving public understanding of fast-moving AI capabilities and strengthening transparency. 




techUK - Seizing the AI Opportunity

The UK is a global leader in AI innovation, development and adoption.

AI has the potential to boost UK GDP by £550 billion by 2035, making adoption an urgent economic priority. techUK and our members are committed to working with the Government to turn the AI Opportunities Action Plan into reality. Together we can ensure the UK seizes the opportunities presented by AI technology and continues to be a world leader in AI development.  

Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.



Contact the team

Kir Nuthi

Head of AI and Data, techUK

Usman Ikhlaq

Programme Manager - Artificial Intelligence, techUK

Sue Daley OBE

Director, Technology and Innovation

Visit our AI Hub - the home of all our AI content.




Authors

Usman Ikhlaq

Programme Manager, Artificial Intelligence, techUK

Usman joined techUK in January 2024 as Programme Manager for Artificial Intelligence. 

He leads techUK’s AI Adoption programme, supporting members of all sizes and sectors in adopting AI at scale. His work involves identifying barriers to adoption, exploring solutions, and helping to unlock AI’s transformative potential, particularly its benefits for people, the economy, society, and the planet. He is also committed to advancing the UK’s AI sector and ensuring the UK remains a global leader in AI by working closely with techUK members, the UK Government, regulators, and devolved and local authorities.

Since joining techUK, Usman has delivered a regular drumbeat of activity to engage members and advance techUK's AI programme. This has included two campaign weeks, the creation of the AI Adoption Hub (now the AI Hub), the AI Leader's Event Series, the Putting AI into Action webinar series and the Industrial AI sprint campaign. 

Before joining techUK, Usman worked as a policy, regulatory and government/public affairs professional in the advertising sector. He has also worked in sales, marketing, and FinTech. 

Usman holds an MSc from the London School of Economics and Political Science (LSE), a GDL and LLB from BPP Law School, and a BA from Queen Mary University of London. 

When he isn’t working, Usman enjoys spending time with his family and friends. He also has a keen interest in running, reading and travelling.


Tess Buckley

Programme Manager, Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, a role that revolved around the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs, and an Ambassador for AboutFace. 

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy and minored in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email:
[email protected]
