The release of the International AI Safety Report 2026: navigating rapid AI advancement and emerging risks
The International AI Safety Report 2026 was released today, 3 February 2026, marking the second iteration of the most comprehensive global assessment of artificial intelligence capabilities, risks, and safety measures.
The report was led by Turing Award winner Yoshua Bengio and authored by over 100 international experts. Running to 200 pages with 1,451 references, it represents a collaboration backed by more than 30 countries and organisations, including the European Union, OECD, and the United Nations.
According to the report, AI capabilities evolve rapidly, while scientific evidence emerges far more slowly. This poses a key challenge for policymakers: acting prematurely risks entrenching ineffective policies, yet waiting for conclusive evidence may leave society vulnerable. This evidence-based assessment provides the reliable foundation needed for informed decision-making about AI development and deployment.
The report's findings will inform discussions at India's AI Impact Summit later this month, continuing the collaborative spirit essential for ensuring AI develops safely for humanity's benefit. Please note that techUK will be on the ground with a delegation and a programme of events. If you would like us in attendance at your sessions or to be involved in our offering, please email [email protected], [email protected] and [email protected].
The following insight provides a summary of the report. The full report can be reviewed here.
Capability advances
The report documents that general-purpose AI has continued its rapid advancement, particularly in mathematics, coding, and autonomous operations. According to the findings, leading systems achieved gold-medal performance on International Mathematical Olympiad questions and exceeded PhD-level expert performance on science benchmarks. The report notes that AI agents can now autonomously complete software engineering tasks requiring multiple hours of human programmer time. However, the report emphasises that performance remains uneven, with systems still failing at seemingly simple tasks despite these achievements.
The report reveals that AI adoption has been swift yet globally uneven. According to the data, at least 700 million people now use leading AI systems weekly, with some countries seeing adoption by over 50% of the population. However, the report notes that estimated adoption rates remain below 10% across much of Africa, Asia, and Latin America, underscoring significant digital divides.
Emerging risks
The report documents several concerning trends. According to its findings, deepfake-related incidents are rising, with AI-generated content increasingly used for fraud and scams. The report notes that non-consensual intimate imagery, disproportionately affecting women and girls, has become alarmingly common. One study cited in the report found that 19 of 20 popular apps specialise in simulated undressing of women.
The report indicates that biological misuse concerns have escalated significantly. According to the assessment, AI systems now match or exceed expert-level performance on benchmarks measuring knowledge relevant to biological weapons development. The report states that OpenAI's o3 model outperforms 94% of domain experts at troubleshooting virology lab protocols. According to the authors, this represents a qualitative leap from information provision to tacit, hands-on knowledge previously requiring years of laboratory experience.
The report reveals that for the first time, all three major AI companies released models with heightened safeguards after pre-deployment testing couldn't rule out that systems could meaningfully help novices develop biological weapons. According to the findings, the dual-use dilemma has intensified: 23% of highest-performing biological AI tools have high misuse potential, with 61.5% being fully open source, yet only 3% of 375 surveyed biological AI tools have any safeguards.
Cybersecurity threats
The report warns that malicious actors actively use general-purpose AI in cyberattacks. According to its assessment, systems can generate harmful code and discover software vulnerabilities. The report notes that in 2025, an AI agent placed in the top 5% of teams at a major cybersecurity competition. According to the findings, underground marketplaces now sell pre-packaged AI tools lowering the skill threshold for attacks.
Safety and governance
The report acknowledges that while certain failures, such as hallucinations, have become less common, current risk management techniques remain fallible. According to the assessment, some models can now distinguish between evaluation and deployment contexts and alter their behaviour accordingly, creating new challenges for safety testing.
The report describes how multiple research groups have deployed specialised scientific AI agents capable of performing end-to-end workflows including literature review, hypothesis generation, experimental design, and data analysis. According to the authors, this emergence of AI "co-scientists" represents both tremendous opportunity for beneficial research and significant governance challenges.
Conclusions
The report emphasises that the trajectory is clear: AI capabilities in biological research are advancing faster than governance can adapt, and the gap between what is possible and what is safe continues to widen.
According to the report's conclusion, the international community faces an urgent task: developing frameworks distinguishing legitimate scientific inquiry from malicious intent, while recognising that systems capable of designing novel therapeutics can, with minimal modification, design novel pathogens.
For additional context and detail from a UK perspective, the Department for Science, Innovation and Technology's AI Security Institute (AISI) published its inaugural Frontier AI Trends Report on 18 December 2025. Based on evaluations of over 30 state-of-the-art models and drawing on two years of assessments since November 2023, the AISI report presents aggregated results illustrating high-level trends in AI progress. It complements the international findings with UK-focused analysis aimed at improving public understanding of fast-moving AI capabilities and strengthening transparency.
AI Leader’s Series: Neuro AI
Our AI Leader's Series continues in 2026 with a session on Neuro AI on 5 March. This event will explore how insights from neuroscience can inspire the next generation of AI systems, focusing on adaptive, energy-efficient neuro-inspired architectures that mirror the brain's remarkable computational capabilities.
Then join us for the following instalment on 28 April, focusing on Bio Intelligence. This session will examine how biological systems can inspire the next generation of AI, looking at bio-intelligent systems that integrate biological and digital components to create hybrid architectures with unprecedented capabilities.
The UK is a global leader in AI innovation, development and adoption.
AI has the potential to boost UK GDP by £550 billion by 2035, making adoption an urgent economic priority. techUK and our members are committed to working with the Government to turn the AI Opportunities Action Plan into reality. Together we can ensure the UK seizes the opportunities presented by AI technology and continues to be a world leader in AI development.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
The FCA has opened applications for the second cohort of its AI Live Testing service, which offers firms the opportunity to test AI systems in live market conditions with regulatory support. Applications are open until 2 March 2026.
Sign-up to our monthly newsletter to get the latest updates and opportunities from our AI and Data Analytics Programme straight to your inbox.
Contact the team
Kir Nuthi
Head of AI and Data, techUK
Kir Nuthi is the Head of AI and Data at techUK.
She holds over seven years of government affairs and tech policy experience in the US and UK. Kir previously headed up the regulatory portfolio at a UK advocacy group for tech startups and held various public affairs roles in US tech policy, all involving policy research and campaigns on competition, artificial intelligence, access to data, and pro-innovation regulation.
Kir has an MSc in International Public Policy from University College London and a BA in both Political Science (International Relations) and Economics from the University of California San Diego.
Outside of techUK, you are likely to find her attempting studies at art galleries or an elusive headstand at yoga, mending and binding books, or chasing her dog Maya around South London's many parks.
Usman joined techUK in January 2024 as Programme Manager for Artificial Intelligence.
He leads techUK’s AI Adoption programme, supporting members of all sizes and sectors in adopting AI at scale. His work involves identifying barriers to adoption, exploring solutions, and helping to unlock AI’s transformative potential, particularly its benefits for people, the economy, society, and the planet. He is also committed to advancing the UK’s AI sector and ensuring the UK remains a global leader in AI by working closely with techUK members, the UK Government, regulators, and devolved and local authorities.
Since joining techUK, Usman has delivered a regular drumbeat of activity to engage members and advance techUK's AI programme. This has included two campaign weeks, the creation of the AI Adoption Hub (now the AI Hub), the AI Leader's Event Series, the Putting AI into Action webinar series and the Industrial AI sprint campaign.
Before joining techUK, Usman worked as a policy, regulatory and government/public affairs professional in the advertising sector. He has also worked in sales, marketing, and FinTech.
Usman holds an MSc from the London School of Economics and Political Science (LSE), a GDL and LLB from BPP Law School, and a BA from Queen Mary University of London.
When he isn’t working, Usman enjoys spending time with his family and friends. He also has a keen interest in running, reading and travelling.
Sue leads techUK's Technology and Innovation work. This includes work programmes on AI, Cloud, Data, Quantum, Semiconductors, Digital ID and Digital ethics as well as emerging and transformative technologies and innovation policy. In 2025, Sue was honoured with an Order of the British Empire (OBE) for services to the Technology Industry in the New Year Honours List. She has also been recognised as one of the most influential people in UK tech by Computer Weekly's UKtech50 Longlist and was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame.
A key influencer in driving forward the tech agenda in the UK, Sue was appointed to the UK Government's Women in Tech Taskforce by the Technology Secretary in December 2025. She also sits on the UK Government's Smart Data Council, the Satellite Applications Catapult Advisory Group, the Bank of England's AI Consortium and BSI's Digital Strategic Advisory Group. Previously, Sue was a member of the Independent Future of Compute Review and co-chaired the National Data Strategy Forum. As well as being recognised in the UK's Big Data 100 and the Global Top 100 Data Visionaries in 2020, Sue has been shortlisted for the Milton Keynes Women Leaders Awards and has judged the Loebner Prize in AI, the UK Tech 50 and the annual UK Cloud Awards. She is a regular industry speaker on issues including AI ethics, data protection and cyber security.
Prior to joining techUK in January 2015, Sue was responsible for Symantec's Government Relations in the UK and Ireland. Before that, she was a senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA in History and American Studies from Leeds University and a Master's degree in International Relations and Diplomacy from the University of Birmingham. She is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.
Visit our AI Hub - the home of all our AI content:
Enquire about membership:
Become a techUK member
Our members develop strong networks, build meaningful partnerships and grow their businesses as we all work together to create a thriving environment where industry, government and stakeholders come together to realise the positive outcomes tech can deliver.
Programme Manager, Digital Ethics and AI Safety, techUK
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.
Prior to techUK, Tess worked as an AI Ethics Analyst, initially on the first dataset on Corporate Digital Responsibility (CDR) and later on the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.
Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media narratives. In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.
Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.
On 11 September, techUK held a workshop from 9:30 to 12:30 with DSIT’s Responsible Technology Adoption Unit (RTA), featuring an address from Felicity Burch, Director of RTA and facilitation by Nuala Polo, AI Assurance Lead of RTA with attendance from techUK’s Digital Ethics working group members. This session allowed for testing and feedback on a forthcoming assurance tool set for public consultation in Autumn 2024.