Securing the AI Frontier: Threats and Trust in 2025
AI has had an undeniable impact on our digital lives. From ovens that adjust temperatures in real time to technologies helping to clear plastic pollution from the ocean, AI’s uses range from the banal to the beautiful. Unfortunately, AI has something of a Jekyll and Hyde character – while it can do immense good, it also presents enormous security risks.
AI’s Impact on Cybersecurity
AI's impact on cybersecurity has been more profound than its impact on almost any other sector.
Generative AI enables cybercriminals to effortlessly create convincing phishing messages, eliminating the tell-tale spelling, formatting, and grammar mistakes that typically give them away. These tools allow even unsophisticated attackers to write in any language, mimic individual writing styles, and even create deepfake videos to fool their victims.
One doesn’t have to look far for examples of this kind of incident: in 2024, a finance worker was tricked into paying out $25 million after a video call with a deepfake of their chief financial officer.
That said, the impact of AI on cybersecurity isn’t wholly negative. Considering the UK still suffers from a serious cyber skills gap, AI’s ability to process vast amounts of information quickly can make all the difference for overstretched, understaffed analyst teams, enabling them to respond to threats they would otherwise have missed.
However, in 2025, we must also turn our attention to agentic AI.
The Rise of Agentic AI
Agentic AI is the newest, most exciting, and most concerning evolution of artificial intelligence. Unlike “traditional” AI, these systems possess a degree of autonomy, meaning they can act independently, make decisions, set goals and plans, execute tasks, learn and adapt, and even reason. They don’t just react to prompts; they can proactively work towards a goal, adjusting their approach as necessary, much as a human would in the real world.
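To make the distinction concrete, here is a deliberately tiny sketch of the plan-act-adapt loop that characterises agentic systems. It is a toy illustration only: the hard-coded planner and the stub tools are assumptions standing in for a real model and real integrations, not any actual agent framework.

```python
# A minimal sketch of the plan-act-adapt loop that distinguishes agentic AI
# from prompt-and-response systems. The "planner" and the tools below are
# stand-in stubs, not a real model or agent framework.

from typing import Callable

def make_agent(tools: dict[str, Callable[[str], str]]):
    def run(goal: str, max_steps: int = 5) -> list[str]:
        history: list[str] = []
        for step in range(max_steps):
            # 1. Plan: a real agent would ask a model which tool to use next,
            #    given the goal and everything observed so far.
            tool_name = "search" if step == 0 else "summarise"
            # 2. Act: execute the chosen tool autonomously.
            observation = tools[tool_name](goal)
            history.append(f"{tool_name}: {observation}")
            # 3. Adapt: stop early once the goal looks satisfied.
            if "done" in observation:
                break
        return history
    return run

# Toy tools standing in for real integrations (web search, file access, APIs).
agent = make_agent({
    "search": lambda g: f"found background material on '{g}'",
    "summarise": lambda g: f"summary of '{g}' produced; done",
})
print(agent("quarterly threat report"))
```

A production agent would replace the hard-coded planner with a model call and the toy tools with real systems access, which is precisely where the risks discussed below arise.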
Of course, these capabilities are enormously valuable. Agentic AI has unprecedented potential for efficiency, innovation, and automation; many organizations have already used it effectively. Agentic AI can supercharge consumer chatbots, streamline tasks like enrolling clinical trial participants and managing post-hospitalization care, and enhance cybersecurity threat detection and response.
However, it should go without saying that agentic AI systems also present enormous risk. The threats to agentic AI themselves aren’t necessarily new, but the autonomy and ubiquity of these tools amplify potential consequences far beyond those posed by traditional software. These systems can misinterpret objectives, act unpredictably, or be manipulated, leading to unintended outcomes. Moreover, the complexity and self-modifying nature of these models can outpace human oversight, leading to challenges in ensuring transparency and accountability.
API Threats and Their Impact on AI Systems
It would be impossible to discuss AI security without mentioning API threats. AI systems rely on APIs to function, meaning that for cybercriminals seeking to compromise AI systems, APIs have become a major attack vector.
According to the 2025 Imperva Bad Bot Report, APIs have become a prime target for malicious bot attacks, posing a serious threat to AI systems that rely on them for data exchange and automation. 44% of advanced bot traffic now targets APIs, exploiting business logic to commit fraud, scrape data, and hijack accounts, potentially undermining the integrity of AI outputs.
As agentic AI systems become more autonomous, their dependence on APIs makes them uniquely vulnerable to subtle, large-scale abuse. Considering that automated traffic now surpasses human activity online and AI tools are accelerating bot development, securing APIs has never been more important.
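By way of illustration, the sketch below shows one of the most basic API defenses, a per-client sliding-window rate limiter. The threshold and client key are assumptions for the example; production defenses layer this kind of control with bot fingerprinting and behavioural analysis.

```python
# A minimal sliding-window rate limiter, one of the basic controls used to
# blunt automated API abuse. Thresholds here are illustrative only.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # assumed budget per client per window

_requests: dict[str, deque] = defaultdict(deque)

def allow_request(client_key: str) -> bool:
    """Return True if this client is within its request budget."""
    now = time.monotonic()
    window = _requests[client_key]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # likely automated traffic; throttle or challenge
    window.append(now)
    return True

# Example: an aggressive client is cut off once it exceeds the budget.
for _ in range(105):
    ok = allow_request("client-123")
print(ok)  # False once the budget is exhausted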
The Evolving Role of IAM in the AI Era
Traditional identity verification tools, which are part of broader IAM solutions, rely on domain-based verification. Essentially, they trust the device and hope the person behind it is who they say they are. However, as DNS spoofing has become more common, this technique has become inadequate. Facial recognition has emerged as a potential solution to this problem, but the rise of AI and deepfakes has compromised its effectiveness.
IAM and facial recognition tools must evolve to recognize deepfakes by carrying out the following practices (sketched in code after this list):
- Create an initial profile by collecting user-submitted data.
- Use device intelligence, behavioural analytics, and biometrics for identity affirmation.
- Prove identity with facial recognition technology equipped with liveness detection to weed out deepfakes.
- Enrol the device by securely linking the validated identity to it.
- Ensure compliance with backend checks against anti-money-laundering (AML) and sanctions lists.
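As a rough illustration, the five steps above might be wired together as a single pipeline like this. Every function name here is hypothetical, standing in for calls to real identity, biometric, and screening providers.

```python
# A sketch of the five-step verification flow as one pipeline. Every check
# below is a hypothetical stub; real deployments would call vendor SDKs for
# biometrics, liveness detection, and AML/sanctions screening.

from dataclasses import dataclass, field

@dataclass
class Enrolment:
    user_id: str
    profile: dict = field(default_factory=dict)
    device_bound: bool = False
    verified: bool = False

def enrol(user_id: str, submitted: dict, device_id: str) -> Enrolment:
    e = Enrolment(user_id=user_id, profile=submitted)   # 1. initial profile
    signals = collect_signals(device_id)                # 2. device + behavioural signals
    if signals["risk"] == "high":
        raise ValueError("device intelligence flags elevated risk")
    if not passes_liveness(submitted.get("selfie")):    # 3. liveness-aware face match
        raise ValueError("possible deepfake: liveness check failed")
    e.device_bound = bind_device(user_id, device_id)    # 4. device enrolment
    e.verified = screen_watchlists(e.profile)           # 5. AML / sanctions screening
    return e

# Stubs standing in for real identity-vendor calls.
def collect_signals(device_id): return {"device": device_id, "risk": "low"}
def passes_liveness(selfie): return selfie is not None  # real checks use challenge-response, depth
def bind_device(user_id, device_id): return True        # e.g. a FIDO2 key bound to the identity
def screen_watchlists(profile): return True             # query AML / sanctions list providers

print(enrol("alice", {"name": "Alice", "selfie": b"..."}, "device-42").verified)
```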
Data Quality Becomes Mission-Critical
As the public sector accelerates AI adoption, the quality and integrity of data become mission critical. Poor data leads to poor decisions—especially when training AI models. That’s where Data Security Posture Management (DSPM) solutions come in. By automatically classifying sensitive data, applying robust encryption, and managing keys across hybrid environments, DSPM helps ensure that data used in AI systems is both high quality and secure. These tools offer continuous visibility into where data resides, who has access, and how it’s protected—enabling government organisations to build AI systems on a foundation of trust, compliance, and accountability.
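As a simplified illustration of the classification step, the sketch below scans records for sensitive patterns before they reach an AI pipeline. The patterns and labels are assumptions for the example; real DSPM tools combine far richer classifiers with encryption and key management.

```python
# A toy illustration of the classification step in a DSPM workflow: flag
# records containing sensitive patterns before they feed an AI pipeline.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # National Insurance number shape
}

def classify(record: str) -> set[str]:
    """Return the set of sensitive-data labels detected in a record."""
    return {label for label, rx in PATTERNS.items() if rx.search(record)}

batch = [
    "Contact: j.smith@example.gov.uk",
    "NINO on file: QQ123456C",
    "Meeting notes, nothing sensitive",
]
# Only records with no sensitive labels would go into training data unredacted.
for rec in batch:
    print(classify(rec) or "clean", "->", rec)
```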
Trusted AI: The Foundation for Secure Adoption
These developments drive home the importance of robust governance frameworks and trusted AI models. The European Union’s AI Act is the most developed example, categorizing AI applications based on risk levels and enforcing stringent requirements for high-risk systems. While the UK isn’t quite as far along, initiatives like the AI Safety Institute and the AI Opportunities Action Plan aim to promote responsible AI development through risk assessments and ethical guidelines.
Ultimately, AI has created a brave, exciting, and somewhat terrifying new world for cybersecurity. But it’s important not to lose our heads: with the right tools, technologies, and procedures, it’s more than possible to counter both the threats to AI and the threats created by it, and to harness its immense benefits for the public sector and society.
Thales has a robust portfolio of AI cybersecurity solutions to help you navigate the evolving technology and security landscape with confidence. See AI cybersecurity solutions.

techUK - Seizing the AI Opportunity
For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.
AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
