02 Apr 2026

The EU AI Act is two years old – here’s your three-step enterprise playbook

The AI market has accelerated like few industries before it: analysts project it will reach $4.8 trillion by 2033. From generative AI to AI agents and beyond, it is emerging as the dominant frontier technology, and its rapid growth reinforces why proportionate, risk-based regulation matters. The EU AI Act – a first-of-its-kind, sector-agnostic regulatory framework – was politically agreed by the European institutions in 2023 and entered into force in 2024. The Act sets out obligations tied to the risk an AI system poses to safety and fundamental rights, rather than prohibiting broad swathes of the technology.

However, as the EU AI Act comes into fuller effect, with guidance still unfolding or delayed, and with an omnibus bill waiting in the wings that could reshape the Act in key areas, the question is: how should companies best prepare, and which steps should be prioritised first?

Where we stand now

Key provisions of the AI Act are already binding (for example, the general prohibitions and certain transparency obligations), while others follow a phased timeline. Notably, on 2 August 2026, major obligations under the EU AI Act – including most of those governing high-risk systems – will take effect. However, the path to implementation is not without hurdles. Recent discussions surrounding the omnibus bill, and the potential for further delays or structural changes, have introduced a layer of regulatory uncertainty. Political shifts and administrative complexities within the EU bloc mean that while the framework is set, the specific timelines and fine-print requirements could still evolve. For enterprises, this means the goal isn’t just static compliance, but building an agile strategy that can withstand a shifting legal landscape.

Importantly, while the legislation is European, its reach extends far beyond the EU’s borders. The Act’s extraterritorial scope means that organisations that develop, deploy, import, or distribute AI systems affecting the EU market should treat it as applicable and prepare accordingly. For enterprises, the implication is clear: innovation is not prohibited, but accountability is a must, transparency is a requirement, and documentation is key. Ultimately, the question isn’t whether you can use AI, it’s whether you can demonstrate that you’re using it responsibly, transparently, and with appropriate safeguards.

Organisations that have already incorporated privacy and governance into their AI stack will be in a far better position to adapt, regardless of whether the omnibus bill triggers further adjustment to the rollout. At Box, our focus is on enabling customers to use AI responsibly through strong data governance, enterprise controls, and transparent vendor relationships. Enterprises that treat regulatory preparation as an opportunity to build robust governance – rather than a checklist exercise – will not only reduce compliance risk but also strengthen customer trust and long-term resilience as the regulatory landscape continues to mature.

With this theme in mind, below is a practical three-step playbook to help enterprises prepare for AI regulation in the EU here and now, taking a pragmatic, risk-focused approach.

1. Understand your role in the AI value chain 

The EU AI Act recognises that AI systems are rarely built and deployed by a single organisation. It defines roles across an AI value chain: providers, deployers, importers and distributors, and downstream providers; each has different obligations. 

Most enterprises aren’t building large general-purpose AI models themselves – they’re integrating AI capabilities from trusted vendors into existing workflows and platforms. In many cases, that makes them “deployers” under the Act. Even if you’re not developing the underlying model, you remain accountable for how it’s applied. For instance, are you using AI in recruitment funnels to screen candidates and review large volumes of applications? Is AI integrated into your customer experience channels? Are chatbots the first point of contact for queries? Each of these questions is key not only to understanding your role, but to whether you are using AI within high-risk categories, which carry additional obligations.

Preparation begins with visibility. Map where AI is embedded across your business and identify which systems rely on third-party models. This visibility audit is key to spotting regulatory pitfalls, especially if future changes to the Act redefine the thresholds for high-risk applications. 

The good news is that you don’t need to solve every compliance question overnight. But you do need clarity on where AI lives and how it is being used in your organisation. That audit is the very foundation of governance: without it, you cannot assign responsibilities, perform impact assessments, or demonstrate compliance to regulators.
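In practice, the output of this audit can be a simple, structured register. The sketch below (in Python) shows one illustrative way to record each use case alongside your role under the Act and a provisional risk tier. The field names and example entries are assumptions for illustration only, and classifications should always be confirmed with legal counsel.

from dataclasses import dataclass
from enum import Enum

# Roles the Act defines across the AI value chain.
class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

# The Act's risk-based tiers, simplified for an internal inventory.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    """One row in the AI visibility audit."""
    name: str                     # e.g. "CV screening in recruitment"
    business_owner: str           # who is accountable internally
    vendor_model: str             # third-party model or system relied on
    role: Role                    # our role under the Act for this use case
    risk_tier: RiskTier           # provisional tier, pending legal review
    processes_personal_data: bool

# A provisional register built from the questions above.
register = [
    AIUseCase("Chatbot as first point of contact for customer queries",
              "Head of Customer Experience", "third-party LLM via API",
              Role.DEPLOYER, RiskTier.LIMITED, True),
    AIUseCase("AI-assisted screening of job applications",
              "Head of Talent Acquisition", "vendor scoring model",
              Role.DEPLOYER, RiskTier.HIGH, True),  # employment uses are listed as high-risk
]

# Surface the entries that attract the heaviest obligations first.
for uc in sorted(register, key=lambda u: u.risk_tier is not RiskTier.HIGH):
    print(f"[{uc.risk_tier.value:>9}] {uc.name} ({uc.role.value})")

Even a register this simple makes the next steps concrete: every high-risk entry gets an owner, an impact assessment, and documented oversight.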

We’re quickly entering a major enforcement window around the EU AI Act, much as we saw with GDPR, which has now levied over €6.8bn in fines. What we’ve seen with GDPR lights the path for the EU AI Act: these accountability and visibility processes can’t be sidelined unless you want to face costly non-compliance consequences.

2. Move from principles to proof 

Many organisations have already published responsible AI principles. That’s a strong start, but under the EU AI Act, high-level commitments won’t be enough. Regulators will expect evidence of oversight. This means being able to demonstrate how AI use is governed in practice: who can access it? What guardrails are in place? Is usage logged and traceable? 

This is where governance needs to shift from policy documents to operational controls. 

For example, if generative AI is being used to analyse enterprise content, are you minimising the amount of data exposed to models? Are existing access permissions preserved? Can administrators monitor usage patterns and investigate anomalies? 
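To make that concrete, here is a minimal sketch of a single governed “choke point” in front of a model call. The hooks check_permission, redact, and call_model are hypothetical stand-ins for whatever access-control, minimisation, and model-invocation layers you already run; none of these names refer to a specific product or API.

import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def check_permission(user: str, document_id: str) -> bool:
    # Hypothetical hook: defer to the same ACLs that govern the document
    # itself, so AI access never widens existing permissions.
    return True

def redact(text: str) -> str:
    # Hypothetical hook: strip or mask personal data before it reaches
    # the model (data minimisation).
    return text

def call_model(prompt: str) -> str:
    # Hypothetical hook: the actual vendor model invocation.
    return "summary..."

def governed_ai_call(user: str, document_id: str, text: str) -> str:
    if not check_permission(user, document_id):
        raise PermissionError(f"{user} may not run AI over {document_id}")
    prompt = redact(text)
    response = call_model(prompt)
    # Record who ran what, when, over which content; a hash gives
    # traceability without storing the content itself.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "document_id": document_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }))
    return response

print(governed_ai_call("analyst@example.com", "doc-42", "Quarterly report ..."))

The design point is that permission checks, minimisation, and logging sit in one place, so every AI call is governed the same way and the audit trail regulators may ask for accumulates as a by-product of normal use.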

By operationalising privacy and security in AI practices today, businesses reduce the risk of future regulatory misalignment. If the omnibus bill or subsequent legislative reviews lead to delays, that extra time should be used to strengthen evidence-based governance rather than to pause efforts. So ask yourself: if regulators requested evidence of AI oversight tomorrow, could you provide it clearly and coherently? If the answer is no, or even maybe, you have work to do.

3. Treat trust as a strategic advantage 

It’s tempting to view the EU AI Act as a regulatory hurdle, especially with the potential for further delays creating a “wait and see” atmosphere. However, forward-looking organisations see it as a catalyst to build durable trust and real, practical AI governance. Customers are asking sharper questions about how their data interacts with AI, and boards want reassurance that risks are managed.

Designing for trust starts with data minimisation and demands flexibility. The EU AI Act will not be the final word on AI governance. Organisations that build adaptable frameworks now, capable of incorporating new transparency requirements or shifting oversight thresholds, will be better prepared for what comes next. Trust comes from layering compliance with a culture in which AI can scale sustainably.

Act now, scale with confidence 

We have already raced through the past two years of the EU AI Act, and while the August 2026 window remains the target, enterprises must stay vigilant regarding the omnibus bill's impact on the final timeline. Policy development, system configuration, cross-functional alignment and employee training all take time. 

The organisations that start now – auditing use cases, clarifying roles, and embedding controls – will be ready not only for the current version of the EU AI Act, but for the broader, evolving regulatory landscape. The businesses that thrive will be those that build trust into their AI strategy from day one and prove they can back it up, regardless of potential legislative changes on the horizon.

