14 Jul 2025
by Ben Pirt

The public sector's legacy burden: can AI truly help?

Guest blog by Ben Pirt, Principal Software Engineer at Made Tech #techUKdigitalPS

From Made Tech’s extensive experience in the public sector, we understand the weight of legacy is a struggle felt across our public services. Outdated, complex legacy systems aren't just inconvenient: they present security vulnerabilities, drive high operational costs and demand specialised maintenance.

Adding to this complexity is the assumption that AI can modernise legacy software by itself - a potentially dangerous view of what's possible if left unchecked.

This creates a critical challenge: how can public services embrace the future when they're tied to the past? I’ll explore the practical ways industry can partner with government to navigate these challenges, with the goal of enhancing efficiency and leveraging the power of emerging technologies when it makes sense.

Where can AI help legacy systems?

First, we must recognise AI’s limitations. While some may still see it as a silver bullet, it’s far from one. That’s not to say it doesn't have its merits: AI can be a great support tool, but it’s not the main event.

For example, AI can be a great help in making sense of complex, outdated codebases in legacy systems. It can support your work with tasks like porting part of a codebase from one language to another so it's easier to understand. Using AI in this way can unlock a faster understanding of legacy systems.

On past projects we’ve used it when challenged by old code that didn’t seem to make any sense. We fed the code into an AI and asked it to summarise what the code was doing and to rename the variables so they made sense. This technique not only helped our team get to grips with the legacy code, it also set us up to easily check that the AI's outputs were correct.
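As a rough illustration, here's a minimal sketch of that workflow in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name and the legacy snippet are placeholders, and in practice you'd substitute whatever model and tooling your organisation has approved.

    # A minimal sketch of asking an LLM to explain and rename legacy code.
    # Assumes the OpenAI Python SDK; the model and snippet are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    legacy_snippet = """
    def calc(a, b, c):
        t = a * b
        if t > c:
            return t - c
        return 0
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute your organisation's approved model
        messages=[
            {"role": "system",
             "content": "You are helping a team understand legacy code."},
            {"role": "user",
             "content": "Summarise what this function does, then rewrite it "
                        "with descriptive variable and function names. Do not "
                        "change its behaviour.\n" + legacy_snippet},
        ],
    )

    print(response.choices[0].message.content)

Keeping the prompt narrow - summarise, rename, don't change behaviour - is what makes the output easy to review against the original.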

Recognising AI risks

While AI can be useful in legacy projects, it's crucial to inject a dose of caution. Industry must guide government through the risks, especially when AI tools touch critical, long-standing systems that serve the public.

It’s vital to avoid relying on AI when your team doesn’t completely understand what it’s doing behind the scenes. In the context of a complex legacy codebase, if you can't validate the AI's outputs against your own understanding of the system, you can’t safely accept its suggestions or code. That should ring alarm bells.
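One practical way to make that validation concrete is to treat any AI rewrite as untrusted and test it against the original. A minimal sketch, assuming both versions are simple Python functions (the functions here are hypothetical stand-ins for real legacy code):

    # A minimal sketch of checking an AI-suggested rewrite against the original.
    # Both functions are hypothetical stand-ins for real legacy code.
    import random

    def calc(a, b, c):                      # original legacy function
        t = a * b
        if t > c:
            return t - c
        return 0

    def surplus_after_threshold(quantity, unit_price, threshold):
        """AI-renamed version; behaviour should be identical."""
        total = quantity * unit_price
        if total > threshold:
            return total - threshold
        return 0

    # Compare the two implementations across a spread of random inputs.
    random.seed(0)
    for _ in range(10_000):
        args = [random.randint(-1000, 1000) for _ in range(3)]
        assert calc(*args) == surplus_after_threshold(*args), args

    print("original and rewrite agree on 10,000 sampled inputs")

Recorded production inputs or property-based tests would be stronger still, but even a crude comparison like this turns "trust the AI" into "check the AI".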

We're not talking about prompting AI to generate, refine and debug a simple web service instead of writing it by hand. We're talking about public sector systems where errors could have severe, real-world impact.

Human expertise remains central to quality control. This raises the critical issue of responsibility: who's going to sign off on AI outputs and say, "Yes, that's correct and I take responsibility if something goes wrong"?

These are just some risks that come with using AI in government legacy projects that industry needs to communicate clearly and effectively. Our role isn't just to provide the tools, but to ensure they're used responsibly.

Data privacy when AI meets legacy

A major point I always come back to with legacy and AI is where all that data is going to live. We're talking about old, often intricate systems that hold vast amounts of sensitive data. When we consider using AI to help understand these complex codebases, or to help with migration and testing, we run into a fundamental challenge: data security.

As we've seen early on with large language models (LLMs) like ChatGPT, they learn from the data you feed into them. The risk is that a model could give that information back to other users and effectively leak your private data. In government, where legacy systems often hold sensitive data such as citizen information, financial records and critical infrastructure details, the risk is amplified.

The privacy risk is also not just about customer data, but about keeping proprietary, closed-source code private. That code might have vulnerabilities, and keeping it closed might be the only real line of defence while it's being modernised.

We must help government ensure that any AI tools they use have their learning material restricted as a key foundational step. This isn't a technicality, it's about preventing data breaches originating from the systems we're trying to modernise.
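What that restriction looks like in practice will vary by tool, but one common pattern is to route prompts to a model hosted on infrastructure you control, rather than a public service that may retain or train on your inputs. A minimal sketch, assuming a locally hosted model behind an Ollama-style HTTP API (the endpoint, model name and code sample are all illustrative):

    # A minimal sketch of sending code to a self-hosted model so it never
    # leaves infrastructure you control. Assumes an Ollama-style local API;
    # the endpoint, model name and code sample are illustrative.
    import json
    import urllib.request

    sensitive_code = "def check_entitlement(nino, record): ..."  # stays on-premises

    request = urllib.request.Request(
        "http://localhost:11434/api/generate",   # local endpoint, not a public cloud
        data=json.dumps({
            "model": "codellama",                 # a locally pulled model
            "prompt": "Summarise what this code does:\n" + sensitive_code,
            "stream": False,
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as reply:
        print(json.loads(reply.read())["response"])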

Why deep code understanding still matters

This brings us to a current risk in software development, one with big implications for whether we'll ever truly conquer those legacy systems. There's a real danger that junior developers become “prompt engineers”: their job largely feeding prompts into LLMs and iterating, without really understanding what's happening under the hood.

If they’re not coding, how will they gain experience and deep architectural knowledge? That insight is core to helping government organisations with their trickiest legacy problems.

Understanding a multi-decade-old, often undocumented, complex system requires more than just prompts. It needs debugging skills, logical reasoning and an engineer's intuition built from years of hands-on coding. This poses a big risk to the industry's ability to tackle future complex systems, including existing legacy ones, and it's something we need to be aware of and actively work against.

Hands-on coding experience therefore remains central to developing the skilled workforce government needs: one capable of not just patching legacy IT, but modernising it for decades to come.

A measured approach to AI in government

While not an exhaustive list, these are some of the ways industry can help government with legacy challenges while making responsible use of emerging technologies. AI isn't a magic bullet: its usefulness lies not in replacing human expertise, but in supporting it.

AI’s potential can only be realised if we proceed with caution and a clear understanding of its limitations. Given the fast-moving nature of AI, staying up to date with the latest developments is a challenge, making that cautious, clear-eyed approach even more critical.

By making sure human knowledge remains at the core of responsible AI implementation, we can build a more efficient and secure future.


 
