21 Apr 2026

Disconnected AI: more than just GPUs and open models

Disconnected AI isn’t just about hosting models behind the firewall. In the most secure environments, delivering real AI value demands precision engineering across platforms, workflows, governance and controls. This article explores why disconnected AI is fundamentally an engineering challenge — and where success or failure is truly decided.

By Tom Galpin-Swan, Master Technologist, DXC Technology

Disconnected AI means delivering AI capability without external dependencies, a need most often driven by high security postures. As security classifications increase, reliance on public cloud and external providers reduces, and at the highest levels, environments may be fully disconnected. In these settings, all AI capability must remain inside the same security boundary. Yet the challenge extends beyond deploying AI on premises: it is about delivering meaningful AI capability without weakening the environment's protective posture.

A common assumption is that this is mainly an infrastructure problem. Buy the GPUs, host an open-source or open-weight model locally, and the organisation has effectively addressed the AI challenge in a disconnected setting. In reality, that is only the starting point. Hardware and model hosting create the possibility of AI, but not the value. Consistent outcomes depend on whether the organisation can achieve the quality and throughput needed for real operational impact.

This is where the difference between hyperscaler AI and on-premises AI becomes critical. Hyperscaler platforms combine frontier models with highly optimised infrastructure, mature tooling, orchestration, observability and embedded supporting services. That broader ecosystem is a major part of why frontier offerings can produce strong outcomes with comparatively modest engineering effort. The model matters, but it is not acting alone.

By contrast, on-premises AI in disconnected environments starts from a very different position. Hosting a model locally, even on capable GPU infrastructure, does not recreate the surrounding platform advantages that hyperscalers provide by default. The organisation still needs to engineer the full stack inside the boundary, from model hosting and integration through to orchestration, monitoring, reporting, identity, policy, audit and assurance. Without that broader capability, the result is often a working model that falls short of the performance, usability or dependability that users now associate with modern AI.

So the real question is not simply what model can be hosted, but what outcome needs to be achieved, and what engineering is required to achieve it. In disconnected AI, value is shaped by three factors working together: model capability, platform contribution and engineering precision. Hyperscaler platforms paired with frontier models tend to be more forgiving. On-premises deployments using open models generally require much more deliberate design to approach the same standard of output.

This is where DXC turns potential into performance. Our strength is not simply in deploying AI inside secure environments, but in reducing the gap between frontier expectations and what can be delivered on premises. We do that by increasing engineering precision across the entire solution. That starts with deconstructing the client requirement into clear tasks, data needs and success measures. From there, we select and optimise models for secure domain demands, design prompts and workflows for efficiency and reliability, and ensure that infrastructure and platforms are tuned for throughput and resilience.

Just as importantly, we apply governance and assurance from the outset. In disconnected environments, AI cannot be treated as an isolated model deployment. It must operate as part of a coherent stack inside the same secure boundary, with the controls, auditability and operational discipline needed to support dependable use in practice. That combination of software engineering, platform engineering, AI expertise and experience working with secure customers is what enables us to move quickly while remaining aligned to the realities of highly secure delivery.

The AI market will continue to move fast. New models will emerge, expectations will rise, and the gap between what is possible in hyperscaler ecosystems and what can be delivered on premises will remain a defining challenge for secure organisations. Disconnected AI is not about replicating public cloud conditions perfectly. It is about understanding where the value really comes from, then engineering the right combination of model, platform and controls to deliver that value inside the boundary.

Disconnected AI is more than just GPUs and open models. It is an engineering challenge, and for organisations in the strictest environments, that is exactly where success or failure will be decided.

Learn more about DXC Cybersecurity and DXC AI & Data solutions.

 


Authors

Tom Galpin-Swan

Master Technologist, DXC Technology

Tom Galpin-Swan is a Master Technologist at DXC Technology. A seasoned technologist with a 20-year career in High Performance Computing, he has more recently been expanding his expertise into AI. As a Technical Leader, he drives innovation from concept to production, collaborating with diverse stakeholders to deliver business value. With a talent for communicating complex technical concepts in a relatable way, Tom brings non-technical people on the journey, empowering teams to harness the power of AI and drive growth through technology.