22 Jun 2022

Guest Blog from StackHPC: Anything-as-a-service through Cloud

Author: John Taylor, CEO and Co-Founder at StackHPC

The ubiquity of cloud computing has radically transformed how organisations process and access their data, and ours. Combined with high-performance computing (HPC), data analytics and the burgeoning application of AI and machine learning, it puts unprecedented levels of processing power within reach. With quantum computing also set to become realisable over the next decade, the fourth industrial revolution will truly be upon us, and the convergence of HPC, AI and Cloud offers considerable competitive advantage.

Combining the skills and knowledge needed to identify opportunities, leverage these technologies and deliver value-adding services is now a crucial consideration in this software-defined, digital, data-driven transformation.

A distributed continuum of compute

Organisations exploiting HPC and AI infrastructure today face a continuum of computational resources, ranging from conventional HPC clusters and on-premises cloud through to public cloud; each provides particular benefits in terms of cost, performance, agility, sovereignty, accessibility and security. These resources are by and large disparate, and achieving interoperability between workflows is not straightforward. Efficient software ecosystems do exist for managing computation, networking and storage on HPC facilities, cloud infrastructures and edge-based systems; however, these address the parochial requirements of their respective infrastructure layers and typically fail to interoperate smoothly.

This disjunction presents barriers to organisations locked into static HPC infrastructure that traditionally relies on, for example, job-based scheduling and lacks the flexibility to adopt other workflow types that can operate over cloud resources in a more agile manner. At very large scale, modern workflows now couple complex HPC simulation applications with AI services and surrogate models to steer computation.

In the continuum of compute, the problem of data gravity becomes acute at the cloud-to-edge interface, as in IoT applications in city infrastructure, scientific and medical instrumentation, autonomous vehicles and Agritech, where the movement of data is costly. Cloud has always offered a means to centralise data and bring computation to it; Edge, by contrast, seeks to exploit processing technologies that can perform computationally intensive tasks more effectively at the data source, distributing fractions of the data back for further analytics or training. Achieving frictionless movement of data across such federated infrastructures faces significant inertia around security and trust, which will need to be addressed at both the policy and the technical level.

Realising the potential of these disparate infrastructures requires bringing HPC and AI together at all levels in a hybrid cloud, enjoying the benefits of both worlds.


A “hybrid cloud” brings distinct advantages. It allows HPC service providers to offload, for example, smaller HPC workloads, freeing up expensive, dedicated infrastructure for larger-scale simulations. It also helps organisations develop knowledge and skills in the effective use of cloud native workflow management and interfaces, reducing the barrier to entry and driving further service offerings.

At StackHPC we adopt a cloud-first, software-defined approach built on the foundation of these cloud interfaces and the access patterns required by modern cloud native workflows operating over bare metal (traditional HPC clusters), virtualised (traditional cloud) and containerised infrastructure. The cloud-first strategy is based on an “open” approach in which cloud native methods exist across the continuum and skills can be developed and transferred.

We have worked with leading industry and research organisations to provide private cloud infrastructure for HPC and AI, which has, to some extent, established their sovereignty; these organisations now see the need to extend that footprint into other clouds and services. Such “anything-as-a-service” models already represent significant value in the HPC and AI market and, we believe, are set to grow further over the next decade. In this respect, we look forward to working on our first QCaaS infrastructure in this software-defined future.
