28 Oct 2021

AI for Rapid Automated Calibration of Quantum Devices

Dr. Nathan Korda, Director of Research, Mind Foundry, discusses the use of AI to solve the quantum device calibration challenge, a blocker to reliable up-time on useful numbers of qubits.

Why are quantum computing devices powerful?  

Quantum computing (QC) devices offer an entirely new way to perform computations, driven by quantum mechanics. If we can make them work at a large enough scale, they could herald new ages in drug discovery or materials design, and they will crack our most secure encryption methods. But what makes these devices so powerful? 

All classical (normal) computing devices perform computations by flicking switches off and on. Each switch represents the smallest quantity of information possible - a bit - the answer to a yes/no question. It is either off representing “no”, or on representing “yes”. So at any point in a computation on a classical, 10-bit device you can hold a single set of answers to 10 yes/no questions. 
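To make this concrete, here is a small sketch of a classical 10-bit register. The variable names and the particular bit pattern are illustrative only; the point is that the register holds exactly one of its 1,024 possible configurations at any moment:

```python
# A classical 10-bit register holds exactly one configuration at a time.
# Each bit answers one yes/no question: 1 = "yes", 0 = "no".
NUM_BITS = 10

register = 0b1011001110  # one specific set of 10 answers

# Read out the single set of answers currently held, least-significant bit first.
answers = [(register >> i) & 1 for i in range(NUM_BITS)]
print(answers)

# 2**10 = 1024 configurations are possible, but only one is held at once.
print(2 ** NUM_BITS)
```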

You’ve probably heard how parallelising large computations can make them faster. This works by breaking the large computation down into smaller computations which can all be done at the same time. If you want to parallelise a computation in a classical computer, you need to have more bits (switches) to do the different, smaller computations simultaneously - it takes a larger computer, but you can do the whole problem faster. At the heart of the promise of QC is a super-charged form of parallelisation, called quantum parallelism, which doesn’t require the use of more hardware. 

Quantum bits, or qubits, are like switches, but they can be to some degree off and to some degree on at the same time - a single qubit can hold both answers to a yes/no question at once. This is called a quantum superposition. Things get really interesting when you have more than one qubit. In theory, 10 qubits can represent in some way every possible set of answers to 10 yes/no questions, all at once. Remember, a classical computer with 10 bits can represent only one set of yes/no answers. Just 300 qubits should be able to represent more possible sets of answers than there are atoms in the universe, all at once. If we are clever enough about it, this property of QC allows parallel computations without the need for extra qubits. The advantage is big: QC devices can be exponentially more efficient at computation than classical computing devices, performing computations that are just too big to run on any practical classical computer. 
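The arithmetic behind these claims is easy to check: an n-qubit register is described by 2&#8319; amplitudes, one per possible set of answers. The comparison below uses a common order-of-magnitude estimate of 10&#8312;&#8304; atoms in the observable universe; the function name is ours, not a standard one:

```python
# A register of n qubits is described by 2**n complex amplitudes,
# one for every possible set of n yes/no answers, held simultaneously.
def state_space_size(n_qubits: int) -> int:
    return 2 ** n_qubits

print(state_space_size(10))   # 1024 sets of answers at once

# A common order-of-magnitude estimate of atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10 ** 80

# 2**300 is roughly 2 x 10**90 - far more than the number of atoms.
print(state_space_size(300) > ATOMS_IN_UNIVERSE)  # True
```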

How do QC devices work, and what is the calibration problem?  

To create quantum superpositions that are usable for computation, QC devices need to robustly and precisely control a quantum property of a quantum object: for example, the spins of individual electrons. To realise truly transformational quantum parallelism they must be able to control superposition states that are fully entangled across hundreds of qubits. 

So, QC devices require immense precision to operate. They are also difficult to manufacture without significant variation. Even worse, the control they exert over the quantum objects is nearly impossible to isolate from external environmental influences, leading to qubit decoherence. QC devices must be regularly tuned, and their ability to operate reliably must be regularly benchmarked and verified. 

This is what we refer to as the calibration problem for quantum devices. 

AI for quantum qubit calibration  

In practice, calibration is a massive blocker to the development of quantum hardware capable of entangling hundreds of qubits. It is also a major blocker to the future effective operation of, for example, cloud quantum computation services. Labs building or running quantum computers spend hours of specialist lab technicians’ time calibrating their machines every day. Even the Sycamore device that recently demonstrated quantum supremacy took 24 hours to calibrate before the experiment could be run. And since qubits degrade over time, that calibration sequence has to happen every time the device is used. 

Recent work has shown that AI can solve this problem. Whether the calibration task is minimising state preparation error, measuring qubit decoherence times, or designing the most reliable qubit gates by shaping microwave pulses, AI can learn the physical expertise of lab technicians from judiciously gathered data, and apply it to achieve automated calibration routines orders of magnitude faster than manual processes. If this success can be generalised and systematised across QC hardware types, we could see commercially relevant QC devices years earlier than currently projected, and put them to work solving the world’s most important problems. 
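A minimal sketch of what an automated calibration loop looks like in spirit. Everything here is hypothetical: the error model is a toy stand-in for a real, noisy state-preparation-error measurement, and production systems typically use probabilistic machine learning such as Bayesian optimisation rather than the simple interval-narrowing search shown:

```python
# Toy automated calibration loop (illustrative only).
# "measured_error" stands in for running a state-preparation experiment
# at one control setting and reading back an error rate.

def measured_error(pulse_amplitude: float) -> float:
    """Hypothetical error model: a bowl around an unknown optimum."""
    optimum = 0.62  # not known to the calibration routine
    return (pulse_amplitude - optimum) ** 2 + 0.01  # residual error floor

def calibrate(lo: float, hi: float, budget: int = 60) -> float:
    """Narrow the search interval with two probe measurements per step."""
    for _ in range(budget // 2):
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if measured_error(a) < measured_error(b):
            hi = b  # optimum lies left of b
        else:
            lo = a  # optimum lies right of a
    return (lo + hi) / 2

best = calibrate(0.0, 1.0)
print(round(best, 3))  # converges close to the true optimum of 0.62
```

The design point is the loop itself: measure, update a belief about where the best setting lies, measure again. An AI-driven calibrator replaces the fixed search rule with a learned model of the device, which is what makes it robust to noise and transferable across qubits.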

A cautionary word: a cross-disciplinary approach required  

We should remember that whenever we build AI systems to solve a problem of this magnitude we need to take an interdisciplinary approach. We need to be close to the problem and to the experts who understand it better than anyone. Such expertise has tended to be concentrated in Big Tech companies such as Google and IBM, but the United Kingdom has all the expertise it needs to solve this problem. 

A recent InnovateUK-funded project will bring this expertise together, joining leading QC and Machine Learning software companies, Riverlane and Mind Foundry, with cutting-edge hardware device manufacturers, Oxford Ionics and SeeQC, as well as world-leading RTOs in QC, NPL and the University of Edinburgh. 

Author: 

Dr. Nathan Korda, Director of Research, Mind Foundry 

 

Quantum Commercialisation Week


 
