Christopher Savoie, PhD, is the CEO & founder of Zapata Computing. He is a published scholar in medicine, biochemistry and computer science.
Every day, we get closer to the point where quantum computers will outperform existing classical computers when solving real business problems. We call this milestone “quantum advantage,” and it could have profound implications across a broad spectrum of industries, ranging from pharmaceuticals and chemistry to machine learning and cybersecurity.
How far away is quantum advantage? It could be closer than you might think.
From a hardware standpoint, Honeywell, Google, IBM, Rigetti and IonQ all continue to make impressive progress on their respective devices. Google, for example, made waves last October when researchers used its Sycamore quantum device to perform a calculation that, by their estimates, would have taken a classical supercomputer 10,000 years to solve. More recently, Google AI Quantum and collaborators used the device to perform a simulation of two intermediate-scale chemistry problems.
While these feats have grabbed headlines, they represent only a small portion of the innovative solutions and discoveries the quantum community produces on an ongoing basis. These discoveries not only lay the groundwork for the quantum devices of tomorrow but, more importantly, help us optimize the quantum devices of today.
The Problem Of Noise
Qubits (the quantum equivalent of bits in classical computing) and the quantum logic gates that act on them, chained together into circuits, are the beating heart of today’s quantum devices. The gates we work with today are noisy, which means the values they produce are prone to error. One reason is that quantum states are extremely sensitive to external conditions and don’t remain “coherent” for long.
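To see how quickly small gate errors compound, here is a minimal sketch in plain Python and numpy (no vendor’s quantum toolkit); the depolarizing noise channel and the 0.5% per-gate error rate are illustrative assumptions, not measurements of any real device:

```python
# Toy density-matrix simulation of why noisy gates limit circuit depth:
# each imperfect gate application mixes in a little depolarizing noise,
# and the qubit drifts toward a useless 50/50 mixture.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)      # the ideal gate: a bit flip
I2 = np.eye(2, dtype=complex)
p_error = 0.005                                    # assumed per-gate error rate

rho = np.array([[1, 0], [0, 0]], dtype=complex)    # qubit starts in |0><0|
ideal = rho.copy()
for depth in range(1, 201):
    rho = X @ rho @ X.conj().T                     # noisy device: apply the gate...
    rho = (1 - p_error) * rho + p_error * I2 / 2   # ...then the depolarizing channel
    ideal = X @ ideal @ X.conj().T                 # noiseless reference evolution
    if depth % 50 == 0:
        fidelity = np.real(np.trace(rho @ ideal))  # overlap with the ideal state
        print(f"depth {depth:>3}: fidelity with the ideal state = {fidelity:.3f}")
```

Even at this modest error rate, fidelity falls below 70% by a depth of 200 gates, which is why deep circuits are out of reach for today’s hardware.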
Due to this noise, quantum devices are not yet capable of running the most well-known and potentially lucrative quantum algorithms: Shor’s and Grover’s. To give you a sense of their power, I believe successful calculations using these algorithms could overcome the most advanced encryption techniques and render cybersecurity as we know it obsolete.
John Preskill predicted in 2018 that “noisy intermediate-scale quantum,” or NISQ, devices would be available in the near future. In the last year, remarkable progress has been made in optimizing NISQ devices, and much of it has come from efforts to reduce the impact of noise in quantum circuits. Multiple teams, including my company’s, are developing noise models that let quantum algorithms account for gate noise and thereby enhance effective gate performance.
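One well-known mitigation idea in this spirit is zero-noise extrapolation; to be clear, this is a generic technique, not necessarily the specific models these teams are building. A minimal simulated sketch, assuming the signal decays exponentially as noise is amplified:

```python
# Zero-noise extrapolation: run the same computation at deliberately
# amplified noise levels, fit the trend, and extrapolate back to zero noise.
import numpy as np

rng = np.random.default_rng(0)
true_value = -1.0                           # assumed ideal, noiseless expectation value

def noisy_run(noise_scale, base_error=0.04):
    """Simulated device result: the signal decays as noise is amplified."""
    return true_value * np.exp(-base_error * noise_scale) + rng.normal(0, 1e-3)

scales = np.array([1.0, 2.0, 3.0])          # run at 1x, 2x and 3x amplified noise
results = np.array([noisy_run(s) for s in scales])
fit = np.polyfit(scales, results, deg=2)    # fit the trend in the noise scale
mitigated = np.polyval(fit, 0.0)            # extrapolate back to zero noise
print(f"raw result (1x noise): {results[0]:+.4f}")
print(f"mitigated estimate:    {mitigated:+.4f}   (true value {true_value:+.4f})")
```

The mitigated estimate lands far closer to the true value than any single noisy run, using only classical post-processing on top of the noisy hardware.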
Significantly, researchers at CQT have demonstrated theoretically that noise correction in quantum computing can be carried out without adding quantum circuit depth, overhead that was once thought necessary. This means that dealing with noise in NISQ devices may no longer depend on adding more and more qubits (an undertaking that poses challenges of its own). At the same time, other researchers have developed a protocol for characterizing the quantum noise in a system more efficiently and precisely so that it can be addressed by quantum algorithms.
As exciting and promising as this work is, cutting through the noise, so to speak, doesn’t get us all the way to quantum advantage. Even with a hypothetically flawless quantum device, one without any noise, there is still another challenge to be faced: the measurement problem.
Solving The Measurement Problem
For many quantum calculations, including those that will have the greatest impact in the wider world, you need to sample, or measure, the quantum device repeatedly to get an accurate result. Because the statistical error of such estimates shrinks only with the square root of the number of samples, a tenfold tighter accuracy target demands roughly a hundred times as many measurements. Without the qubits required to take measurements in parallel, today’s quantum devices can only take them in sequence, so the time it takes to reach a valuable solution could stretch into years. Obviously, that hardly represents an advantage, quantum or otherwise, over classical computers.
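A minimal simulation (plain Python, with an assumed single-qubit state) makes the cost concrete: each hundredfold increase in shots buys only a tenfold improvement in accuracy.

```python
# Shot noise in quantum measurement: estimating the expectation value <Z>
# of a qubit by simulating repeated single-shot readouts.
import numpy as np

rng = np.random.default_rng(42)
theta = 1.0                             # assumed state preparation; true <Z> = cos(theta)
true_expectation = np.cos(theta)
p_zero = (1 + true_expectation) / 2     # Born rule: probability of reading out |0>

for shots in [100, 10_000, 1_000_000]:
    outcomes = rng.random(shots) < p_zero       # True means the shot read |0>
    estimate = 2 * outcomes.mean() - 1          # convert the counts back to <Z>
    error = abs(estimate - true_expectation)
    print(f"{shots:>9} shots -> estimate {estimate:+.4f}, error {error:.4f}")
```

On hardware that can only measure sequentially, this unforgiving square-root scaling is exactly what turns tight accuracy targets into runtimes measured in years.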
Researchers at my company have discovered an algorithmic method for increasing the information gained by each measurement we take today. As you can imagine, increasing the amount of information per measurement reduces the number of measurements needed for an accurate result. This in turn shortens the time needed for running calculations and represents an important step for optimizing near-term devices.
The method relies on a form of Bayesian inference that uses engineered likelihood functions (ELFs): the sampling circuit is enhanced so that each measurement yields as much as nine times the information gain of a conventional sampling circuit. (If you would like to read a scientific preprint on this method, you can do so here.)
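The sketch below is a heavily simplified, grid-based illustration of that Bayesian idea, not Zapata’s actual ELF construction. It assumes earlier shallow-circuit measurements have already narrowed the prior, and it models a depth-L enhanced circuit as one whose outcome likelihood oscillates (2L + 1) times faster in the unknown parameter:

```python
# Toy Bayesian inference with an "enhanced" likelihood: deeper sampling
# circuits concentrate the posterior over theta faster per measurement.
import numpy as np

rng = np.random.default_rng(7)
theta_true = 0.8                                   # unknown quantity we estimate
grid = np.linspace(0.01, np.pi - 0.01, 4000)       # discretized values of theta
prior = np.exp(-0.5 * ((grid - 0.82) / 0.05) ** 2) # narrow prior from earlier shallow runs

def posterior_std(layers, shots=200):
    """Bayesian-update the grid posterior on simulated outcomes; return its std."""
    k = 2 * layers + 1                             # deeper circuit -> faster oscillation
    posterior = prior.copy()
    p_plus = (1 + np.cos(k * theta_true)) / 2      # true outcome probability
    likelihood = (1 + np.cos(k * grid)) / 2        # likelihood over the theta grid
    for _ in range(shots):
        d = rng.random() < p_plus                  # one simulated measurement
        posterior *= likelihood if d else (1 - likelihood)
        posterior /= posterior.sum()
    mean = (grid * posterior).sum()
    return np.sqrt(((grid - mean) ** 2 * posterior).sum())

for layers in [0, 1, 2]:
    print(f"{layers} enhancement layers -> posterior std "
          f"{posterior_std(layers):.5f} after 200 shots")
```

In this toy model, the information per shot grows with the square of the amplification factor, which is the intuition behind multi-fold gains in information per measurement; the real ELF analysis is considerably more involved.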
Other researchers presented a measurement approach (download required) that removes susceptibility to readout error and would “require a much smaller number of repetitions to measure the ground state energy to within a fixed accuracy target.”
Solving the measurement problem does more than shorten the time that measurement demands. It can also improve the performance of near-term quantum hardware. With insight from enhanced sampling algorithms, for example, you can determine the circuit depth and fidelity a device needs for its quantum speed-up to reach the point of quantum advantage.
The recent algorithmic breakthroughs in reducing quantum error, both from noise and measurement, show that we don’t need to wait for hardware to be perfect to realize quantum advantage. In fact, they highlight the importance of advances in quantum software. A computing device, after all, is only as good as the software you run on it. By developing software solutions that not only take into account the noisy nature of today’s hardware, but even compensate for it, we can get closer and closer to unleashing the power that quantum advantage promises.