Google Demonstrates Impractical Improvement In Quantum Error Correction – But It Does Work

Google has demonstrated a significant step forward in error correction for quantum computing – although the method described in a paper this week remains some way off a practical application.

In October 2019, Google claimed quantum supremacy when its 54-qubit processor Sycamore completed a task in 200 seconds that the search giant said would take a classical computer 10,000 years to finish. The claim was then hotly contested by IBM, but that is another story.


A qubit is the quantum equivalent of a conventional computing bit. Each qubit can be 0 or 1, as in classical computing, but can also be in a state where it is both 0 and 1 at the same time. That mixed state is known as a "superposition". In theory, as you add qubits, the power of your quantum computer grows exponentially, as 2^n, where n is the number of qubits.
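To make that exponential concrete, here is a minimal sketch (the function name is ours, for illustration): describing an n-qubit state classically requires tracking 2^n complex amplitudes, one per possible bit pattern.

```python
def state_space_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe an n-qubit state."""
    return 2 ** n_qubits

# Each added qubit doubles the state space.
for n in (1, 2, 10, 54):
    print(n, "qubits ->", state_space_size(n), "amplitudes")
```

At Sycamore's 54 qubits, that is already roughly 1.8 × 10^16 amplitudes – which is why simulating such devices classically becomes so costly.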

Now, in practical terms, it is difficult to overstate exactly how much heavy lifting the words "in theory" are doing in that last sentence.

Qubits are notoriously unstable, and susceptible to the slightest environmental interference, but understanding how much error that instability introduces is also difficult. Conventional computers are also prone to errors, but account for them by making copies of bits and performing a comparison.
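The classical copy-and-compare approach mentioned above can be sketched in a few lines – this is a toy repetition scheme, not anything from Google's paper: store several copies of a bit and take a majority vote to undo an occasional flip.

```python
from collections import Counter

def majority_vote(copies):
    """Recover a bit from redundant copies by taking the most common value."""
    return Counter(copies).most_common(1)[0][0]

# One of three copies flipped by noise; the comparison recovers the original.
print(majority_vote([1, 0, 1]))  # -> 1
```

Quantum computers cannot use this trick directly: the no-cloning theorem forbids copying an unknown qubit state, which is why error correction there needs a different approach.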

Looking inside a qubit without destroying its state is impossible, as quantum mechanics pioneer Erwin Schrödinger famously illustrated when he imagined trying to assess the true health of a cat randomly subjected to a life-threatening quantum event inside a sealed box.

Google's approach to the problem is to create a parallel set of qubits "entangled" with the qubits performing the calculation, exploiting another of the strange phenomena of quantum mechanics.

Although arrays of physical qubits have been used to represent a single "logical qubit" before, this is the first time they have been used to detect and suppress errors in this way. In the Chocolate Factory's setup, five to 21 physical qubits were used to represent a logical qubit and, with some post-hoc classical computing, the team found that error rates fell exponentially with each additional physical qubit, according to a paper published in Nature this week. The team was also able to demonstrate that the error suppression remained stable over 50 rounds of correction.
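A common way to model that exponential suppression – a hedged sketch, not figures from Google's paper – is that the logical error rate per round falls as a suppression factor Λ raised to a power that grows with the code distance d. The constants below are illustrative assumptions only.

```python
def logical_error_rate(d: int, lam: float = 3.0, c: float = 0.1) -> float:
    """Toy model: logical error per round ~ c / lam**((d + 1) / 2).

    d is the code distance; lam (the suppression factor) and c are
    illustrative values, not measured quantities.
    """
    return c / lam ** ((d + 1) / 2)

# A distance-d repetition code uses 2*d - 1 physical qubits
# (d data qubits plus d - 1 measurement qubits).
for d in (3, 5, 7, 9, 11):
    print(2 * d - 1, "physical qubits ->", logical_error_rate(d))
```

In this model each step up in distance divides the logical error rate by the same factor – which is why adding physical qubits pays off exponentially, provided the physical qubits themselves are good enough.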

So far, so good – but the experiment by Google research scientist Julian Kelly and his team was a demonstration of a method that could one day be used to create a good system for error correction in quantum computing. It is not yet an effective system for error correction itself.

One problem is scale, explained Martin Reynolds, Gartner distinguished vice president. The paper suggests a practical quantum computer might need 1,000 to 10,000 error-correction qubits for each logical qubit.

"You can see that the scale isn't there, but the fact that they're doing it at all demonstrates that it works," he told The Register.

Meanwhile, researchers would need to improve qubit stability considerably to get towards a workable machine.

"They are working on really poor quality qubits. These early qubits just aren't good enough, they have to get at least 10 times better in terms of their noise and stability, even to do error correction of the kind that we're going to need. But just to have this piece of the puzzle in place is a really good sign," Reynolds said.

Kuan Yen Tan, CTO and co-founder at quantum computing firm IQM, told us: "What Google did was to show that this one method of error correction and detection is very suitable for the topology that they have in their system. It's a very important milestone to show that the proof of principle works. Now, you just need to scale it up, and scaling is a very big challenge; it's something that's not trivial. You still need thousands if not millions of qubits to be able to do error correction and detection. That's still a really huge technological gap that you have to overcome."

But these are not the only challenges that remain. Google's approach to error correction uses classical computers to spot likely errors using data from the physical qubits after its quantum processor has run the algorithms.
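The classical post-processing step is loosely analogous to syndrome decoding – here is a hedged classical toy for a 3-bit repetition code, not Google's actual decoder: two parity checks between neighbouring bits identify which bit flipped without reading any data bit on its own, much as ancilla measurements avoid collapsing the data qubits.

```python
def syndrome(bits):
    """Parity checks between neighbouring bits of a 3-bit codeword."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit implicated by the syndrome, if any."""
    flip_index = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(bits)
    out = list(bits)
    if s in flip_index:
        out[flip_index[s]] ^= 1
    return out

print(correct([0, 1, 0]))  # middle bit flipped -> [0, 0, 0]
```

The decoding itself is fast here; the bottleneck Kuan describes below is feeding measurement data out of the quantum processor, deciding, and acting back on the qubits before the error changes again.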

The next step is doing error correction on the fly. Kuan said Google's experiment relied on a set of classical controls when detecting errors, which takes "a really, really long time."

"Then you have to go back to the qubit and say: OK, now we have to correct the error. By that time, the error is something else already. I believe that is the bottleneck at the moment for the experiment," he said.

Still, Google's authors argue, in a peer-reviewed study, that their results suggest that quantum error correction can be successful in keeping errors under control. Although the approach is not yet at the threshold of error rates needed to realise the potential of quantum computing, the study indicates that the architecture of Google's Sycamore processor may be close to achieving this threshold, the researchers said. ®
