Quantum computing progress: Higher temps, better error correction


There's a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have each committed to a different technology, while a collection of startups are exploring an even wider range of potential solutions.


We probably won't have a clearer picture of what's likely to work for a few years. But there's going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we're going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.


Hot stuff


Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we'd need tens of thousands of hardware qubits before we could do anything practical.
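To make the redundancy idea concrete, here's a minimal sketch in Python of the classical analogue: a three-bit repetition code. Real quantum codes are subtler (they must measure parities without reading out the encoded data), and this is not any company's actual scheme, but it shows how syndrome measurements turn a higher physical error rate into a lower logical one.

```python
# A minimal sketch of the redundancy idea, using the classical analogue:
# a three-bit repetition code. Quantum codes must check parities without
# reading out the encoded data, but the basic logic is the same.
import random

def encode(bit):
    return [bit, bit, bit]              # one logical bit, three copies

def add_noise(bits, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in bits]

def syndrome(bits):
    # Parity checks play the role of the extra "monitor" qubits: they
    # reveal where a flip happened without revealing the data itself.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    s = syndrome(bits)
    if s == (1, 0): bits[0] ^= 1        # flip localized to bit 0
    if s == (1, 1): bits[1] ^= 1        # flip localized to bit 1
    if s == (0, 1): bits[2] ^= 1        # flip localized to bit 2
    return bits

trials = 100_000
failures = sum(correct(add_noise(encode(0)))[0] != 0 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f} (physical rate: 0.05)")
```

With a 5 percent physical error rate, the logical rate lands around 0.7 percent. Quantum error correction schemes aim for the same kind of leverage, at the cost of extra hardware.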


A number of companies have looked at that problem and decided we already know how to create hardware on that scale—just look at any silicon chip. So, if we could etch useful qubits through the same processes we use to make current processors, then scaling wouldn't be an issue. Typically, this has meant fabricating quantum dots on the surface of silicon chips and using these to store single electrons that can hold a qubit in their spin. The rest of the chip holds more traditional circuitry that performs the initialization, control, and readout of the qubit.


This creates a notable problem. Like many other qubit technologies, quantum dots need to be kept below one Kelvin in order to keep the environment from interfering with the qubit. And, as anyone who's ever owned an x86-based laptop knows, all the other circuitry on the silicon generates heat. So, there's the very real prospect that trying to control the qubits will raise the temperature to the point that the qubits can't hold onto their state.


That may not be the problem we thought it was, according to some work published in Wednesday's Nature. A large international team that includes people from the startup Diraq has shown that a silicon quantum dot processor can work well at the relatively toasty temperature of 1 Kelvin, up from the millikelvin temperatures at which these processors normally operate.


The work was done on a two-qubit prototype made with materials that were specifically chosen to improve noise tolerance; the experimental procedure was also optimized to limit errors. The team then performed normal operations starting at 0.1 K and gradually ramped the temperature up to 1.5 K, checking performance as they did so. They found that a major source of errors, state preparation and measurement (SPAM), didn't change dramatically in this temperature range: "SPAM around 1 K is comparable to that at millikelvin temperatures and remains workable at least until 1.4 K."


The error rates they did see depended on the state they were preparing. One particular state (both spin-up) had a fidelity of over 99 percent, while the rest were less constrained, at somewhere above 95 percent. States had a lifetime of over a millisecond, which qualifies as long-lived in the quantum world.
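For a rough sense of why a millisecond lifetime matters, here's a back-of-envelope sketch. The lifetime is the paper's figure, but the per-operation time below is an assumed, illustrative value for spin qubits, not a number from this work.

```python
# Back-of-envelope on why a ~1 ms lifetime matters. The lifetime comes
# from the paper; the per-operation time is an assumed, illustrative
# value for spin qubits, not a measured figure from this work.
T_LIFETIME = 1e-3    # seconds: reported state lifetime (~1 ms)
T_OPERATION = 1e-6   # seconds: assumed time per gate operation (1 µs)

ops_per_lifetime = T_LIFETIME / T_OPERATION
decay_error_per_op = T_OPERATION / T_LIFETIME  # crude leading-order estimate

print(f"~{ops_per_lifetime:,.0f} operations fit within one lifetime")
print(f"~{decay_error_per_op:.2%} decay-induced error per operation")
```

Under those assumptions, roughly a thousand operations fit inside one lifetime, with decay contributing around 0.1 percent error per operation.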


All of which is pretty good, and suggests that the chips can tolerate reasonable operating temperatures, meaning on-chip control circuitry can be used without causing problems. The error rates of the hardware qubits are still well above those that would be needed for error correction to work. However, the researchers suggest that they've identified error processes that can potentially be compensated for. They expect that the ability to do industrial-scale manufacturing will ultimately lead to working hardware.

Shrinking logical qubits


Meanwhile, in the same edition of Nature, IBM researchers describe a new form of error correction for use with its superconducting qubits, called transmons. Transmons are typically connected to their nearest neighbors on a two-dimensional surface. That lends itself to what are called "surface code" error correction schemes, where clusters of neighboring transmons are linked into a single logical qubit. Unfortunately, effective surface code schemes require dozens to over a hundred hardware qubits per logical qubit.
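For a sense of the scale involved, a common "rotated" surface-code layout of distance d uses d² data qubits plus d² − 1 measurement qubits per logical qubit. That's a standard textbook count rather than anything from IBM's paper, but the arithmetic shows why "dozens to over a hundred" is the going rate:

```python
# Standard qubit count for a "rotated" surface code of distance d:
# d^2 data qubits plus d^2 - 1 measurement qubits per logical qubit.
# (A textbook formula, not a figure from IBM's paper.)
for d in (3, 5, 7, 11):
    total = 2 * d * d - 1
    print(f"distance {d:2}: {total:3} hardware qubits per logical qubit")
```

Even a modest distance-5 code needs 49 hardware qubits for a single logical qubit, and distance 11 pushes past 240.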


But IBM plans to have error-corrected quantum computing long before it can put enough qubits together for surface codes to work. Instead, it has based its roadmap on a theoretical form of logical qubit that uses far fewer hardware qubits but requires long-range connections between them, an approach called "low-density parity-check," or LDPC, codes.


The problem with this is that IBM's current quantum processors don't have any long-range connections, so there's no way to test this in hardware. Still, the company has done extensive modeling of how the system might behave, based on the properties of the hardware qubits it has produced.


The simulations show that one of these LDPC schemes can handle a dozen logical qubits using just 288 physical qubits—far fewer than a useful surface code would need (depending on the details, about 3,000 hardware qubits). At reasonable hardware error rates, its overall error rates are similar to those of the surface code. And it benefits greatly from improving hardware: "as the physical error probability crosses the threshold value, the amount of error suppression achieved by the code can increase by orders of magnitude even with a small reduction of the physical error rate."
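The accounting behind a claim like "a dozen logical qubits from 288 physical qubits" comes from the code's parity-check matrices: for a CSS-style code on n data qubits, the number of logical qubits is k = n − rank(H_X) − rank(H_Z), with the ranks taken over GF(2). Here's a sketch of that counting on a small stand-in code (the Steane code; these are not IBM's actual check matrices):

```python
import numpy as np

def gf2_rank(matrix):
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    m = np.array(matrix, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]     # move pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                 # clear the column
        rank += 1
    return rank

# Stand-in example: the small Steane code, whose X and Z checks are both
# the classical Hamming(7,4) parity-check matrix. These are NOT IBM's
# matrices; they just demonstrate the counting rule.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

n = 7                                   # data qubits
k = n - gf2_rank(H) - gf2_rank(H)       # k = n - rank(H_X) - rank(H_Z)
print(f"{k} logical qubit(s) encoded in {n} data qubits")
```

IBM's scheme applies the same rule at a much larger scale: by the paper's accounting, 144 data qubits encode the 12 logical qubits, with the remainder of the 288 serving as measurement qubits.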


In the end, the optimal system IBM explored had an error rate of about 2×10⁻⁷, which means it could maintain a dozen logical qubits through about a million error correction cycles. Options that use even more hardware qubits would improve matters even further.
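The "million cycles" figure is straightforward arithmetic on that per-cycle rate. Taking the quoted 2×10⁻⁷ at face value and assuming cycles fail independently (a simplification), survival odds fall off like this:

```python
# Rough survival arithmetic for the quoted figures. Takes the ~2e-7
# logical error rate per error-correction cycle at face value and
# assumes cycles fail independently (a simplification).
p_per_cycle = 2e-7

for cycles in (100_000, 1_000_000, 10_000_000):
    p_survive = (1 - p_per_cycle) ** cycles
    print(f"{cycles:>10,} cycles: {p_survive:6.1%} chance of no logical error")
```

Around a million cycles is where a logical failure starts to become a realistic possibility, which matches the paper's framing.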


But, again, that's all done by modeling. What will IBM need to do to implement these ideas in hardware? It will have to roughly double the number of qubit-to-qubit interconnections from its existing configurations, where no qubit is connected to more than three others. It will also have to build chips with longer-range connections. So, we may have to wait a bit for this to be tested in the real world.


Errors vs. qubit count


Amazon is playing two roles in quantum computing. The first is through its Braket cloud service, which allows other companies to provide online access to quantum computing hardware. The second is its own effort to develop quantum computers, also based on transmons, though it's using those transmons in distinct ways to try to bring error rates down.


In a paper published last week, a team of Amazon and academic researchers describes error correction with what they term a dual-rail transmon. Physically, a transmon links a loop of superconducting wire with a microwave resonator, where the photons in the resonator and the current in the wire can influence each other's behavior. IBM stores the qubit in the current loop and uses microwaves to control its state. But it's possible to do things the other way around, storing the qubit in the state of the photons in the resonator.


Amazon is putting a distinct twist on this by using two transmons to store a single qubit. The qubit is constructed using a single photon, which can reside in either of the two transmons, meaning the pair can be in either the 01 or 10 state (where each digit indicates whether the corresponding transmon holds the photon).


In the dual-rail system, the majority of the errors occur when the photon gets lost, leaving the system in a 00 state. It turns out that it's easy to link a third transmon to the dual-rail system and have it act as a sensor to determine if there's a photon in the system—it can only enter an excited state if the photon has leaked out of the dual-rail pair, allowing these errors to be caught and corrected.
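A toy Monte Carlo makes the appeal obvious. The loss and error probabilities below are made-up illustrations, not Amazon's measured numbers, but they show why converting photon loss into a flagged "erasure" is so valuable: most errors announce themselves instead of silently corrupting the data.

```python
# Toy Monte Carlo of dual-rail erasure detection. "10"/"01" encode the
# qubit (which transmon holds the single photon); losing the photon
# leaves "00". The probabilities are illustrative, not measured values.
import random

P_LOSS = 0.01     # assumed chance the photon leaks away per cycle
P_FLIP = 0.0005   # assumed chance of a rarer, undetectable error

flagged, silent = 0, 0
for _ in range(1_000_000):
    state = "10"
    if random.random() < P_LOSS:
        state = "00"      # photon lost: the state leaves the code space
    elif random.random() < P_FLIP:
        state = "01"      # error that stays inside the code space

    if state == "00":
        flagged += 1      # the third transmon raises its hand
    elif state == "01":
        silent += 1       # left for conventional error correction

print(f"flagged erasures: {flagged:,}  silent errors: {silent:,}")
```

Nearly all of the errors end up flagged, leaving only the far rarer silent ones for conventional error correction to handle.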


The result is a qubit where the inherent error rate is well below that needed for error correction schemes to work—where adding more hardware qubits to create a logical qubit doesn't increase the probability of error enough to offset the gains. There are two big drawbacks. One is that the error check is relatively slow compared to other operations, which can potentially slow down calculations.


The bigger drawback is that it doesn't eliminate the need for additional error correction to prevent the less common types of errors. So, the key question is whether the additional hardware cost—three transmons per qubit—is offset by simplified error correction, which could then reduce the qubit count.


It's worth emphasizing that some of these advances could complement each other. IBM's error correction scheme, for example, should be usable with either of the hardware systems being tested by the other two research groups. But not all combinations are possible—transmons and quantum dots are completely distinct physical systems.


Still, these papers show that, even if you're fundamentally skeptical of the prospects for quantum computing, there's plenty of interesting research aimed at providing the components needed to get things working. It's unclear how much of it will make its way into future quantum computing efforts, but it does show that the people trying to push the field forward haven't come close to running out of ideas.


Nature, 2024. DOI: 10.1038/s41586-024-07160-2

Nature, 2024. DOI: 10.1038/s41586-024-07107-7

Physical Review X, 2024. DOI: 10.1103/PhysRevX.14.011051