Leaked paper shows a quantum computer doing something a supercomputer cannot




Artist's view of quantum supremacy. (Credit: Disney / Marvel Studios)

Mathematically, it's easy to demonstrate that a working quantum computer should easily outperform conventional computers on certain problems. Demonstrating it with an actual quantum computer, however, has been another matter entirely. Most of the quantum computers we've built don't have enough qubits to handle calculations complex enough to clearly beat a traditional computer. And scaling up the number of qubits has been complicated by problems with noise, crosstalk, and the qubits' tendency to lose their entanglement with their neighbors. All of this has raised questions about whether the theoretical supremacy of quantum computing can actually make a difference in the real world.

Over the weekend, the Financial Times reported that Google researchers had demonstrated "quantum supremacy" in a draft research paper that was briefly posted to a NASA Web server before being taken down. But the details of what Google had done remained vague. In the meantime, Ars has obtained copies of the draft paper, and we can confirm the Financial Times' story. More importantly, we can now describe exactly what Google claims to have done.

To summarize: Google has analyzed the behavior of a large group of entangled qubits – 53 of them – to determine the statistics that describe a quantum system. That took roughly 30 seconds of qubit time, or about 10 minutes once communications and control traffic are added in. But determining those statistics – which is what we'd do by solving the equations of quantum mechanics – simply isn't possible on the world's fastest supercomputer.

A quantum problem

The problem Google tackled involved setting the qubits into a random pattern and then, some time later, repeatedly measuring them. Do this with a single qubit and the measurements will produce a string of random numbers. But if you entangle two qubits, a phenomenon called quantum interference starts to influence the bit string they generate. As a result, some specific arrangements of bits become more or less common than others. The same holds true as more qubits are entangled.
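To make that concrete, here is a minimal sketch (my own construction, not Google's circuits or software): a tiny two-qubit "random circuit" simulated with NumPy, in which entangling operations skew the probabilities of the four possible measurement outcomes away from a uniform 25% each.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(theta, phi):
    """Single-qubit rotation about an axis in the X-Y plane."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

# CNOT entangles the two qubits (control = first qubit, target = second).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
for _ in range(5):                             # a few random "cycles"
    u0 = rot(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
    u1 = rot(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
    state = CNOT @ (np.kron(u0, u1) @ state)

# Interference typically leaves some bitstrings more probable than others.
for bits, p in zip(["00", "01", "10", "11"], np.abs(state) ** 2):
    print(bits, f"{p:.3f}")
```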

For a small number of qubits, it's possible for a conventional computer to calculate the interference pattern, and thus the probabilities of the system's different outcomes. But the problem gets ugly as the number of qubits goes up. By running smaller versions of the problem on the world's most powerful supercomputer, the research team was able to estimate that the calculations would fail at about 14 qubits simply because the computer would run out of memory. Running them on Google's cloud computing services instead, calculations out to 20 qubits would cost 50 trillion core-hours and consume a petawatt-hour of electricity.
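For a sense of how classical simulation eventually hits a memory wall, here's a back-of-the-envelope sketch (my own arithmetic, not a figure from the paper): a full classical description of n qubits holds 2^n complex amplitudes, so the memory needed doubles with every qubit added and becomes unmanageable well before 53 qubits.

```python
# Each double-precision complex amplitude takes 16 bytes; a full n-qubit
# statevector needs 2**n of them.
for n in (40, 45, 50, 53):
    tib = (2 ** n) * 16 / 2 ** 40
    print(f"{n} qubits: {tib:,.0f} TiB")
# 53 qubits works out to roughly 131,072 TiB (128 PiB) -- far more RAM than
# any existing supercomputer has.
```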

On that basis, it would seem that a system with about 30 qubits should be enough to demonstrate quantum performance beyond what a traditional, non-quantum supercomputer can match. So, naturally, the researchers involved built one with 54 qubits, just to be sure. One of them turned out to be faulty, leaving the computer with 53.

The qubits were similar to designs that other companies have worked on: loops of superconducting wire in which current can flow in either direction. They were connected to microwave resonators that could be used to control the qubits using light of the appropriate frequency. The qubits were laid out in a grid, with connections running from each interior qubit to four of its neighbors (those at the edge of the grid had fewer connections). Those connections could be used to entangle two neighboring qubits, and sequential operations entangled ever-larger numbers of them until the entire chip was entangled.
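As a rough illustration of that layout – an assumed rectangular grid rather than Google's actual chip geometry – here is a small sketch of the nearest-neighbor connectivity described above:

```python
ROWS, COLS = 6, 9  # illustrative dimensions, not the real chip's

def neighbors(r, c):
    """Grid coordinates of the qubits coupled to the qubit at (r, c)."""
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(i, j) for i, j in candidates if 0 <= i < ROWS and 0 <= j < COLS]

print(len(neighbors(3, 4)))  # interior qubit: 4 connections
print(len(neighbors(0, 0)))  # corner qubit: only 2
```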

Unforced errors

Notably absent from this setup is any error correction. Over time, qubits tend to lose their state and thus their entanglement. That process is somewhat stochastic, so it can happen early enough to ruin the results of a calculation. With more qubits, of course, that becomes a bigger risk. But estimating the system's overall error rate requires comparing its behavior to calculated descriptions of its behavior – and we've already established that we can't calculate the behavior of this system.

To get around this, the research team started by characterizing the behavior of individual qubits. Among other things, this revealed that the error rates of different qubits on the chip varied by as much as a factor of 10. They then tested pairs of qubits and found that the error rates of the pairs were largely just a combination of the error rates of the individual qubits. Not only did this make it easier to estimate the error rate of much larger combinations, it also showed that the hardware used to connect the qubits and the process used to entangle them didn't introduce significant additional sources of error.
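One hedged way to picture that observation – with made-up error rates for illustration, not measured values from the paper – is that if the two qubits fail independently, the pair's chance of an error-free operation is roughly the product of the individual success rates:

```python
e_a, e_b = 0.002, 0.02  # assumed per-qubit error rates, reflecting the ~10x spread noted above
pair_fidelity = (1 - e_a) * (1 - e_b)
print(f"Expected pair fidelity: {pair_fidelity:.4f}")  # ~0.978
```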

That said, the error rate isn't especially impressive. "We can model the fidelity of a quantum circuit as the product of the probabilities of error-free operation of all gates and measurements," the researchers write. "Our largest random quantum circuits have 53 qubits, 1113 single-qubit gates, 430 two-qubit gates, and a measurement on each qubit, for which we predict a total fidelity of 0.2%."
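That model is easy to reproduce numerically. In the sketch below, the per-operation error rates are illustrative assumptions rather than figures taken from the paper; the point is simply that multiplying the success probabilities of roughly 1,600 imperfect operations lands at a fraction of a percent – the same ballpark as the reported 0.2%.

```python
single_qubit_gates, two_qubit_gates, measurements = 1113, 430, 53
e1, e2, em = 0.0016, 0.0062, 0.038  # assumed error rates per operation

fidelity = ((1 - e1) ** single_qubit_gates
            * (1 - e2) ** two_qubit_gates
            * (1 - em) ** measurements)
print(f"Predicted circuit fidelity: {fidelity:.2%}")  # roughly 0.15%
```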

The supremes

It should be clear, then, that this hardware isn't the makings of a general-purpose quantum computer – or at least not a general-purpose quantum computer you can trust. We needed error-corrected qubits before these results; we still need them afterward. And it's possible to argue that this was less a "computing operation" than simply "repeated measurement of a quantum system to get a probability distribution."

But that seriously undersells what's happening here. Every calculation performed on a quantum computer will ultimately end in a measurement of a quantum system. And in this case, there's simply no way to get this probability distribution using a conventional computer. With this system, we can get it in under 10 minutes, and most of that time is spent on processing that doesn't require the qubits. As the researchers put it: "To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor."

Just as significantly, it shows there's no obvious obstacle to scaling up quantum computations. The hard part is the work needed to set a number of qubits into a specific state and then entangle them. There was no obvious slowdown – no previously unrecognized physical problem that cropped up as the number of qubits increased. That should give some confidence that nothing fundamental stands in the way of building more capable quantum computers.

In recognition of the error rate, however, the researchers suggest that we're not witnessing the dawn of quantum computing so much as what they call "noisy intermediate-scale quantum" technologies. And in that sense, they may be onto something: just last week, IBM announced that it would make a 53-qubit general-purpose quantum computer available in October. It will also lack error correction, and so may be similarly unreliable (although IBM's qubits may have a different error rate than Google's). But that raises the intriguing possibility that Google's result could be confirmed using IBM's machine.

In the meantime, the only obvious use for this particular system is as a validated random number generator; there isn't a lot of obvious follow-up. Rumor has it that the final version of this paper will be published in a major journal within a month, which no doubt explains why it was taken offline so quickly. Once official publication happens, we can expect Google and some of its competitors to have more to say about the implications of this work.
