Keeping track of quantum computing has been a bit confusing because there are multiple approaches to it. Most of the effort has gone into so-called gate-based computers, which let you perform logical operations on individual qubits. These are well understood theoretically and can perform a wide range of calculations. But gate-based systems can be built from a variety of qubits, including photons, ions, and electronic devices called transmons, and companies have grown up around each of these hardware options.
But there is a separate form of computation called quantum annealing that also involves manipulating collections of interconnected qubits. Annealing is not as well understood theoretically, but it appears well suited to a class of optimization problems. And when it comes to annealing hardware, there is only one company: D-Wave.
Now things are about to get even more muddled. On Tuesday, D-Wave released its roadmap for upcoming processors and software for its quantum annealers. But D-Wave also announced that it will develop its own gate-based hardware, which it will offer alongside its annealers. We spoke with the company's CEO, Alan Baratz, to understand all of the announcements.
Annealing hardware
The easiest part of the announcement to understand is what's happening with D-Wave's quantum annealing processor. The current processor, called Advantage, has 5,000 qubits and 40,000 connections between them. These connections play a major role in the chip's performance because, if a direct connection between two qubits cannot be made, other qubits have to act as a bridge between them, which lowers the effective number of qubits.
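For readers who want to see what that bridging looks like in practice, D-Wave publishes an open-source embedding tool called minorminer. The following is only a toy sketch: the fully connected five-variable problem and the small grid standing in for the chip's layout are invented for illustration, not taken from the Advantage topology.

```python
# A minimal sketch (not D-Wave's production flow) of why connectivity matters.
# The 5-variable problem and the 4x4 grid "hardware" graph are made-up stand-ins.
import networkx as nx
import minorminer

# A fully connected problem: every variable interacts with every other variable.
problem = nx.complete_graph(5)

# A sparse stand-in for the chip's qubit-connectivity graph.
hardware = nx.grid_2d_graph(4, 4)

# find_embedding maps each logical variable onto one or more physical qubits;
# where a direct coupling is missing, a chain of qubits acts as the bridge.
embedding = minorminer.find_embedding(problem.edges, hardware.edges)

for variable, chain in embedding.items():
    # Chains longer than one qubit are the "bridges" described above -- they
    # consume physical qubits and lower the effective qubit count.
    print(variable, "->", chain)
```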
Starting this week, users of D-Wave’s cloud service will have access to an updated version of Advantage. The qubit and connection counts will remain the same, but the device will be less influenced by noise in the system (in technical terms, its qubits will maintain their coherence for longer). “This performance update will allow us to solve larger problems with greater precision and a greater likelihood of accuracy due to some new manufacturing processes that we are using,” Baratz told Ars. He said the improvements came from changes in the process used to make the qubits and in the materials used to create them.
Noise in a quantum annealer does not necessarily mean that it will produce a “wrong” result. For optimization problems, it typically means that the machine will not find the absolute best solution but will find something close to it. So the reduced noise in the new processor means the machine is more likely to land on something closer to the true optimum.
Further into the future, the follow-up system, Advantage 2, is expected at the end of next year or early the year after. It will see another increase in the number of qubits, to somewhere above 7,000. But connectivity will also increase dramatically, with D-Wave targeting 20 connections per qubit. “Now that we’ve crossed a certain threshold on the number of qubits, it looks like connectivity will give us the biggest boost,” Baratz told Ars.
Further from the hardware
D-Wave provides a set of development tools that it calls Ocean. In previous iterations, Ocean let people step back from direct control of the hardware; instead, if a problem could be expressed as a quadratic unconstrained binary optimization (QUBO), Ocean could issue the commands needed to handle all the hardware configuration and run the problem on the annealer. D-Wave called this a hybrid solver because Ocean uses classical computing to optimize the QUBO before execution.
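As a rough illustration of that workflow, here is a minimal sketch using the Ocean SDK's hybrid sampler. It assumes a configured Leap account and API token, and the two-variable QUBO is a toy example rather than anything from D-Wave's announcement.

```python
# A minimal sketch of the hybrid QUBO workflow described above.
import dimod
from dwave.system import LeapHybridSampler  # requires a configured Leap API token

# QUBO for "pick exactly one of x0, x1": minimize -x0 - x1 + 2*x0*x1.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Ocean's hybrid sampler handles the hardware configuration, splitting the work
# between classical preprocessing and the quantum annealer.
sampler = LeapHybridSampler()
result = sampler.sample(bqm)

# The lowest-energy sample found is the (approximately) optimal assignment.
print(result.first.sample, result.first.energy)
```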
The only problem is that not everyone who might be interested in D-Wave’s hardware knows how to express their problem as a QUBO. So the new version of Ocean will add another layer of abstraction, allowing problems to be submitted to the system in the formats generally used by the people who tend to work on these kinds of problems. “You will now be able to specify the problems in the language that scientists and data analysts understand,” Baratz promised.
If that works, it could remove a major hurdle that might otherwise keep people from testing whether D-Wave’s hardware offers any speedup for their problems.
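The announcement doesn't spell out what that higher-level format looks like, but Ocean's constrained-model classes give a flavor of the idea: stating an objective and constraints directly instead of hand-crafting a QUBO. The knapsack numbers below are invented, and the class names should be read as an illustration of the concept rather than as the specific feature Baratz described.

```python
# A hedged illustration of a higher-level problem statement (a tiny knapsack),
# using dimod's constrained-model classes; the weights and values are made up.
import dimod
from dwave.system import LeapHybridCQMSampler  # requires a Leap account

weights = [3, 4, 5]   # made-up item weights
values = [10, 12, 7]  # made-up item values
x = [dimod.Binary(f"item_{i}") for i in range(3)]

cqm = dimod.ConstrainedQuadraticModel()
# Objective and constraint are written directly; no hand-built QUBO required.
cqm.set_objective(-sum(v * xi for v, xi in zip(values, x)))  # maximize total value
cqm.add_constraint(sum(w * xi for w, xi in zip(weights, x)) <= 8, label="capacity")

sampler = LeapHybridCQMSampler()
result = sampler.sample_cqm(cqm)

# Keep only samples that satisfy the capacity constraint, then show the best one.
print(result.filter(lambda row: row.is_feasible).first.sample)
```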
And gates
The biggest part of today’s announcement, however, may be that D-Wave also intends to build gate-based hardware. Baratz explained that he believes optimization will likely remain a valid use for annealing, pointing to a draft publication showing that structuring some optimization problems for gate-based hardware can be so computationally expensive that it would offset all the gains the quantum hardware could provide. But it’s also clear that gate-based hardware can solve a range of problems that quantum annealing cannot.
He also argued that D-Wave has worked out a number of issues that currently limit progress on gate-based hardware built from electronic qubits called transmons. These include the amount and size of the hardware needed to send control signals to the qubits, and the ability to pack the qubits densely enough that they are easy to connect but not so close that they start to interfere with one another.
One of the issues D-Wave faces, however, is that the qubits it uses for annealing aren’t useful for gate-based systems. Although they are built from the same hardware (the Josephson junction), the annealing qubits can only be set to up or down, while a gate-based qubit must allow manipulations in three dimensions. So the company will instead try to build flux qubits, which also rely on Josephson junctions but use them in a different way. At least some of the company’s engineering expertise should therefore still apply.
Does the rest of it carry over? There’s no way to find out without building some hardware, and Baratz said the first test qubits were just being cooled to operating temperature when we spoke. He was also cautious about predicting qubit counts once the hardware is ready for public use, saying, “until we build and measure, I’m not going to guess.”