
This is what the people behind Google's quantum supremacy claim have to say about it



Hartmut Neven, the head of Google's Quantum AI lab, walked Ars and others through an overview of the company's quantum computing efforts this week.

John Timmer

SANTA BARBARA, Calif. – Early this fall, a paper leaked on a NASA web page indicating that Google engineers had built and tested hardware that achieved what was termed "quantum supremacy," completing a calculation that would be effectively impossible on a traditional computer. The paper was quickly taken down, and Google remained silent, leaving the rest of us to speculate about its plans for this device and any follow-ups the company had in preparation.

That speculation ended today when Google released the final version of the leaked paper. But perhaps more significantly, the company invited the press to its quantum computing lab, talked about its plans, and gave us time to chat with the researchers behind the work.

The supremacy result

"I'm not going to bother explaining the quantum superiority paper – if you were invited to come here, you probably all read the leaked paper," said Hartmut Neven, head of Google's Quantum AI lab. But he found it difficult to resist the topic entirely, and the others who spoke to reporters were more than happy to expand the discussion on Neven.

Google's Sergio Boixo explained the experiment in detail, describing how a random number source was used to configure the qubits' gates, after which the system's output was measured. The process was then repeated millions of times. While a regular computer will produce the same output given the same initial configuration, qubits can take on values that make their measured output probabilistic, meaning the result of any single measurement cannot be predicted. With enough measurements, however, it is possible to recover the probability distribution of the outputs.
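
To make the procedure concrete, here's a minimal sketch of random circuit sampling on a tiny statevector simulator. This is an illustration built on numpy, not Google's code; the gate set, circuit depth, and three-qubit size are all assumptions.

```python
# Minimal sketch of random circuit sampling: apply randomly chosen
# gates, then "measure" repeatedly to build up the output distribution.
# Illustrative only -- not the gates or scale of the real experiment.
import numpy as np

rng = np.random.default_rng(0)
n = 3                      # number of qubits (the real chip used 53)
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0             # start in |00...0>

def apply_single(state, q, u):
    """Apply a 2x2 unitary u to qubit q of the statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z between qubits q1 and q2."""
    psi = state.copy()
    for idx in range(dim):
        if (idx >> (n - 1 - q1)) & 1 and (idx >> (n - 1 - q2)) & 1:
            psi[idx] *= -1
    return psi

# A few layers of random single-qubit rotations plus entangling CZs.
for _ in range(8):
    for q in range(n):
        theta = rng.uniform(0, 2 * np.pi)
        u = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]], dtype=complex)
        state = apply_single(state, q, u)
    for q in range(n - 1):
        state = apply_cz(state, q, q + 1)

# Repeated "measurement": sample bitstrings from |amplitude|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()       # guard against float drift
samples = rng.choice(dim, size=1_000_000, p=probs)
counts = np.bincount(samples, minlength=dim)
print(counts / counts.sum())   # empirical distribution approaches probs
```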

Calculating that distribution is possible on a classical computer for a small number of qubits. But as the total number of qubits increases, it becomes impossible to do so within the lifetime of existing supercomputer hardware. And if the error rate were high enough, the probability distribution the system produced would differ significantly from the calculated one (assuming the number of qubits was small enough for you to calculate it at all).
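
The scaling is easy to see from the memory needed just to hold a full description of the quantum state: n qubits require 2^n complex amplitudes. A quick back-of-the-envelope check, assuming 16 bytes per amplitude:

```python
# Memory needed to hold a full n-qubit statevector, assuming
# 16 bytes per complex amplitude (two 64-bit floats).
for n in (20, 30, 40, 53):
    bytes_needed = (2 ** n) * 16
    print(f"{n} qubits: {bytes_needed / 2**40:.3g} TiB")
```

At 53 qubits, the raw statevector alone comes to 2^57 bytes, or 128 PiB – far beyond the RAM of any existing supercomputer.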

Google employees admitted that the problem was specifically chosen because quantum machines can produce results on it even with a high error rate. But as researcher Julian Kelly put it, "if you can't beat the world's best classical computer on a contrived problem, you'll never beat it on anything useful." Boixo emphasized that the problem provided a useful test, showing that the full system's error rate remained a simple linear extrapolation of the errors involved in setting and reading out individual pairs of qubits.

This apparently indicates that no additional fragility is introduced by the increasing complexity of the system. Although this had been shown before for smaller collections of qubits, Google's hardware extends the range of previous measurements by a factor of 10¹³.
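
The linear extrapolation Boixo described amounts to multiplying together the fidelities of every individual operation and checking that the whole circuit behaves no worse than that product. A hedged sketch of that estimate; the error rates and gate counts below are illustrative placeholders, not Google's published figures:

```python
# Predicted circuit fidelity under a simple multiplicative error
# model: F ~ product over operations of (1 - error). All numbers
# here are assumed, for illustration only.
e1 = 0.0015    # single-qubit gate error (assumed)
e2 = 0.006     # two-qubit gate error (assumed)
er = 0.038     # readout error per qubit (assumed)

n_qubits = 53
n_1q_gates = 1_100   # assumed gate counts for a deep random circuit
n_2q_gates = 430

fidelity = ((1 - e1) ** n_1q_gates
            * (1 - e2) ** n_2q_gates
            * (1 - er) ** n_qubits)
print(f"predicted fidelity: {fidelity:.2%}")
```

If the measured fidelity keeps tracking this simple product as circuits grow, that's evidence that no new, emergent error source appears at scale.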

Google and its hardware

However, none of that explains how Google ended up with a quantum computing research project to begin with. According to various people, the work was an outgrowth of academic research going on at the University of California, Santa Barbara. Several of the Google staff hold academic positions there and have graduate students working on projects at Google. The relationship was initiated by Google, which began looking at the prospect of doing its own quantum computing work at about the same time the academics were looking for ways to expand beyond the work traditionally done at universities.

Google's interest was spurred by its AI efforts. There are a number of potential applications of quantum computing in AI, and the company had already experimented a bit with a D-Wave quantum annealer. But gate-based quantum computers had not matured enough to run much more than demonstrations, so the company decided to build its own. To do so, it settled on superconducting qubits called transmons – the same choice that others in the field, like IBM, have made.

The hardware itself is a capacitor connected to a superconducting Josephson junction, in which a collection of electrons behaves as if it were a single quantum object. Each qubit behaves like an oscillator, with its two possible output values corresponding to stillness or motion. The hardware is quite large, which makes it relatively easy to control – you can bring wires right up next to it, something you can't do with individual electrons.

Google has its own fabrication facilities, and the company makes the wiring and qubits on separate chips before combining them. But the challenges don't end there. The chip's packaging plays a role in shielding it from the environment, and it brings control and readout signals in from external hardware – Google's Jimmy Chen noted that packaging is so important that the team member responsible was honored with first authorship on the resulting paper.

The control and readout wiring consists of a superconducting niobium-titanium alloy, which is one of the most expensive individual parts of the entire device, according to Pedram Roushan. That wiring connects the chip to external control hardware, with five wires required for each qubit. (This wiring requirement is starting to cause problems, as we'll get to later.)

The external control hardware for quantum computers is quite extensive. As Google's Evan Jeffrey described it, traditional processors include circuitry that helps control the processor's behavior in response to relatively sparse external inputs. That's not true for quantum processors – every aspect of their control must be supplied from external sources. Currently, Google's setup loads all the control instructions into extremely low-latency external hardware and then runs them repeatedly. Even so, Jeffrey told Ars, as the complexity of the instructions has grown with the number of qubits, the time the qubits spend idle has risen from 1% to 5%.

Chen also described how simply assembling the hardware isn't the end of the challenge. While the individual qubits are designed to be identical, small defects or impurities and the local environment can each change the behavior of individual qubits. As a result, each qubit has its own operating frequency and error rate, and these must be determined before a given chip can be used. Chen is working on automating this calibration process, which currently takes about a day.
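
As a rough illustration of what one calibration step can look like, here's a hedged sketch – my own construction, not Google's tooling – that sweeps a simulated drive frequency and fits a Lorentzian to locate a qubit's resonance. The 6.42 GHz resonance and the noise level are assumed values.

```python
# Toy version of one calibration step: find a qubit's operating
# frequency by sweeping a drive tone and fitting the response.
# The "measurement" here is simulated, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, width, amp, offset):
    return offset + amp * (width / 2) ** 2 / ((f - f0) ** 2 + (width / 2) ** 2)

# Simulated measurement: a resonance at 6.42 GHz plus noise.
rng = np.random.default_rng(1)
freqs = np.linspace(6.3, 6.5, 201)            # GHz
true = lorentzian(freqs, 6.42, 0.01, 1.0, 0.05)
measured = true + rng.normal(0, 0.02, freqs.size)

# Fit to recover the qubit frequency; initial guess from the peak.
guess = (freqs[np.argmax(measured)], 0.02, 1.0, 0.0)
params, _ = curve_fit(lorentzian, freqs, measured, p0=guess)
print(f"calibrated qubit frequency: {params[0]:.4f} GHz")
```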

What is coming, hardware-wise

The processor that ran the quantum supremacy experiment is based on a hardware design called Sycamore, and it has 53 qubits (one device in a planned array of 54 was non-functional). That's actually a step down from the company's previous Bristlecone design, which had 72 qubits. But Sycamore has more connections between its qubits, which better matches Google's long-term design goals.

Google describes the design goal as the "surface code," and its focus is enabling fault-tolerant, error-corrected quantum computing. The surface code, as Google's Marissa Giustina described it, requires nearest-neighbor connectivity, and the Sycamore design lays its qubits out in a square grid. All but the edge qubits have connections to their four neighbors (a small code sketch of this connectivity follows the figure below).

The Google qubit layout provides each internal qubit with connections to four of its neighbors.

Google
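
To make that connectivity concrete, here's a minimal sketch of a square-grid adjacency map. The 6×9 grid size is purely an assumption chosen to total 54 sites; the real Sycamore layout differs in detail.

```python
# Build nearest-neighbor connectivity for qubits laid out in a square
# grid, as described above. Grid dimensions here are illustrative.
rows, cols = 6, 9   # 54 sites; one was non-functional on Sycamore

neighbors = {}
for r in range(rows):
    for c in range(cols):
        adj = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                adj.append((nr, nc))
        neighbors[(r, c)] = adj

interior = sum(1 for adj in neighbors.values() if len(adj) == 4)
print(f"{interior} interior qubits with 4 neighbors; "
      f"{len(neighbors) - interior} edge qubits with fewer")
```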

But layout is not the only issue standing between Google and error-corrected qubits. Google hardware lead John Martinis said you also need two-qubit operations to have an error rate of about 0.1% before error correction is realistically possible. Right now, that figure is around 0.3%. The team is confident it can be brought down, but it's not there yet.

Another issue is wiring. Error correction requires multiple physical qubits to function as a single logical qubit, which means far more control wires for each logical qubit in use. And right now, the cabling is physically enormous compared to the chip itself. That will certainly need to change before a significant number of extra qubits can be added to the chips, and Google knows it. The wiring problem "is dull – it's not very exciting," Martinis said. "But it's so important that I've worked on it myself."

The chip's packaging is dominated by the wiring needed to carry signals in and out of the chip.

John Timmer

Error correction also requires a fundamental change in the control hardware and software. At present, controlling the chip generally involves sending it a series of operations and then reading out the results. But error correction requires more of a dialogue, with constant sampling of the qubit states and corrective commands issued as needed. For that to work, Jeffrey noted, you really need to reduce the latency.
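
The dialogue Jeffrey described is the standard error-correction feedback loop: repeatedly measure parity checks, decode them, and apply corrections. Here's a minimal sketch using a three-bit repetition code – far simpler than the surface code, and classical rather than quantum (a real device measures parities via ancilla qubits rather than reading data directly) – just to show the shape of the loop.

```python
# Toy control loop for a 3-bit repetition code protecting one logical
# bit against flips. The measure -> decode -> correct cycle is the
# part that mirrors what the control hardware must do, at low latency.
import random

bits = [1, 1, 1]          # logical "1" encoded across three bits

for cycle in range(5):
    # Noise: each bit flips with some probability.
    for i in range(3):
        if random.random() < 0.05:
            bits[i] ^= 1

    # "Syndrome" measurement: parities of neighboring pairs.
    s01 = bits[0] ^ bits[1]
    s12 = bits[1] ^ bits[2]

    # Decode: the syndrome pattern points at the flipped bit.
    if s01 and not s12:
        bits[0] ^= 1      # correct bit 0
    elif s01 and s12:
        bits[1] ^= 1      # correct bit 1
    elif s12 and not s01:
        bits[2] ^= 1      # correct bit 2

    print(f"cycle {cycle}: bits={bits}")
```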

Overall, the future of Google's hardware was perhaps best summed up by Kelly, who said, "A lot of things will have to change, and we're aware of that." Martinis said that the team, as it demonstrated when it moved on from the Bristlecone design, isn't afraid to scrap something that currently works: "We attend conferences and pay attention, and we're willing to pivot if we think we need to."
