Or did you mean a *useful* calculation? That will take longer. But hardly anyone worth listening to is claiming that’s been done yet.

*“Quantum computational supremacy, on the other hand, does now look to be a done deal, modulo tying up a few loose ends with the Sycamore experiment.”*

I would like to point to a question that keeps puzzling me regarding the conclusions that can be drawn from the (theoretically beautiful and technologically extremely impressive) “quantum computational supremacy” results published by the Google AI team.

There are six different circuit variants, {patch, elided, full} × {simplifiable, non-simplifiable}, and the “supremacy circuits” (about which the central claim of quantum computational supremacy is made) correspond to {full, non-simplifiable}.

Unfortunately, the paper does not provide any XEB data for the supremacy circuits themselves.

(If I understood correctly, Boaz Barak also raised this concern during the debate that took place in Jerusalem.)

Of course, given current knowledge of how to simulate RCS on classical computers, it is not feasible to compute XEB data for a large number of qubits (say n > 40): this is shown in Fig. S50 c) of the supplementary information. The same figure, however, also clearly indicates that XEB computations for the supremacy circuits are entirely feasible with up to about 30 qubits, even for depth m = 20.

This leads to at least two questions:

* Are there compelling reasons to believe that the XEB data for {full, simplifiable} and {full, non-simplifiable} circuits should match perfectly?

* Should we consider the absence of published XEB data for the supremacy circuits (even for n<30) as one of the loose ends that still needs to be tied up?
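For reference, the linear XEB score at issue is F = 2^n · ⟨p(x_i)⟩ − 1, averaged over the sampled bitstrings x_i, where p is the ideal output distribution of the circuit. Below is a toy sketch of the estimator (my own illustration, not code from the paper; synthetic exponentially distributed weights stand in for the actual circuit probabilities, which in practice must come from a classical simulation):

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear XEB fidelity estimate: F = 2^n * mean(p(x_i)) - 1, where
    p(x_i) is the ideal probability of each sampled bitstring x_i."""
    D = 2 ** n_qubits
    return D * ideal_probs[samples].mean() - 1

# Toy check on a synthetic Porter-Thomas-shaped distribution (n = 16):
# sampling from the ideal distribution itself gives F close to 1, while
# uniformly random bitstrings (a "spoofer" that knows nothing about the
# circuit) give F close to 0.
rng = np.random.default_rng(0)
n = 16
D = 2 ** n
probs = rng.exponential(1.0 / D, size=D)  # Porter-Thomas-shaped weights
probs /= probs.sum()

ideal_samples = rng.choice(D, size=200_000, p=probs)
uniform_samples = rng.integers(0, D, size=200_000)

F_ideal = linear_xeb(probs, ideal_samples, n)
F_uniform = linear_xeb(probs, uniform_samples, n)
print(F_ideal, F_uniform)  # roughly 1 and roughly 0
```

Sampling from the ideal distribution drives the score toward 1 and uniform guessing drives it toward 0; the experimental claim is that Sycamore’s samples land measurably above 0.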

*“But provided you agree that n sufficiently clean qubits should take ~2^n time to simulate classically, I’d advise you to move quickly on this!”*

Is it though?

I’m trying to come up with a simpler analogy to express this:

You have a magic computer that can add floating-point numbers with a trillion bits of precision at no extra cost, but whose final result is always rounded to two decimal places. (This ignores how those numbers are loaded into memory.)

Then you also have a “normal” computer that only adds up floats with 64-bit precision.

And we’re trying to come up with situations where the second machine can’t come close to matching the result of the first one in a similar run time.

It really depends on the data profile, i.e. how many numbers there are and how their magnitudes are distributed.

There are often strategies to limit errors sufficiently without requiring the second machine to emulate the “trillion-bit register” ability by brute force: for instance, sorting the data and adding up the smaller numbers first.

And of course there are scenarios where a trillion bits of precision just can’t be replaced.

Then the question is whether those scenarios occur naturally/often.
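To make the “add the smaller numbers first” strategy concrete, here is a toy sketch (my own, purely illustrative): one huge value drowns out many tiny ones under naive 64-bit summation, while either sorting by magnitude or compensated (Kahan–Babuška/Neumaier) summation recovers the exact answer.

```python
import math

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def neumaier_sum(xs):
    """Kahan-Babuska (Neumaier) compensated summation: whenever an
    addition rounds, recover the lost low-order bits in c."""
    s, c = 0.0, 0.0
    for x in xs:
        t = s + x
        c += (s - t) + x if abs(s) >= abs(x) else (x - t) + s
        s = t
    return s + c

data = [1e16] + [1.0] * 1000   # one huge value, then many tiny ones
exact = math.fsum(data)        # correctly rounded reference: 1e16 + 1000

print(naive_sum(data) - exact)                   # -1000.0: every +1.0 rounds away
print(naive_sum(sorted(data, key=abs)) - exact)  # 0.0: small numbers first
print(neumaier_sum(data) - exact)                # 0.0: compensation recovers the bits
```

(Python’s `math.fsum` is the “just do it exactly” option here; the point of the sketch is only that cheap tricks often close most of the precision gap.)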

But have we considered that there might be a hidden cost to it?

Has anyone else noticed that the total IQ of all the brains in the world seems to have gone down drastically these last few years?

Just as the number of qubits we entangle goes up!

Coincidence, or is there really no free lunch?

*“But provided you agree that n sufficiently clean qubits should take ~2^{n} time to simulate classically, I’d advise you to move quickly on this! Yes, both classical hardware and classical algorithms will surely improve, but we should expect the number of qubits and the circuit fidelity (for a given depth) to quickly improve as well from here on out.”*

Suppose we either ignore the exponent base, or find ways to improve Feynman’s random-path algorithm such that its exponent matches Sycamore’s: wouldn’t that prove that what Sycamore computed can be simulated on a classical computer, thereby refuting the quantum supremacy claim?

This sounds like a big deal in itself.

I can also see how one might explain what happened as k instances of Feynman’s random-path algorithm running in parallel, instead of resorting to the 2^53 amplitudes. It wouldn’t even sound like a conspiracy, as we already describe nature as running a sort of Feynman path integral. If each of those k instances is backed by something physical, then quantum computing is hopeless, and will only be able to scale as well as a parallel computer running those same k instances of Feynman paths.

This is exactly the reason why we don’t call Sycamore a “scalable quantum computer.” It’s also why all serious QC researchers agree that error-correction and fault-tolerance will ultimately be needed for scalability.

In the meantime, what can be done now is to achieve a circuit fidelity of ~0.002 with n=53 qubits and a depth of d=20. And while ~0.002 looks small, the key point for our purposes is that it can be distinguished from 0 using a number of samples that was feasible for Google to take (indeed, in about 3 minutes). And once the fidelity has been distinguished from 0, my claim is that, in some sense, the burden shifts to those who claim that 2^{53} ~ 9 quadrillion amplitudes were *not* harnessed to do a computation. Those people now have the burden of explaining: then how *was* the measured Linear XEB score obtained? What’s your alternate theory of how it happened, which doesn’t invoke quantum computation but also doesn’t invoke ~2^{53} classical computation steps? This is where (e.g.) my and Sam Gunn’s hardness result becomes relevant—by showing that, in some sense, there’s “nothing special” about the problem of spoofing Linear XEB. If there’s a fast classical algorithm to do *that*, then there’s also a fast classical algorithm to simply estimate final amplitudes in random quantum circuits, better than the trivial estimator.
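As a back-of-the-envelope check (my own arithmetic, not a figure from the paper): under the ideal output distribution, the per-sample linear XEB term 2^n·p(x) has a standard deviation of order 1, so the estimator’s standard error after N samples is roughly 1/sqrt(N). Resolving F ≈ 0.002 away from 0 at, say, five standard deviations then needs:

```python
F = 0.002       # claimed circuit fidelity for n = 53, depth 20
sigmas = 5.0    # significance at which to distinguish F from 0

# standard error ~ 1/sqrt(N)  =>  N ~ (sigmas / F)^2
N = (sigmas / F) ** 2
print(f"N ~ {N:.2e} samples")   # 6.25e+06, i.e. a few million
```

A few million samples at Sycamore’s repetition rate is minutes of data-taking, consistent with the ~3 minutes quoted above.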

Crucially, nothing in the above argument depended on Sycamore being “scalable” (which, indeed, no one claims that it is, at least not until you can do error-correction).

This raises the question of whether a k-path Feynman algorithm wouldn’t solve this with similar fidelity. As you said, the error decays exponentially with the number of gates, but so does Sycamore’s. It looks like there should exist a k for which the k-path Feynman approximation reaches the same expected fidelity as Sycamore.
