
Is it actually quantum advantage? A no-hype checklist

Every month, someone claims quantum advantage. Here's a framework for evaluating those claims — fairly but sceptically.

Tags: evaluation, claims, benchmarks

“Quantum advantage” is the most overloaded term in the field. Sometimes it means “we solved a problem faster than any classical computer could.” Sometimes it means “we ran a circuit.” The gap between these two definitions is enormous.

Here’s a checklist for cutting through the noise.

1. What problem was actually solved?

Start here. Is it:

  • A real-world task — simulating a molecule, optimising a supply chain, breaking encryption?
  • A synthetic benchmark — sampling from a specific distribution, running random circuits?

Synthetic benchmarks can demonstrate that the hardware works, but they don’t prove the computer is useful. Google’s 2019 “quantum supremacy” experiment sampled from random circuits — a task with no known practical application.

The key question: Would anyone pay to solve this problem?

2. What’s being compared to what?

“Faster than classical” requires specifying:

  • Which classical hardware? A laptop? A supercomputer? A GPU cluster?
  • Which classical algorithm? The best known? A naive approach? A deliberately slow baseline?
  • What are the accuracy targets? Matching to 1%? To chemical accuracy (1.6 millihartree)? Exactly?

A quantum computer beating a deliberately weak classical algorithm isn’t quantum advantage — it’s a stacked comparison.

The gold standard: beating the best-known classical algorithm on the best available classical hardware, at the same accuracy target.

3. What’s included in the cost?

Quantum computations have hidden costs that don’t always show up in the headline:

On the quantum side:

  • Shots (thousands of runs to get statistical confidence)
  • Error mitigation (running extra circuits to correct for noise)
  • Compilation overhead (extra operations from routing on limited connectivity)
  • Calibration time (the machine needs regular tuning)

On the classical side:

  • Are they using the right hardware? (Some problems are GPU-native, and a CPU comparison isn’t fair)
  • Are they using the latest algorithms? (Classical methods improve too)

The fairness test: if you gave the classical side the same budget (time, money, engineering effort), could they catch up?
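To make the shot-count overhead concrete, here's a back-of-the-envelope sketch (my own illustration, not tied to any specific device): for a ±1-valued observable, the standard error of an estimated expectation value after N shots scales as √(variance/N), so hitting a target precision ε costs on the order of 1/ε² shots. That quadratic blow-up is part of the real quantum-side budget.

```python
import math

def shots_needed(epsilon: float, variance: float = 1.0) -> int:
    """Shots required to estimate an expectation value to standard
    error epsilon. For a ±1-valued observable the single-shot
    variance is at most 1, so the error after N shots is
    sqrt(variance / N); solving for N gives variance / epsilon**2.
    """
    return math.ceil(variance / epsilon**2)

# Halving the target error quadruples the shot count:
print(shots_needed(0.01))   # ~10,000 shots for 1% precision
print(shots_needed(0.005))  # ~40,000 shots for 0.5% precision
```

Error mitigation typically multiplies this further, since each mitigated estimate is built from several extra circuit variants.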

4. Does it scale?

One impressive result at one problem size is a demonstration, not an advantage. Ask:

  • What happens as the problem gets bigger? Does the quantum approach maintain its edge?
  • Does noise grow with problem size? In the NISQ era, bigger problems usually mean more errors.
  • Is the advantage polynomial or exponential? A 2× speedup today that stays 2× forever is much less exciting than a speedup that grows with problem size.

The strongest claims show scaling evidence — not just a single data point.
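One way to see why a constant-factor or low-polynomial speedup can underwhelm: with a large constant-factor overhead on the quantum side (entirely plausible once shots, mitigation, and compilation are counted), the crossover point where the quantum approach actually wins may sit far beyond any feasible problem size. A rough sketch, using made-up cost models of the form cost(n) = c·nᵏ — illustrative numbers only, not real device data:

```python
def crossover_size(c_classical, k_classical, c_quantum, k_quantum,
                   n_max=10**9):
    """Smallest problem size n (searched by doubling) at which the
    quantum cost model undercuts the classical one, assuming
    cost(n) = c * n**k on each side. Returns None if no crossover
    occurs below n_max. Illustrative only -- real scaling is messier.
    """
    n = 1
    while n <= n_max:
        if c_quantum * n**k_quantum < c_classical * n**k_classical:
            return n
        n *= 2  # doubling is enough for an order-of-magnitude answer
    return None

# A quadratic quantum algorithm carrying a 10^6x constant-factor
# overhead versus a cubic classical one: the advantage only shows
# up past n of about a million.
print(crossover_size(c_classical=1, k_classical=3,
                     c_quantum=1e6, k_quantum=2))  # 1048576
```

This is why a single data point proves little: everything depends on where you sit relative to the crossover, and on whether the claimed exponents survive at scale.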

5. Could the classical baseline improve?

This is the question that has overturned several “quantum advantage” claims. Google’s 2019 supremacy experiment estimated the classical simulation would take 10,000 years. Within months, IBM argued it could be done in a matter of days; later tensor-network simulations brought it down to hours.

Classical algorithms are not static. They improve. A quantum advantage claim is only as strong as the classical baseline it’s compared against — and that baseline keeps moving.

6. Is the advantage in something that matters to you?

“We beat a classical sampler on this specific distribution” is a true but narrow claim. It doesn’t mean “we can solve your optimisation problem faster.”

Force a precise statement: “Using device X, we solved task Y to accuracy A at cost C, compared to the best known classical approach Z at cost C’.”

If you can’t write that sentence from the announcement, the claim is doing too much work.

Quick scorecard

For any quantum advantage claim, score each of these five criteria yes or no:

  • Clear, useful problem
  • Strong classical baseline
  • Full cost accounting (both sides)
  • Scaling evidence
  • Robust statistics

Five yeses = pay close attention. Three or fewer = interesting research, but not a competitive advantage yet.
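The scorecard is mechanical enough to sketch in code. A minimal version follows; note that the handling of exactly four yeses is my own interpolation, since the post only specifies the five-yes and three-or-fewer cases.

```python
CRITERIA = [
    "Clear, useful problem",
    "Strong classical baseline",
    "Full cost accounting (both sides)",
    "Scaling evidence",
    "Robust statistics",
]

def verdict(answers: dict) -> str:
    """Apply the scorecard: five yeses deserve close attention;
    three or fewer mean the headline is ahead of the evidence.
    (The four-yes middle case is an interpolation.)"""
    yeses = sum(bool(answers.get(c, False)) for c in CRITERIA)
    if yeses == 5:
        return "pay close attention"
    if yeses <= 3:
        return "interesting research, not a competitive advantage yet"
    return "promising, but probe the missing criterion"

# Example: a random-circuit sampling result with a solid baseline
# and statistics, but no useful problem and no cost accounting.
claim = {
    "Clear, useful problem": False,
    "Strong classical baseline": True,
    "Full cost accounting (both sides)": False,
    "Scaling evidence": True,
    "Robust statistics": True,
}
print(verdict(claim))  # interesting research, not a competitive advantage yet
```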

The honest summary

  • “Quantum advantage” requires specifying the problem, the comparison, and the cost model
  • Synthetic benchmarks prove hardware works, not usefulness
  • Classical baselines keep improving — today’s advantage might disappear tomorrow
  • Scaling matters more than single data points
  • Force precise claims: what problem, what accuracy, what cost, compared to what?
  • Most “advantage” announcements fail two or three of these criteria — that doesn’t mean the research is bad, just that the headline is ahead of the evidence