How to tell if a quantum computer is actually good (not just big)
'We have 1,000 qubits' means nothing without context. Here's a framework for evaluating quantum hardware claims — the numbers that matter, the ones that don't, and the red flags.
Every quantum computing company leads with qubit count. “We have 1,000 qubits!” “We hit 1,200!” The number keeps going up, and the press releases keep coming.
But qubit count alone is like judging a car by its horsepower. A 500-horsepower car with no brakes, bald tyres, and a leaky fuel tank isn’t going to win any races. Here’s how to evaluate what actually matters.
The six numbers that matter
1. Qubit count (necessary but not sufficient)
Yes, you need qubits. Bigger algorithms need more of them. But 1,000 qubits with 90% gate fidelity are less useful than 50 qubits with 99.9% fidelity.
What to ask: How many are usable? (Some qubits on a chip may be too noisy to include in computations.)
2. Gate fidelity (this is the big one)
Gate fidelity measures how accurately the hardware performs each operation. A fidelity of 99.5% means each operation has a 0.5% chance of error.
That sounds good until you chain operations together. A circuit with 200 operations at 99.5% fidelity has a (0.995)²⁰⁰ ≈ 37% chance of getting through without an error. At 99%, that drops to 13%. At 98%, it’s 2%.
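The compounding above is easy to check yourself. A minimal sketch, assuming independent errors per operation (real devices have correlated noise, so this is optimistic):

```python
# Probability that a circuit of `depth` operations completes without a
# single error, assuming each operation independently succeeds with
# probability `fidelity`.
def circuit_success_probability(fidelity: float, depth: int) -> float:
    return fidelity ** depth

for f in (0.995, 0.99, 0.98):
    p = circuit_success_probability(f, depth=200)
    print(f"{f:.3f} fidelity, 200 ops -> {p:.0%} chance of an error-free run")
```

Running this reproduces the figures in the text: roughly 37%, 13%, and 2%. Note how a seemingly small fidelity change swings the outcome, because the error compounds exponentially with depth.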
Two-qubit gate fidelity is the bottleneck. Single-qubit gates are typically 10× more accurate. Two-qubit gates are what limit circuit depth.
The magic number: the theoretical threshold for surface-code error correction is roughly 99% two-qubit gate fidelity, but near the threshold the overhead explodes — in practice, 99.9% is roughly where error correction becomes workable. Below threshold, adding error correction makes things worse, not better.
3. Coherence time
How long a qubit maintains its quantum state before noise destroys it. This sets the clock on your computation — if your algorithm takes 200 microseconds but your qubits decohere in 100, you’re done.
Platform ranges:
- Superconducting: 50-300 microseconds
- Trapped ions: seconds to minutes
- Neutral atoms: 1-10 seconds
The useful ratio: coherence time ÷ gate time. You want this to be as large as possible — it tells you how many operations you can fit before the clock runs out.
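The ratio is a one-liner to compute. The numbers below are representative, not measurements from any specific machine (superconducting gates are typically tens of nanoseconds; trapped-ion gates tens to hundreds of microseconds):

```python
# Rough operation budget: how many gate times fit inside one
# coherence time. Inputs use the units each platform is quoted in.
def operation_budget(coherence_us: float, gate_ns: float) -> float:
    return coherence_us * 1_000 / gate_ns

# Illustrative platform numbers (assumed, not vendor specs).
print(operation_budget(coherence_us=100, gate_ns=50))              # superconducting
print(operation_budget(coherence_us=10_000_000, gate_ns=100_000))  # trapped ion
```

This is why slow-but-stable platforms can still win on depth: trapped ions pay a large gate-time penalty, but their coherence budget is larger still.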
4. Connectivity
Which qubits can directly interact with each other. If two qubits aren’t neighbours, you need extra SWAP operations to move data around, which adds depth and errors.
- Best case: all-to-all (any qubit can interact with any other) — common in trapped-ion systems.
- Typical case: nearest-neighbour grid — common in superconducting systems.
As we discussed in the compilation article, poor connectivity can multiply your circuit depth by 3-5×.
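A back-of-the-envelope sketch of where that overhead comes from, assuming a linear chain of qubits and the common convention that one SWAP compiles to three CNOTs (real compilers route more cleverly, so treat this as an upper-bound intuition):

```python
# Extra two-qubit gates needed to make qubits at positions i and j
# adjacent on a linear chain: |i - j| - 1 SWAPs, each costing
# `cnots_per_swap` two-qubit gates.
def routing_overhead(i: int, j: int, cnots_per_swap: int = 3) -> int:
    swaps = max(abs(i - j) - 1, 0)
    return swaps * cnots_per_swap

print(routing_overhead(0, 1))  # neighbours: no extra gates
print(routing_overhead(0, 5))  # distance 5: 12 extra two-qubit gates
```

One logical two-qubit gate between distant qubits can thus cost a dozen physical ones — and every one of those carries the error rates from section 2.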
5. Readout fidelity
How accurately you can measure a qubit’s state at the end of the computation. If your readout is wrong 5% of the time, even a perfect computation gives you noisy results.
Typical values: 95-99.5%. Often worse than gate fidelity, and frequently overlooked.
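Readout error compounds with gate error, and it is paid once per measured qubit. A minimal sketch, again assuming independent errors:

```python
# End-to-end success: probability the circuit runs error-free AND
# every measured qubit is read out correctly.
def end_to_end_success(gate_fidelity: float, depth: int,
                       readout_fidelity: float, n_qubits: int) -> float:
    return gate_fidelity ** depth * readout_fidelity ** n_qubits

# 200 gates at 99.5% fidelity, then 20 qubits read out at 98% each.
print(f"{end_to_end_success(0.995, 200, 0.98, 20):.1%}")
```

The 37% circuit-survival probability from earlier drops to roughly 24% once a modest 2% readout error is applied across 20 qubits — which is why readout fidelity deserves more attention than it gets.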
6. Circuit depth (how deep before noise wins)
The practical limit on how long a computation can run. This is a function of gate fidelity, coherence time, and connectivity (which determines how many extra operations compilation adds).
Current limits:
- Superconducting: roughly 100-1,000 operations
- Trapped ions: roughly 1,000-10,000 operations
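You can estimate a depth limit by inverting the compounding formula: the depth at which success falls to some target is ln(target)/ln(fidelity). A sketch (the 50% cutoff is an arbitrary choice for illustration):

```python
import math

# Largest depth at which an error-free run is still more likely than
# `target`, given a per-operation fidelity.
def max_depth(fidelity: float, target: float = 0.5) -> int:
    return int(math.log(target) / math.log(fidelity))

for f in (0.995, 0.999, 0.9999):
    print(f"{f}: ~{max_depth(f)} operations before success drops below 50%")
```

Each extra "nine" of fidelity buys roughly 10× more depth — which is consistent with the superconducting vs. trapped-ion ranges above.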
The composite metrics
Quantum Volume
IBM invented Quantum Volume as a single number that captures the tradeoffs between qubit count, fidelity, connectivity, and depth. It runs standardised random circuits and measures how large a useful computation the system can handle.
Pros: one number to compare systems; captures real tradeoffs.
Cons: can be gamed (optimise for the benchmark, not real work); doesn’t reflect how well your specific algorithm runs.
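Quantum Volume is defined as 2^n for the largest n at which square circuits (n qubits, depth n) pass IBM's heavy-output test with probability above 2/3. A crude estimator under strong simplifying assumptions — each layer modelled as ~n/2 two-qubit gates, all other error sources ignored — just to show how fidelity caps QV:

```python
# Very rough QV estimate: find the largest n where a square random
# circuit (~n/2 two-qubit gates per layer, depth n) survives with
# probability above the 2/3 heavy-output bar. Illustrative only.
def estimated_log2_qv(two_qubit_fidelity: float, max_n: int = 50) -> int:
    best = 0
    for n in range(2, max_n + 1):
        gates = n * (n // 2)  # ~n/2 two-qubit gates per layer, n layers
        if two_qubit_fidelity ** gates > 2 / 3:
            best = n
    return best

print(2 ** estimated_log2_qv(0.999))  # rough ceiling at 99.9% fidelity
```

Even this toy model shows why QV grows so slowly: the gate count in a square circuit scales as n², so each additional qubit of QV demands disproportionately better fidelity.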
Algorithm-specific benchmarks
The most useful benchmarks run actual algorithms: simulate a specific molecule, optimise a specific graph, factor a specific number. This is what users care about — not abstract metrics, but “can your machine solve my problem?”
The best question: Can it run the algorithm I care about with acceptable error rates?
Everything else is a proxy.
Red flags in quantum announcements
🚩 “We have X qubits” (no other details)
Without gate fidelity, coherence, and connectivity, qubit count is meaningless.
🚩 “Achieved quantum advantage” (on a synthetic benchmark)
Ask: was the problem useful? Can classical algorithms catch up? Is the comparison fair?
🚩 “Ready for practical applications”
Ask: which application? What error rate does it need? How does it compare to the best classical approach?
🚩 “Fault-tolerant quantum computer”
Ask: how many logical qubits? At what error rate? Can it run continuous error correction?
A quick checklist for evaluating announcements
- Qubit count — how many, and how many are usable?
- Two-qubit gate fidelity — above or below 99.9%?
- Coherence time — relative to gate time?
- Connectivity — how many extra operations does routing add?
- Any real algorithm results — or just synthetic benchmarks?
- Physical or logical qubits? — this distinction is everything
If the press release only gives you qubit count, be sceptical. If it gives you all six numbers, pay attention — they’re probably confident in their hardware.
The honest summary
- Qubit count is one of six important dimensions, not the only one
- Gate fidelity (especially two-qubit) is usually the most important factor
- Coherence time sets the computation clock
- Connectivity determines compilation overhead
- The best metric is “can it run my algorithm?” — everything else is a proxy
- Be sceptical of announcements that lead with qubit count and hide everything else