Shots, depth, and error: three numbers that decide if a quantum circuit actually works
Quantum computation isn't "run once, get answer." It's "run thousands of times, average the results, hope the noise doesn't drown the signal." Here's the cost model.
In classical computing, you run a program and get an answer. In quantum computing, you run a program and get a sample — one random draw from a probability distribution. To get a useful answer, you need many samples, the computation can’t be too long, and the errors can’t be too large.
These three factors — shots, depth, and error — form the practical cost model for quantum computing.
Shots: why you run the same circuit thousands of times
Remember that measurement gives you a random result, with probabilities set by the squared qubit amplitudes. Run the circuit once, you get one random outcome. To figure out what those probabilities actually are, you need to run the same circuit many times and look at the statistics.
How many is enough? That depends on the precision you need. The statistical uncertainty shrinks with the square root of the number of shots. Want to cut your uncertainty in half? Run 4× as many shots. Want 10× better precision? Run 100× more shots.
A typical quantum experiment might use 1,000 to 100,000 shots per circuit. Each shot means running the full computation from scratch — there’s no way to pause a quantum computer and take multiple measurements from the same state.
The practical implication: Even a short, simple circuit can be expensive if it needs very precise results. Quantum “runtime” isn’t just the circuit duration — it’s the circuit duration multiplied by the number of shots.
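The square-root scaling comes straight from binomial sampling statistics. A minimal sketch (the `shot_std_error` helper is illustrative, not a library function):

```python
import math

def shot_std_error(p, shots):
    # Standard error of an estimated outcome probability p
    # after a given number of shots (binomial statistics)
    return math.sqrt(p * (1 - p) / shots)

# Quadrupling the shots halves the uncertainty
print(round(shot_std_error(0.5, 1000) / shot_std_error(0.5, 4000), 2))  # → 2.0
```

The uncertainty is largest at p = 0.5, which is why worst-case shot budgets are usually quoted for a 50/50 outcome.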
Depth: the clock is always ticking
Depth is the number of sequential steps in your circuit — how many layers of operations happen one after another. It’s the quantum equivalent of how long your program takes to run.
Depth matters because qubits are decohering the entire time. A circuit with depth 10 is exposed to noise for much less time than a circuit with depth 1,000. And because errors accumulate with each step, deeper circuits are exponentially less reliable.
The practical limit varies by hardware:
- Superconducting qubits: roughly 100-1,000 operations before noise takes over
- Trapped ions: roughly 1,000-10,000 operations (better fidelity, longer coherence)
As we discussed in the compilation article, the compiled depth (after routing and gate decomposition) is often much larger than the theoretical depth. A circuit that looks 50 layers deep in the textbook might become 300 layers on real hardware.
Error: fidelity compounds badly
If each operation has a 1% chance of error, and your circuit has 100 operations, the probability of getting through without any error is roughly 0.99¹⁰⁰ ≈ 37%. At 200 operations, it drops to 13%. At 500 operations, 0.7%.
This exponential decay is the fundamental reason quantum computers can’t just “go deeper.” Every additional operation multiplies the chance of failure.
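The numbers above follow from multiplying per-gate success probabilities. A quick check, assuming independent errors at a uniform 1% rate:

```python
def survival_probability(error_rate, n_ops):
    # Chance that every operation in the circuit succeeds,
    # assuming independent errors at a uniform per-gate rate
    return (1 - error_rate) ** n_ops

for n in (100, 200, 500):
    print(n, round(survival_probability(0.01, n), 3))
# 100 → 0.366, 200 → 0.134, 500 → 0.007
```

Real devices are messier (errors are correlated and vary by gate), but the exponential trend holds.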
Two-qubit gates are the bottleneck. Single-qubit operations have error rates around 0.01-0.1%. Two-qubit gates are typically 0.1-1% — roughly 10× worse. Since useful algorithms require many two-qubit gates, the two-qubit error rate is usually the limiting factor.
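Splitting the fidelity budget by gate type makes the bottleneck visible. The gate counts below are illustrative assumptions, with error rates taken from the mid-range figures above:

```python
def circuit_fidelity(n_1q, e_1q, n_2q, e_2q):
    # Multiply per-gate success probabilities for each gate type
    return (1 - e_1q) ** n_1q * (1 - e_2q) ** n_2q

# Assumed circuit: 200 single-qubit gates at 0.05% error,
# 100 two-qubit gates at 0.5% error
f_1q_only = circuit_fidelity(200, 0.0005, 0, 0.0)    # ≈ 0.90
f_2q_only = circuit_fidelity(0, 0.0, 100, 0.005)     # ≈ 0.61
f_total = circuit_fidelity(200, 0.0005, 100, 0.005)  # ≈ 0.55
```

Half as many two-qubit gates cost more fidelity than all the single-qubit gates combined, which is why gate counts in papers are usually reported as "two-qubit gates" specifically.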
How they interact: the total cost
To get one useful result from a quantum computer, you typically need to:
- Compile the circuit (classical preprocessing)
- Run it many times (shots × circuit duration)
- Repeat across different parameter settings (if you’re optimising)
- Post-process the results (error mitigation, statistical analysis)
So “I ran a quantum algorithm” actually means: “I ran S copies of a compiled circuit with depth d, across K parameter settings, and the device was stable enough to interpret the statistics.”
The total quantum time is roughly: shots × circuit duration × parameter settings. For a typical experiment, this might be 10,000 shots × 100 microseconds × 50 settings = 50 seconds of quantum time. Not bad — but that’s for a small problem on good hardware.
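The back-of-envelope arithmetic, written out (assuming shots and parameter settings run serially, each executing the full circuit):

```python
def total_quantum_seconds(shots, circuit_seconds, settings):
    # Serial execution: every shot of every parameter setting
    # runs the full circuit from scratch
    return shots * circuit_seconds * settings

# The example from the text: 10,000 shots x 100 microseconds x 50 settings
t = total_quantum_seconds(10_000, 100e-6, 50)  # ≈ 50 seconds of quantum time
```

Note this counts only time on the quantum device; compilation, queueing, and classical post-processing come on top.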
What to ask when evaluating results
When you read a quantum computing paper or press release, ask:
- How many shots per data point? (Fewer = less reliable)
- What was the compiled circuit depth? (Not the abstract algorithm depth)
- How many two-qubit gates? (Usually the noise bottleneck)
- What error mitigation was used? (And how many extra circuits did it require?)
- Was the device stable enough? (Hardware calibration drifts over time)
If these numbers are missing, you can admire the plot but you can’t judge the feasibility.
The honest summary
- Quantum computation produces random samples, not deterministic answers — you need many shots
- Deeper circuits accumulate errors exponentially — depth is limited by hardware quality
- Two-qubit gate error rates are usually the bottleneck
- Total cost = shots × depth × parameter settings
- Always ask for compiled depth and shot count when evaluating claims
What’s next?
These three numbers — shots, depth, error — explain why different hardware platforms make different tradeoffs. Fast but noisy (superconducting) vs slow but precise (trapped ions) is really a question of which combination of shots, depth, and error works best for your specific problem.