Five ways to build a quantum computer (and nobody knows which one wins)
Superconducting, trapped ion, photonic, neutral atom, topological — five fundamentally different bets on the future of computing. Here's what each one does and why the race is still open.
There’s no single “quantum computer” technology. There are at least five fundamentally different approaches, each using different physics, different engineering, and different tradeoffs. It’s as if five groups were each trying to build the first airplane, but one was using propellers, one was using rockets, one was using balloons, one was using bird-like wings, and one was theorising about anti-gravity.
Nobody knows which one wins. Here’s how they work and why.
1. Superconducting qubits — the front runner (for now)
Who: IBM, Google, Rigetti
The idea: Cool a specially designed electrical circuit to a tiny fraction of a degree above absolute zero (−273°C). At that temperature, the circuit becomes superconducting — electricity flows with zero resistance — and it behaves like a single artificial atom that follows quantum rules.
Why it’s leading: Speed and manufacturing. Superconducting qubits perform operations in nanoseconds (billionths of a second), and the chips are built using the same kind of fabrication processes as regular computer chips. When you need to scale up, you can leverage decades of semiconductor manufacturing knowledge.
The catch: These qubits are fragile. They lose their quantum properties in about 100-300 microseconds — which sounds short because it is. That means every computation is a race against the clock. And each qubit can only talk directly to its nearest neighbours on the chip, so if your algorithm needs distant qubits to interact, you need extra operations (which introduce extra errors).
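To put that race against the clock in numbers, here's a back-of-the-envelope sketch using illustrative values from the ranges above (the exact figures vary by device):

```python
# How many gate operations fit inside one coherence window?
gate_time_s = 50e-9   # assumed: ~50 ns per operation (nanosecond-scale gates)
coherence_s = 200e-6  # assumed: ~200 us, middle of the 100-300 us range

ops_budget = coherence_s / gate_time_s
print(f"~{ops_budget:,.0f} operations before the qubit decoheres")
```

A budget of a few thousand operations is the ceiling that every algorithm, plus its error-correction overhead, has to fit under.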
Think of it as: A Formula 1 car. Incredibly fast, but finicky, expensive to maintain, and needs very specific conditions to run at all.
2. Trapped ions — slow but precise
Who: IonQ, Quantinuum
The idea: Take individual atoms, strip off an electron so they carry an electrical charge (making them ions), then hold them in place using electromagnetic fields. Manipulate them with precisely aimed laser beams.
Why it’s compelling: These are actual atoms — nature’s qubits. Every ion of the same element is identical (unlike manufactured circuits, which always have tiny variations). They hold their quantum state for seconds or even minutes (thousands of times longer than superconducting qubits). And any ion can interact with any other ion in the trap, giving you flexible connectivity.
The catch: Slow. Each operation takes microseconds, roughly 1,000 times longer than a superconducting gate. And scaling beyond about 50-100 ions in a single trap gets complicated — the ions start interfering with each other. The likely solution is connecting multiple traps together, but that's still being worked out.
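"Slow" here means wall-clock speed, not computational depth. A rough sketch, with assumed illustrative values drawn from the ranges in the text (not vendor specs), shows the longer coherence window more than compensates:

```python
# Ops that fit in one coherence window, with illustrative values
# drawn from the ranges quoted in the text (not vendor specs).
platforms = {
    #                  (gate time in s, coherence time in s)
    "superconducting": (50e-9, 200e-6),  # ns-scale gates, ~100-300 us coherence
    "trapped ion":     (50e-6, 10.0),    # us-scale gates, seconds of coherence
}

for name, (gate_s, coherence_s) in platforms.items():
    print(f"{name}: ~{coherence_s / gate_s:,.0f} operations per coherence window")
```

Per coherence window, the slower platform actually fits more operations; the tradeoff is that each of those operations takes far longer in wall-clock time.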
Think of it as: A Swiss watch. Beautifully precise, but you’re not going to mass-produce them on a semiconductor fab line.
3. Photonic qubits — room temperature, but tricky
Who: Xanadu, PsiQuantum
The idea: Use particles of light (photons) as qubits. Manipulate them with mirrors, beam splitters, and detectors — essentially very sophisticated optics.
Why it’s appealing: Photons don’t need to be cooled. They work at room temperature. They’re naturally suited for communication (fibre optic networks already carry photons). And optical components can be fast.
The catch: Photons don’t easily interact with each other. In normal physics, two beams of light pass right through each other. This makes two-qubit operations — which require qubits to interact — genuinely hard. Most photonic approaches use a workaround: measure some photons and use the measurement results to indirectly create the interaction. This works, but it’s probabilistic and resource-intensive.
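A sketch of why probabilistic gates are resource-hungry: if each gate succeeds independently with probability p, a circuit of n gates only succeeds when every gate does, so the overall success probability is p to the power n. The per-gate probability below is purely illustrative (real photonic schemes improve on this with heralding and multiplexing):

```python
# Each probabilistic gate succeeds independently with probability p;
# the whole circuit only succeeds if every gate does: p ** n.
p = 0.5  # illustrative per-gate success probability, not a measured figure

for n_gates in (10, 20, 50):
    p_circuit = p ** n_gates
    print(f"{n_gates} gates: success probability {p_circuit:.2e}, "
          f"~{1 / p_circuit:,.0f} attempts needed on average")
```

The exponential falloff is the point: without clever workarounds, even modest circuits would need absurd numbers of retries.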
Think of it as: Building a computer out of ping-pong balls bouncing off mirrors. The balls are fast and cheap, but making two of them interact reliably is an engineering nightmare.
4. Neutral atoms — flexible and scalable
Who: QuEra, Pasqal
The idea: Instead of ions, use neutral (uncharged) atoms, held in place by focused laser beams called “optical tweezers.” Arrange them in custom 2D or 3D patterns and use interactions between excited states (called Rydberg states) to perform quantum operations.
Why it’s exciting: You can arrange hundreds of atoms in whatever geometry your algorithm needs — grids, triangles, arbitrary graphs. The atoms have decent coherence times (seconds). And the technology is scaling quickly: systems with over 1,000 atoms have been demonstrated.
The catch: Gate quality is still catching up with superconducting and trapped ions. Atoms occasionally escape the tweezers or heat up during operations. And the approach is newer, so the software and error correction ecosystem is less mature.
Think of it as: Lego. You can build whatever shape you want, the pieces are interchangeable, and you can scale by buying more bricks. But the tolerances aren’t as tight as machined metal parts.
5. Topological qubits — the moonshot
Who: Microsoft (primarily)
The idea: Use exotic quantum states of matter called “anyons” — quasiparticles that only exist in certain carefully engineered materials. Operations are performed by braiding these anyons around each other, and the braiding pattern is inherently resistant to errors.
Why it could be revolutionary: If it works, the error correction problem mostly goes away. The computation is protected by the topology (the shape of the braids), not by adding thousands of extra qubits. This could dramatically reduce the overhead needed for reliable quantum computing.
The catch: Nobody has built one yet. Microsoft has been working on this for over 15 years and has published some promising results on the underlying materials, but a working topological qubit that performs actual computations hasn’t been demonstrated. The timeline is genuinely uncertain — it could be years or it could be decades.
Think of it as: Cold fusion — except the physics might actually work. The potential is enormous, but until someone demonstrates it, it’s a bet on the future, not a product.
So who’s winning?
That depends on what you mean by “winning”:
Most mature today: Superconducting (IBM, Google) and trapped ions (IonQ, Quantinuum). These platforms have the most qubits, the best software ecosystems, and the most demonstrated algorithms.
Best quality: Trapped ions lead on gate fidelity and coherence time. If your algorithm needs precision over speed, this is the platform.
Most scalable potential: Neutral atoms are scaling qubit counts fast. Photonic approaches could be very scalable if the two-qubit gate problem is solved.
Highest ceiling (if it works): Topological. But “if it works” is doing a lot of heavy lifting.
Why doesn’t one approach just win?
Because the tradeoffs are real:
- Fast but fragile (superconducting) vs slow but precise (trapped ions)
- Room temperature but hard interactions (photonic) vs cold but reliable (everything else)
- Theoretically perfect but unproven (topological) vs imperfect but available today (everything else)
The future might not be “one winner.” It might be hybrid systems — superconducting processors for fast computation, photonic links for communication between them, trapped ions for high-precision subroutines. Like how modern computing uses CPUs, GPUs, and network cards together, quantum computing might use multiple qubit technologies for different parts of the job.
What to watch
If you want to track which approach is pulling ahead, watch these numbers:
- Two-qubit gate fidelity — needs to reach 99.9%+ for practical error correction
- Logical qubit demonstrations — who can show a protected qubit that actually computes?
- Qubit count vs quality — more qubits with worse quality isn’t always progress
- Real algorithm results — who’s running useful computations, not just benchmarks?
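Why the 99.9% threshold in the first bullet matters: gate errors compound. Under a crude independence assumption, a circuit of n gates, each with fidelity f, retains roughly f to the power n of its overall fidelity:

```python
# Crude independence model: a circuit of n gates, each with fidelity f,
# keeps roughly f**n of its overall fidelity.
for f in (0.99, 0.999, 0.9999):
    row = ", ".join(f"{n} gates -> {f ** n:.3f}" for n in (100, 1000))
    print(f"per-gate fidelity {f}: {row}")
```

Roughly speaking, a tenfold improvement in per-gate error buys a tenfold deeper circuit before fidelity decays by the same amount. Error correction extends that further, but only once the physical gates clear approximately this bar.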
The race is genuinely open. Anyone who tells you they know who wins is selling something.