Over the last few years, the big question in quantum computing has shifted from “can we get this to work?” to “can we get this to scale?” It’s no longer news when an algorithm is run on a small quantum computer—we’ve done that with a number of different technologies. The big question now: When can we run a useful problem on quantum hardware that clearly outperforms a traditional computer?
For that, we still need more qubits. And to consistently outperform classical computers on complicated problems, we’ll need enough qubits to do error correction. That means thousands of qubits. So while there’s currently a clear technology leader in qubit count (superconducting qubits called transmons), there’s still a chance that some other technology will end up scaling better.
That possibility is what makes three results being published today interesting. While there are differences among them, they all share one thing in common: high-quality qubits produced in silicon. After all, if there’s anything we know how to scale, it’s silicon-based technologies.
Quality issues
The idea of crafting qubits out of silicon has some history, and we’ve made some progress with the technology in the past. That’s because making qubits from silicon is relatively easy when using techniques developed for the semiconductor industry. For example, the intentional contamination called “doping” that’s used to tweak the properties of silicon could also be used to embed atoms that can act as qubits. Similarly, our ability to place wiring on silicon can be used to make structures that create quantum dots where an individual electron can be controlled.
The best part is that these approaches require very little space to implement, meaning we could potentially squeeze a lot of qubits onto a single silicon chip. That’s a big contrast to alternative technologies like transmons and trapped ions, both of which are bulky enough that the companies working with them are already talking about (or even implementing) spreading processors across multiple chips.
The problem so far has been that silicon-based qubits are rather error prone. Ultimately, we want to use groups of these individual qubits as a single logical qubit that implements error correction. But if errors occur faster than they can be corrected, this won’t be possible. And so far, silicon-based qubits are definitely on the wrong side of that error threshold.
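To get a feel for why the error threshold matters, here’s a rough sketch in Python of the standard rule of thumb for the surface code, the most commonly considered error-correction scheme. The 0.1 prefactor, the ~1 percent threshold value, and the exponent are approximations from the literature, not figures from the papers being discussed:

```python
def logical_error_rate(p, d, p_th=0.01, prefactor=0.1):
    """Rough surface-code rule of thumb: the logical error rate
    scales like (p / p_th) raised to roughly (d + 1) / 2, where
    p is the physical error rate and d is the code distance
    (bigger d means more physical qubits per logical qubit)."""
    return prefactor * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p = 0.5%), adding qubits suppresses errors:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.005, d))

# Above threshold (p = 2%), adding qubits makes things worse:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.02, d))
```

The point of the sketch: when physical qubits err more often than the threshold, piling on more of them amplifies the problem rather than correcting it, which is why silicon qubits sitting on the wrong side of the line was such an obstacle.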
High-quality dots
Two papers take a similar approach to improving the performance of qubits based on quantum dots. One is from a group of researchers based at the Delft University of Technology, and the other is primarily from Japan’s RIKEN, with some collaborators at Delft. Both groups used silicon with wiring on it to create a quantum dot that trapped a single electron. The spin of the trapped electron was used as the basis for the qubit. And both groups took a similar approach: testing their gates under a wide range of conditions to identify which conditions tended to produce errors, then operating the qubits in a way that avoided those errors.
In the work at Delft, the researchers entangled the two qubits by manipulating the quantum dots so that the wave functions of the trapped electrons overlapped. After optimizing the use of the hardware, they found that both the single-qubit and two-qubit gate operations have a fidelity of over 99.5 percent. That’s above the threshold needed for the most commonly considered form of quantum error correction to work.
To show that the qubits are actually useful, the researchers used their two-qubit setup to calculate the ground state energy of molecular hydrogen. This calculation is relatively easy to do on classical hardware, so the results could be checked.
The RIKEN group did something similar and generally found that speeding up operations had a major effect on error rates. Again, managing this problem produced gates with a fidelity of 99.5 percent, well above the threshold needed for error correction. To show that the gates worked, the team implemented a couple of quantum computing algorithms and showed that they were completed with a success rate in the area of 97 percent.
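Those two numbers, 99.5 percent per gate and roughly 97 percent per algorithm, are consistent with each other if you assume errors compound multiplicatively across gates. That’s a simplification (real errors don’t have to compound this way, and the papers report their own error analyses), but it’s a useful back-of-the-envelope check:

```python
def circuit_success(fidelity, n_gates):
    """If each gate independently succeeds with the given fidelity,
    a circuit of n_gates succeeds with probability fidelity**n_gates.
    This multiplicative model is an illustrative simplification."""
    return fidelity ** n_gates

# A short algorithm of half a dozen 99.5%-fidelity gates:
print(circuit_success(0.995, 6))  # ~0.97
```

Run the other way, the same arithmetic shows why fidelity matters so much for scaling: at 99.5 percent per gate, success probability drops below 50 percent somewhere around 140 gates, long before reaching the circuit depths useful algorithms need, which is exactly what error correction is meant to fix.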