Optimizing quantum error correction for superconducting qubit processors

Abstract

The theory of quantum mechanics describes many phenomena that may initially seem counter-intuitive and, in some cases, impossible, given the understanding of classical mechanics with which most of us are more intimately familiar. Following its initial introduction, there was a great deal of debate among scientists regarding the predictions made by this theory. The strange nature of quantum mechanics has led to many memorable quotes and the use of “spooky” to describe some of these predictions. Since then, quantum mechanics has been rigorously tested and has proven to be a remarkably successful theory. It has found many different applications and has led to devices and technologies we use daily.

Another potential application of quantum mechanics is quantum computation, an idea first put forward by Richard Feynman in 1982. Quantum computers have the potential to solve specific problems that are infeasible for even the most powerful (classical) supercomputers and have potential applications in many different areas, such as quantum chemistry, cryptography, and optimization. However, performing a quantum computation is challenging and requires overcoming the inherent fragility of quantum systems. Storing information in a quantum system requires it to be well isolated from the environment to avoid any unwanted interactions that can corrupt the stored data. At the same time, we need the ability to control this system, make it interact with other such systems, and ultimately measure it in order to perform an actual computation. This is a universal issue, and all of the systems developed so far to serve as quantum bits (qubits) have been plagued by noise. Each operation applied to a qubit, or even the act of leaving it idle for some time, generally leads to an error with non-negligible probability. The impact of this noise has so far prevented quantum computers from performing any practical computation. While substantial efforts have been made to reduce these physical error rates over the past several years, we are still far from the universal fault-tolerant quantum computers we ultimately strive for.

Fortunately, quantum error correction can help us reach the low error rates necessary for quantum computers to realize their potential applications. This is achieved by storing the quantum information in a logical qubit instead of a noisy physical one. When using a stabilizer code, which will be the focus of this dissertation, this logical information is distributed over many (noisy) physical qubits, referred to as data qubits. Another set of qubits, the so-called ancilla qubits, is used to perform indirect parity measurements, which do not destroy the stored information but reveal whether an error has occurred. A classical algorithm referred to as the decoder then interprets this information to identify which errors have happened and correct them. Increasing the number of physical qubits used to encode the logical qubit allows more physical errors to be detected and corrected. The number of correctable errors is captured by the distance of the code, defined as the minimum number of physical single-qubit errors that constitute a logical error.
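To make these notions concrete, the following is a minimal classical sketch (illustrative only, not taken from the dissertation) of a distance-three bit-flip repetition code: three data bits hold one logical bit, two parity checks play the role of the stabilizer measurements, and a lookup-table decoder corrects any single flip. The error model and all names are assumptions made for illustration.

```python
# Minimal classical sketch of a distance-3 bit-flip repetition code:
# three "data qubits" hold one logical bit, two parity checks reveal
# errors without reading the data directly, and a lookup-table
# "decoder" infers the single flip that explains each syndrome.
import random

def parities(data):
    """Syndrome: parity of each neighboring pair of data bits."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def decode(syndrome):
    """Map each syndrome to the single bit flip that explains it."""
    correction = {
        (0, 0): None,  # no error detected
        (1, 0): 0,     # flip on data bit 0
        (1, 1): 1,     # flip on data bit 1
        (0, 1): 2,     # flip on data bit 2
    }
    return correction[syndrome]

def run_round(p):
    """Encode logical 0, apply independent bit flips with probability p,
    correct, and report whether a logical error remains."""
    data = [0, 0, 0]
    for i in range(3):
        if random.random() < p:
            data[i] ^= 1
    flip = decode(parities(data))
    if flip is not None:
        data[flip] ^= 1
    return sum(data) >= 2  # logical error iff a majority of bits flipped

p = 0.05
trials = 100_000
failures = sum(run_round(p) for _ in range(trials))
print(f"physical p = {p}, logical error rate ~ {failures / trials:.4f}")
```

With the assumed physical error probability of 5%, the decoded logical error rate comes out around 3p², roughly 0.75%, illustrating how encoding suppresses errors once the physical error rate is below threshold.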
One of the critical properties of error correction is the ability to reduce the logical error rate by increasing the code distance, which requires the physical error rates to be below some threshold value. Valiant experimental efforts over the years have led to several recent experiments that implement various error-correcting codes and demonstrate the reduction of the error rates promised by error correction. In particular, these experiments (and the experiments leading up to them) identified several noise sources that had not been explored in sufficient detail and could significantly impact the logical performance of the code. In this dissertation, we explore the impact of the noise encountered in transmon-qubit devices on the performance of error-correcting codes, in particular the surface code. Transmon qubits are, in practice, multi-level systems, and only the lowest two energy levels are used for computation. Unfortunately, they are also weakly anharmonic, so the applied operations have some probability of exciting the qubit out of this computational subspace, referred to as a leakage error. We explore the impact of leakage in both simulations and experiments and develop schemes to mitigate it. We also consider other approaches to improve the logical performance or to reduce unwanted interactions.

In Chapter 2, we develop a realistic model of leakage induced by the two-qubit gates between flux-tunable transmon qubits. We show that leaked qubits effectively spread errors on their neighboring qubits, which are then detected by the parity measurements. We show that a hidden Markov model can detect the increased error rate due to leakage (a toy version of this idea is sketched after this paragraph). This enables us to post-select out runs during which any qubit has leaked, restoring the code performance. Unfortunately, post-selection is ultimately not scalable. Instead, it is desirable to have operations that return leaked qubits to the computational subspace. These operations are called leakage-reduction units and convert leakage into a regular error. In Chapter 3, we propose a leakage-reduction scheme that requires no overhead in the time needed to perform the parity measurements and no additional quantum hardware. For data qubits, we propose an operation that transfers the leakage to a dedicated readout resonator, where it can quickly decay. This operation is designed not to disturb the computational states, allowing it to be applied unconditionally. For the ancilla qubits, we use the fact that measurements can determine whether a qubit is in a leaked state, and we apply a conditional operation to return the qubit to the computational subspace whenever it is measured to be leaked. Using detailed density-matrix simulations, we show that this scheme can be readily implemented to remove qubit leakage from the system, mitigating its impact on the logical performance of the code. In Chapter 4, we realize the data-qubit leakage-reduction unit in an experiment and show that it can also be used to remove ancilla-qubit leakage, removing the need for fast conditional operations and for readout that distinguishes the leaked states. We show that these operations can remove most of the leaked population in about a hundred nanoseconds while having a negligible impact on the computational subspace. We also demonstrate that these operations decrease the number of errors observed by a two-qubit parity check, showing that the effect of leakage can be mitigated.
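As a hedged illustration of the post-selection strategy described for Chapter 2, the sketch below implements a two-state hidden Markov model: the hidden state of a qubit's neighborhood is either "computational" or "leaked", leakage raises the probability of a parity-check defect in each round, and the forward algorithm yields a leakage likelihood that can be thresholded to discard runs. All transition and emission probabilities are invented for illustration and are not values from the dissertation.

```python
# Toy hidden Markov model for leakage detection: two hidden states
# (computational, leaked); leakage elevates the per-round probability
# of observing a parity-check defect. The forward algorithm gives the
# posterior probability that leakage occurred, used for post-selection.
P_LEAK = 0.01        # per-round probability of leaking (assumed)
P_SEEP = 0.05        # per-round probability of returning (assumed)
P_DEFECT = {0: 0.02, # defect probability in the computational state
            1: 0.30} # elevated defect probability when leaked

def prob_leaked(defects):
    """Forward algorithm: posterior probability that the final hidden
    state is 'leaked', given the observed defect sequence."""
    alpha = [1.0 - P_LEAK, P_LEAK]  # (computational, leaked) prior
    for d in defects:
        emit = [P_DEFECT[0] if d else 1 - P_DEFECT[0],
                P_DEFECT[1] if d else 1 - P_DEFECT[1]]
        alpha = [
            emit[0] * (alpha[0] * (1 - P_LEAK) + alpha[1] * P_SEEP),
            emit[1] * (alpha[0] * P_LEAK + alpha[1] * (1 - P_SEEP)),
        ]
        norm = alpha[0] + alpha[1]
        alpha = [a / norm for a in alpha]  # normalize to a posterior
    return alpha[1]

# Post-select: discard a run if the model believes leakage was likely.
run = [0, 0, 1, 1, 1, 0, 1, 1]  # example defect record over 8 rounds
p_leaked = prob_leaked(run)
print(f"P(leaked) = {p_leaked:.3f}, keep run: {p_leaked < 0.5}")
```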
Chapter 5 considers an architecture employing two types of superconducting qubits: the transmon and the fluxonium. These qubits have very different frequencies, making it unclear whether they can interact with each other in the first place. We show that the interactions with the higher-excited states can be utilized to perform operations between them, and we propose two types of gates. In practice, qubit frequencies are targeted with only a certain precision in fabrication. In certain cases, this can lead to unwanted interactions between qubits that increase the physical error rates, referred to as frequency collisions. We show that the large detuning between these qubits reduces the incidence of frequency collisions, thereby increasing the expected fabrication yield.

In Chapter 6, we realize a distance-two surface code experiment and perform repeated parity measurements to detect and post-select errors, given that it is impossible to correct them when using such a small code. We implement a suite of logical operations for this code, including initialization, measurement, and several single-qubit gates. In the context of error detection, a logical operation is said to be fault-tolerant if the errors produced by each operation are detectable. We show that fault-tolerant variants of operations perform better than non-fault-tolerant ones. We also characterize the impact of various noise sources on the code performance. In Chapter 7, we look at another small-distance code, in this case the distance-seven repetition code. We show that increasing the distance only weakly suppresses the logical error rate of the code. We investigate the limiting factors behind the observed logical performance by analyzing the correlations between the observed parity measurements and by performing simulations using noise models parameterized by the measured physical error rates.

Chapter 8 considers a decoder that can perform the error inference more accurately. In particular, we implement a neural-network decoder and investigate how it performs on experimental data from surface code experiments. We show that the accuracy of this decoder approaches what can be achieved by an optimal but computationally inefficient tensor-network decoder. Transmon measurement produces analog outcomes, which are typically converted to binary ones, leading to some information loss. We show how a neural network can also use this analog information to further improve the achieved logical performance (a toy illustration of such soft readout is sketched at the end of this abstract).

We have investigated the impact of non-conventional errors in simulation and in several experiments, demonstrating the importance of characterizing and mitigating these errors. We expect the methods introduced in this dissertation to lead to lower logical error rates. In the short term, this can aid in demonstrations of the usefulness of error correction; in the long term, addressing such errors is important to ensure that logical error rates can be suppressed to sufficiently low levels. We finish this dissertation with a brief conclusion for each chapter. We also outline several potential challenges for future error-correction experiments, namely how to reduce the large qubit overhead needed for fault-tolerant computation and several error sources that might become limiting factors.
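Finally, as the toy illustration of the soft-readout idea from Chapter 8: a common approach (assumed here, not taken from the dissertation) is to model the analog readout signal for each qubit state as a Gaussian and convert each measured value into a posterior probability of the excited state, which a neural-network decoder could consume directly instead of a hard 0/1 bit. The means and widths below are illustrative, not measured values.

```python
# Soft readout sketch: a transmon measurement yields a continuous value
# (here a 1D projection of the IQ signal) that is usually thresholded
# to 0/1, discarding confidence information. Assuming Gaussian readout
# distributions, P(state = 1 | x) is a soft outcome a decoder can use.
import math

MU0, MU1, SIGMA = 0.0, 1.0, 0.35  # assumed Gaussian readout model

def gauss(x, mu, sigma):
    """Gaussian probability density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def soft_outcome(x, prior1=0.5):
    """Posterior probability of the |1> state given the analog value x."""
    p0 = (1 - prior1) * gauss(x, MU0, SIGMA)
    p1 = prior1 * gauss(x, MU1, SIGMA)
    return p1 / (p0 + p1)

for x in (0.1, 0.5, 0.9):
    hard = int(x > 0.5)  # conventional binary assignment at the midpoint
    print(f"x = {x:.1f}: hard bit = {hard}, soft P(1) = {soft_outcome(x):.3f}")
```

Note how a value near the decision boundary (x = 0.5) is maximally uncertain in the soft picture, while the hard assignment throws that uncertainty away; this is the information loss that a decoder consuming analog outcomes can exploit.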