As quantum systems increase in complexity, accurately reconstructing quantum states becomes a fundamental challenge. Quantum state tomography (QST) provides a framework for reconstructing quantum states from experimental measurements. However, the computational resources required for QST scale exponentially with the number of qubits, since the Hilbert space dimension grows as N = 2^n, where n is the number of qubits and N the number of probability amplitudes. This exponential growth renders traditional approaches computationally prohibitive for systems beyond a few qubits. Recent advancements in machine learning have introduced neural network-based approaches to QST, such as variational autoencoders (VAEs) and restricted Boltzmann machines (RBMs). These methods leverage the representational power of neural networks to approximate high-dimensional quantum states efficiently. However, the energy-intensive nature of artificial neural networks (ANNs) poses scalability concerns, particularly as quantum systems grow larger.

Neuromorphic computing offers a biologically inspired paradigm for addressing these challenges. Its event-driven architecture allows for asynchronous computation with significantly lower energy consumption than traditional neural networks. An RBM trained on the BrainScaleS-2 (BS2) neuromorphic platform, however, exhibited low fidelity because of the limited number of available neurons and the 6-bit weight resolution of the BS2. The novelty of this thesis is the idea of splitting a variational autoencoder at the level of its encoder and decoder, so that the encoder runs outside the BS2 and can use a larger model. This raises the question of how a fully spiking variational autoencoder (FSVAE) can be effectively implemented on neuromorphic hardware to achieve high fidelity and scalability for quantum state tomography.

First, a quantum state is prepared using unitary operators and measured S times (shots) to create a dataset that links the qubit information to the FSVAE. This dataset consists of one-hot encoded vectors that represent the quantum state and are scaled by 4N. The FSVAE itself consists of an encoder that runs on the CPU and a decoder that runs on the BS2.

The FSVAE (encoder-decoder) was trained in a CPU-CPU configuration and in a CPU-BS2 configuration, and two experiments were conducted to compare the performance of the two. The CPU-CPU configuration could be trained for 3 to 7 qubits, and the CPU-BS2 configuration for 2 to 5 qubits, in the Greenberger–Horne–Zeilinger (GHZ) state. Comparing the two configurations, the MSE loss of the CPU-BS2 configuration did not converge as low as that of the CPU-CPU configuration. This discrepancy results from the BS2 using 6-bit weights while the CPU uses 32-bit weights. Despite this difference, the reconstruction fidelity of the FSVAE for 4 qubits improves by around 20% compared to the RBM architecture.
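As a rough illustration of the dataset-generation step, the sketch below (plain NumPy, all function names illustrative) prepares an n-qubit GHZ state and samples S computational-basis shots as one-hot vectors. It is a minimal simulation only; the thesis's full measurement scheme and the 4N scaling of the vectors are not reproduced here.

    import numpy as np

    def ghz_state(n):
        # GHZ state vector (|0...0> + |1...1>)/sqrt(2) for n qubits.
        N = 2 ** n
        psi = np.zeros(N)
        psi[0] = psi[-1] = 1 / np.sqrt(2)
        return psi

    def sample_shots(psi, shots, seed=None):
        # Simulate S projective measurements in the computational basis
        # and return one one-hot encoded outcome vector per shot.
        rng = np.random.default_rng(seed)
        probs = np.abs(psi) ** 2
        outcomes = rng.choice(len(psi), size=shots, p=probs)
        return np.eye(len(psi))[outcomes]

    # Example: n = 3 qubits, S = 1000 shots -> dataset of shape (1000, 8)
    data = sample_shots(ghz_state(3), shots=1000)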
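The encoder-decoder split could look roughly like the following non-spiking PyTorch stand-in. The actual FSVAE uses spiking neurons, and in the CPU-BS2 configuration the decoder is the part mapped onto the hardware; all module names and dimensions here are assumptions for illustration.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Runs on the CPU with 32-bit weights, so it can be larger
        # than the hardware-constrained decoder.
        def __init__(self, input_dim, hidden_dim, latent_dim):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.mu = nn.Linear(hidden_dim, latent_dim)
            self.logvar = nn.Linear(hidden_dim, latent_dim)

        def forward(self, x):
            h = self.hidden(x)
            return self.mu(h), self.logvar(h)

    class Decoder(nn.Module):
        # Small decoder; in the CPU-BS2 configuration this part is
        # mapped onto BrainScaleS-2, where weights are 6-bit.
        def __init__(self, latent_dim, output_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, output_dim), nn.Sigmoid())

        def forward(self, z):
            return self.net(z)

    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)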
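Two quantities from the results can be made concrete with a short sketch: the pure-state fidelity used to compare reconstructions (applicable here because GHZ states are pure) and a toy uniform quantizer mimicking 6-bit weights. The quantizer is an assumption for illustration; the BS2's actual weight discretization may differ.

    import numpy as np

    def fidelity(psi, phi):
        # Fidelity |<psi|phi>|^2 between two pure state vectors.
        return np.abs(np.vdot(psi, phi)) ** 2

    def quantize(w, bits=6):
        # Uniform signed quantization to the given bit width; assumes
        # w is a non-zero weight array.
        levels = 2 ** (bits - 1) - 1
        scale = np.max(np.abs(w)) / levels
        return np.round(w / scale) * scale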