Presentation
KLiNQ: Knowledge Distillation-Assisted Lightweight Neural Network for Qubit Readout on FPGA
Description
Superconducting qubits are among the most promising candidates for building quantum information processors. Yet, they are often limited by slow and error-prone qubit readout—a critical factor in achieving high-fidelity operations. While current methods, including deep neural networks, enhance readout accuracy, they typically lack support for the mid-circuit measurements essential for quantum error correction and usually rely on large, resource-intensive network models. This paper presents KLiNQ, a novel qubit readout architecture leveraging lightweight neural networks optimized via knowledge distillation. Our approach achieves around a 99% reduction in model size compared to the baseline while maintaining a qubit-state discrimination accuracy of 91%. By assigning a dedicated, compact neural network to each qubit, KLiNQ facilitates rapid, independent qubit-state readouts that enable mid-circuit measurements. Implemented on a Xilinx UltraScale+ FPGA, our design performs the discrimination in an average of 32 ns. The results demonstrate that compressed neural networks can maintain high-fidelity independent readout while enabling efficient hardware implementation, advancing practical quantum computing.
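The abstract does not give KLiNQ's exact training objective, but the standard knowledge-distillation setup it references trains a small "student" network to match the temperature-softened outputs of a large "teacher". A minimal sketch of that objective (all logit values hypothetical, not from the paper):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: larger T softens the distribution,
    # exposing the teacher's relative confidence across classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's; scaling by T*T keeps gradient magnitudes comparable
    # across temperatures (Hinton et al.'s convention).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# any mismatch yields a positive loss.
print(distillation_loss([2.0, -1.0], [2.0, -1.0]))  # → 0.0
print(distillation_loss([2.0, -1.0], [-1.0, 2.0]) > 0)  # → True
```

In a readout pipeline like the one described, the teacher would be the large baseline discriminator and each per-qubit compact network a separately distilled student small enough for FPGA deployment.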
Event Type
Research Manuscript
Time
Monday, June 23, 3:45pm - 4:00pm PDT
Location
3004, Level 3
Design
DES6: Quantum Computing